USDA-ARS's Scientific Manuscript database
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...
ERIC Educational Resources Information Center
Freudenthal, Daniel; Pine, Julian; Gobet, Fernando
2010-01-01
In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors…
Systems analysis of the single photon response in invertebrate photoreceptors.
Pumir, Alain; Graves, Jennifer; Ranganathan, Rama; Shraiman, Boris I
2008-07-29
Photoreceptors of the Drosophila compound eye employ a G protein-mediated signaling pathway that transduces single photons into transient electrical responses called "quantum bumps" (QB). Although most of the molecular components of this pathway are already known, the system-level understanding of the mechanism of QB generation has remained elusive. Here, we present a quantitative model explaining how QBs emerge from stochastic nonlinear dynamics of the signaling cascade. The model shows that the cascade acts as an "integrate and fire" device and explains how photoreceptors achieve reliable responses to light while keeping a low background in the dark. The model predicts the nontrivial behavior of mutants that enhance or suppress signaling and explains the dependence on external calcium, which controls feedback regulation. The results provide insight into physiological questions such as single-photon response efficiency and the adaptation of response to high incident-light level. The system-level analysis enabled by modeling phototransduction provides a foundation for understanding G protein signaling pathways less amenable to quantitative approaches.
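To make the "integrate and fire" picture concrete, here is a minimal Python sketch, not the authors' model: stochastic activation events accumulate into a decaying activity variable, and a bump is declared when a threshold is crossed. All rates, the threshold, and the shut-off constant are assumed values chosen only to illustrate reliable single-photon responses against a quiet dark background.

```python
import numpy as np

rng = np.random.default_rng(1)

def integrate_and_fire_trial(photon=True, dt=1e-3, t_max=0.5,
                             gain=60.0, dark_rate=2.0,
                             decay=20.0, threshold=4.0):
    """One toy trial: downstream activation events arrive as a Poisson process
    whose rate is boosted (gain) while the photoactivated rhodopsin is active;
    activity x integrates these events with first-order decay, and a
    'quantum bump' fires when x crosses the threshold.
    All parameter values are illustrative assumptions."""
    m_active = photon              # photoactivated rhodopsin present?
    x = 0.0
    for step in range(int(t_max / dt)):
        rate = dark_rate + (gain if m_active else 0.0)
        x += rng.poisson(rate * dt) - decay * x * dt
        if m_active and rng.random() < 2.0 * dt:   # stochastic rhodopsin shut-off
            m_active = False
        if x >= threshold:
            return step * dt       # bump latency in seconds
    return None                    # no bump fired

# Reliable single-photon responses with a quiet dark background:
photon_hits = sum(integrate_and_fire_trial(True) is not None for _ in range(500))
dark_hits = sum(integrate_and_fire_trial(False) is not None for _ in range(500))
print(f"bump probability: photon {photon_hits / 500:.2f}, dark {dark_hits / 500:.2f}")
```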
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
Mallon, Dermot H; Bradley, J Andrew; Winn, Peter J; Taylor, Craig J; Kosmoliaptsis, Vasilis
2015-02-01
We have previously shown that qualitative assessment of surface electrostatic potential of HLA class I molecules helps explain serological patterns of alloantibody binding. We have now used a novel computational approach to quantitate differences in surface electrostatic potential of HLA B-cell epitopes and applied this to explain HLA Bw4 and Bw6 antigenicity. Protein structure models of HLA class I alleles expressing either the Bw4 or Bw6 epitope (defined by sequence motifs at positions 77 to 83) were generated using comparative structure prediction. The electrostatic potential in 3-dimensional space encompassing the Bw4/Bw6 epitope was computed by solving the Poisson-Boltzmann equation and quantitatively compared in a pairwise, all-versus-all fashion to produce distance matrices that cluster epitopes with similar electrostatics properties. Quantitative comparison of surface electrostatic potential at the carboxyl terminal of the α1-helix of HLA class I alleles, corresponding to amino acid sequence motif 77 to 83, produced clustering of HLA molecules in 3 principal groups according to Bw4 or Bw6 epitope expression. Remarkably, quantitative differences in electrostatic potential reflected known patterns of serological reactivity better than Bw4/Bw6 amino acid sequence motifs. Quantitative assessment of epitope electrostatic potential allowed the impact of known amino acid substitutions (HLA-B*07:02 R79G, R82L, G83R) that are critical for antibody binding to be predicted. We describe a novel approach for quantitating differences in HLA B-cell epitope electrostatic potential. Proof of principle is provided that this approach enables better assessment of HLA epitope antigenicity than amino acid sequence data alone, and it may allow prediction of HLA immunogenicity.
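The core computation described above (pairwise, all-versus-all comparison of potential maps followed by clustering) can be sketched as follows. The allele names and potential values are random placeholders, not Poisson-Boltzmann output; the sketch only shows the distance-matrix and clustering step, under the assumption that each allele's potential has been sampled on a common 3-D grid around residues 77 to 83.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical input: the electrostatic potential of each allele's model
# structure, sampled on a common 3-D grid enclosing the Bw4/Bw6 epitope
# region (residue positions 77-83). Random numbers stand in for real values.
rng = np.random.default_rng(0)
alleles = ["Bw4_allele_1", "Bw4_allele_2", "Bw6_allele_1", "Bw6_allele_2"]
potentials = rng.normal(size=(len(alleles), 500))

# Pairwise, all-versus-all comparison of the potential maps -> distance matrix,
# then clustering of epitopes with similar electrostatic properties.
condensed = pdist(potentials, metric="euclidean")
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
for name, lab in zip(alleles, labels):
    print(name, "-> cluster", lab)
```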
Lorenz, Kim; Cohen, Barak A.
2012-01-01
Quantitative trait loci (QTL) with small effects on phenotypic variation can be difficult to detect and analyze. Because of this a large fraction of the genetic architecture of many complex traits is not well understood. Here we use sporulation efficiency in Saccharomyces cerevisiae as a model complex trait to identify and study small-effect QTL. In crosses where the large-effect quantitative trait nucleotides (QTN) have been genetically fixed we identify small-effect QTL that explain approximately half of the remaining variation not explained by the major effects. We find that small-effect QTL are often physically linked to large-effect QTL and that there are extensive genetic interactions between small- and large-effect QTL. A more complete understanding of quantitative traits will require a better understanding of the numbers, effect sizes, and genetic interactions of small-effect QTL. PMID:22942125
Hierarchical and coupling model of factors influencing vessel traffic flow.
Liu, Zhao; Liu, Jingxian; Li, Huanhuan; Li, Zongzhi; Tan, Zhirong; Liu, Ryan Wen; Liu, Yi
2017-01-01
Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, the data obtained from Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in waterway transportation system.
Quantitative Testing of Bedrock Incision Models, Clearwater River, WA
NASA Astrophysics Data System (ADS)
Tomkin, J. H.; Brandon, M.; Pazzaglia, F.; Barbour, J.; Willet, S.
2001-12-01
The topographic evolution of many active orogens is dominated by the process of bedrock channel incision. Several incision models based around the detachment-limited shear-stress model (or stream power model), which employs an area (A) and slope (S) power law (E = K S^n A^m), have been proposed to explain this process. They require quantitative assessment. We evaluate the proposed incision models by comparing their predictions with observations obtained from a river in a tectonically active mountain range: the Clearwater River in northwestern Washington State. Previous work on river terraces along the Clearwater has provided long-term incision rates for the river, and in conjunction with previous fission track studies it has also been determined that there is a long-term balance between river incision and rock uplift. This steady-state incision rate data allows us, through the use of inversion methods and statistical tests, to determine the applicability of the different incision models for the Clearwater. None of the models successfully explain the observations. This conclusion particularly applies to the commonly used detachment-limited shear-stress model (or stream power model), which has a physically implausible best-fit solution and systematic residuals for all the predicted combinations of m and n.
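The stream power law is linear in logarithms, so a basic calibration amounts to a log-linear least-squares fit. The sketch below uses made-up incision, area, and slope values only to show the form of the fit and how systematic residuals would be inspected; it is not the authors' inversion.

```python
import numpy as np

# Hypothetical along-stream data: incision rate E (mm/yr), drainage area A (km^2),
# channel slope S (dimensionless), e.g. as might come from dated terraces.
E = np.array([0.10, 0.15, 0.22, 0.30, 0.41])
A = np.array([120.0, 250.0, 480.0, 900.0, 1500.0])
S = np.array([0.020, 0.015, 0.011, 0.008, 0.006])

# Detachment-limited stream power law: E = K * S**n * A**m.
# Taking logs gives a linear model: ln E = ln K + n ln S + m ln A.
X = np.column_stack([np.ones_like(E), np.log(S), np.log(A)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
lnK, n, m = coef
print(f"K = {np.exp(lnK):.3e}, n = {n:.2f}, m = {m:.2f}")

# Systematic structure in the residuals is the kind of misfit reported above.
residuals = np.log(E) - X @ coef
print("log residuals:", np.round(residuals, 3))
```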
Binaural signal detection - Equalization and cancellation theory.
NASA Technical Reports Server (NTRS)
Durlach, N. I.
1972-01-01
The improvement in masked-signal detection afforded by two ears (i.e., binaural unmasking) is explained on the basis of a descriptive model of the processing of binaural stimuli by a system consisting of two bandpass filters, an equalization and cancellation mechanism, and a decision device. The main ideas of the model are initially explained, and a general equation is derived for the purpose of making quantitative predictions. Comparisons are then made between various special cases of this equation and experimental data. Failures of the preliminary model in predicting the data are considered, and possible revisions are discussed.
Reverse engineering systems models of regulation: discovery, prediction and mechanisms.
Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S
2012-08-01
Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.
Transdisciplinary application of the cross-scale resilience model
Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.
2014-01-01
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.
ERIC Educational Resources Information Center
Reike, Dennis; Schwarz, Wolf
2016-01-01
The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
Lenné, Thomas; Garvey, Christopher J; Koster, Karen L; Bryant, Gary
2009-02-26
We present an X-ray scattering study of the effects of dehydration on the bilayer and chain-chain repeat spacings of dipalmitoylphosphatidylcholine bilayers in the presence of sugars. The presence of sugars has no effect on the average spacing between the phospholipid chains in either the fluid or gel phase. Using this finding, we establish that for low sugar concentrations only a small amount of sugar exclusion occurs. Under these conditions, the effects of sugars on the membrane transition temperatures can be explained quantitatively by the reduction in hydration repulsion between bilayers due to the presence of the sugars. Specific bonding of sugars to lipid headgroups is not required to explain this effect.
A Didactic Experiment and Model of a Flat-Plate Solar Collector
ERIC Educational Resources Information Center
Gallitto, Aurelio Agliolo; Fiordilino, Emilio
2011-01-01
We report on an experiment performed with a home-made flat-plate solar collector, carried out together with high-school students. To explain the experimental results, we propose a model that describes the heating process of the solar collector. The model accounts quantitatively for the experimental data. We suggest that solar-energy topics should…
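A lumped energy-balance model is the simplest way to describe the heating process mentioned above. The following sketch is illustrative only (it is not necessarily the authors' model, and all parameter values are assumptions): absorbed irradiance heats the plate while losses grow with the temperature excess over ambient, giving an exponential approach to a stagnation temperature.

```python
import numpy as np

def collector_temperature(t_seconds, G=800.0, eta0=0.7, U=8.0,
                          area=1.0, C=5000.0, T_amb=25.0, T0=25.0):
    """Lumped energy balance for a flat-plate collector (a sketch, not the
    authors' model): C dT/dt = eta0*G*A - U*A*(T - T_amb).
    G: irradiance [W/m^2], eta0: optical efficiency, U: loss coefficient
    [W/m^2/K], C: heat capacity [J/K]. All values are illustrative."""
    tau = C / (U * area)                      # thermal time constant [s]
    T_stag = T_amb + eta0 * G / U             # stagnation (steady-state) temperature
    return T_stag + (T0 - T_stag) * np.exp(-np.asarray(t_seconds) / tau)

print(collector_temperature([0, 300, 600, 1800]))   # warm-up curve [deg C]
```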
Nargotra, Amit; Sharma, Sujata; Koul, Jawahir Lal; Sangwan, Pyare Lal; Khan, Inshad Ali; Kumar, Ashwani; Taneja, Subhash Chander; Koul, Surrinder
2009-10-01
Quantitative structure activity relationship (QSAR) analysis of piperine analogs as inhibitors of efflux pump NorA from Staphylococcus aureus has been performed in order to obtain a highly accurate model enabling prediction of inhibition of S. aureus NorA of new chemical entities from natural sources as well as synthetic ones. Algorithm based on genetic function approximation method of variable selection in Cerius2 was used to generate the model. Among several types of descriptors viz., topological, spatial, thermodynamic, information content and E-state indices that were considered in generating the QSAR model, three descriptors such as partial negative surface area of the compounds, area of the molecular shadow in the XZ plane and heat of formation of the molecules resulted in a statistically significant model with r(2)=0.962 and cross-validation parameter q(2)=0.917. The validation of the QSAR models was done by cross-validation, leave-25%-out and external test set prediction. The theoretical approach indicates that the increase in the exposed partial negative surface area increases the inhibitory activity of the compound against NorA whereas the area of the molecular shadow in the XZ plane is inversely proportional to the inhibitory activity. This model also explains the relationship of the heat of formation of the compound with the inhibitory activity. The model is not only able to predict the activity of new compounds but also explains the important regions in the molecules in quantitative manner.
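The statistics quoted above (r^2 from the fit, q^2 from leave-one-out cross-validation) can be reproduced in miniature with ordinary least squares on a three-descriptor matrix. The descriptor values and activities below are synthetic placeholders, and the fitting routine is a generic sketch rather than the genetic function approximation workflow used in the study.

```python
import numpy as np

def fit_qsar(X, y):
    """Ordinary least squares with an intercept; X holds the descriptor columns
    (e.g. partial negative surface area, XZ shadow area, heat of formation),
    y the observed inhibitory activities."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def r2(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2, the statistic quoted in the abstract."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef = fit_qsar(X[mask], y[mask])
        preds[i] = coef[0] + X[i] @ coef[1:]
    return r2(y, preds)

# Hypothetical descriptor matrix (3 descriptors) and activities for 10 analogs.
rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 10)
coef = fit_qsar(X, y)
print("r2 =", round(r2(y, np.column_stack([np.ones(10), X]) @ coef), 3),
      "q2 =", round(loo_q2(X, y), 3))
```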
Nakada, Tomohisa; Kudo, Toshiyuki; Kume, Toshiyuki; Kusuhara, Hiroyuki; Ito, Kiyomi
2018-02-01
Serum creatinine (SCr) levels rise during trimethoprim therapy for infectious diseases. This study aimed to investigate whether the elevation of SCr can be quantitatively explained using a physiologically-based pharmacokinetic (PBPK) model incorporating inhibition by trimethoprim on tubular secretion of creatinine via renal transporters such as organic cation transporter 2 (OCT2), OCT3, multidrug and toxin extrusion protein 1 (MATE1), and MATE2-K. Firstly, pharmacokinetic parameters in the PBPK model of trimethoprim were determined to reproduce the blood concentration profile after a single intravenous and oral administration of trimethoprim in healthy subjects. The model was verified with datasets of both cumulative urinary excretions after a single administration and the blood concentration profile after repeated oral administration. The pharmacokinetic model of creatinine consisted of the creatinine synthesis rate, distribution volume, and creatinine clearance (CL cre ), including tubular secretion via each transporter. When combining the models for trimethoprim and creatinine, the predicted increments in SCr from baseline were 29.0%, 39.5%, and 25.8% at trimethoprim dosages of 5 mg/kg (b.i.d.), 5 mg/kg (q.i.d.), and 200 mg (b.i.d.), respectively, which were comparable with the observed values. The present model analysis enabled us to quantitatively explain increments in SCr during trimethoprim treatment by its inhibition of renal transporters. Copyright © 2017 The Japanese Society for the Study of Xenobiotics. Published by Elsevier Ltd. All rights reserved.
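The mechanism can be illustrated with a back-of-the-envelope calculation, which is far simpler than the published PBPK model: competitive inhibition reduces the transporter-mediated secretion component of creatinine clearance, and at steady state SCr scales inversely with total clearance. All numbers below are illustrative assumptions, not fitted parameters from the study.

```python
def scr_increase(cl_filtration=100.0, cl_secretion=40.0,
                 inhibitor_conc=5.0, ki=10.0):
    """Percent rise in steady-state serum creatinine when tubular secretion
    (OCT/MATE-mediated) is competitively inhibited.
    cl_* in mL/min; inhibitor_conc and ki in consistent units.
    All values are illustrative assumptions, not model parameters."""
    cl_baseline = cl_filtration + cl_secretion
    cl_inhibited = cl_filtration + cl_secretion / (1.0 + inhibitor_conc / ki)
    # At steady state SCr = synthesis_rate / CLcre, so the SCr ratio = CL ratio.
    return 100.0 * (cl_baseline / cl_inhibited - 1.0)

print(f"predicted SCr increase: {scr_increase():.1f}%")
```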
Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales
Zhang, Yonghe
2010-01-01
Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. The orbital hybrid IC model scale, IC, and the IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanical built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with the data of bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table. PMID:21151444
NASA Technical Reports Server (NTRS)
Weaver, David
2008-01-01
Effectively communicate qualitative and quantitative information orally and in writing. Explain the application of fundamental physical principles to various physical phenomena. Apply appropriate problem-solving techniques to practical and meaningful problems using graphical, mathematical, and written modeling tools. Work effectively in collaborative groups.
Moloney, Niamh; Beales, Darren; Azoory, Roxanne; Hübscher, Markus; Waller, Robert; Gibbons, Rebekah; Rebbeck, Trudy
2018-06-14
Pain sensitivity and psychosocial issues are prognostic of poor outcome in acute neck disorders. However, knowledge of associations between pain sensitivity and ongoing pain and disability in chronic neck pain are lacking. We aimed to investigate associations of pain sensitivity with pain and disability at the 12-month follow-up in people with chronic neck pain. The predictor variables were: clinical and quantitative sensory testing (cold, pressure); neural tissue sensitivity; neuropathic symptoms; comorbidities; sleep; psychological distress; pain catastrophizing; pain intensity (for the model explaining disability at 12 months only); and disability (for the model explaining pain at 12 months only). Data were analysed using uni- and multivariate regression models to assess associations with pain and disability at the 12-month follow-up (n = 64 at baseline, n = 51 at follow-up). Univariable associations between all predictor variables and pain and disability were evident (r > 0.3; p < 0.05), except for cold and pressure pain thresholds and cold sensitivity. For disability at the 12-month follow-up, 24.0% of the variance was explained by psychological distress and comorbidities. For pain at 12 months, 39.8% of the variance was explained primarily by baseline disability. Neither clinical nor quantitative measures of pain sensitivity were meaningfully associated with long-term patient-reported outcomes in people with chronic neck pain, limiting their clinical application in evaluating prognosis. Copyright © 2018 John Wiley & Sons, Ltd.
Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J
2009-01-01
Background The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODE). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time-courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one. Conclusion The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
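The transformation described above can be illustrated in a few lines: a Boolean update rule is interpolated multilinearly over the corners of the unit cube and, optionally, the continuous inputs are passed through Hill functions before being fed into the ODE right-hand side. This is a generic sketch in the spirit of the interpolation the authors describe; the rule, Hill parameters, and time constant are arbitrary examples.

```python
import numpy as np
from itertools import product

def multilinear(boolean_fn, n_inputs):
    """Continuous homologue of a Boolean update rule via multilinear
    interpolation over the corners of the unit cube."""
    corners = list(product([0, 1], repeat=n_inputs))
    def f(x):
        x = np.asarray(x, dtype=float)
        total = 0.0
        for c in corners:
            weight = np.prod([xi if ci else 1.0 - xi for xi, ci in zip(x, c)])
            total += boolean_fn(*c) * weight
        return total
    return f

def hill(x, k=0.5, n=4):
    """Optional sigmoidal normalization of an input."""
    return x**n / (x**n + k**n)

# Example rule: target is ON iff (A AND NOT B).
rule = lambda a, b: int(a and not b)
rule_cont = multilinear(rule, 2)

def dxdt(x_target, a, b, tau=1.0):
    """ODE homologue: dx/dt = (B_cont(inputs) - x) / tau."""
    return (rule_cont([hill(a), hill(b)]) - x_target) / tau

print(rule_cont([1.0, 0.0]), rule_cont([0.7, 0.2]), dxdt(0.3, 0.9, 0.1))
```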
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
Tsai, V.C.
2011-01-01
It is known that GPS time series contain a seasonal variation that is not due to tectonic motions, and it has recently been shown that crustal seismic velocities may also vary seasonally. In order to explain these changes, a number of hypotheses have been given, among which thermoelastic and hydrology-induced stresses and strains are leading candidates. Unfortunately, though, since a general framework does not exist for understanding such seasonal variations, it is currently not possible to quickly evaluate the plausibility of these hypotheses. To fill this gap in the literature, I generalize a two-dimensional thermoelastic strain model to provide an analytic solution for the displacements and wave speed changes due to either thermoelastic stresses or hydrologic loading, which consists of poroelastic stresses and purely elastic stresses. The thermoelastic model assumes a periodic surface temperature, and the hydrologic models similarly assume a periodic near-surface water load. Since all three models are two-dimensional and periodic, they are expected to only approximate any realistic scenario; but the models nonetheless provide a quantitative framework for estimating the effects of thermoelastic and hydrologic variations. Quantitative comparison between the models and observations is further complicated by the large uncertainty in some of the relevant parameters. Despite this uncertainty, though, I find that maximum realistic thermoelastic effects are unlikely to explain a large fraction of the observed annual variation in a typical GPS displacement time series or of the observed annual variations in seismic wave speeds in southern California. Hydrologic loading, on the other hand, may be able to explain a larger fraction of both the annual variations in displacements and seismic wave speeds. Neither model is likely to explain all of the seismic wave speed variations inferred from observations. However, more definitive conclusions cannot be made until the model parameters are better constrained. Copyright © 2011 by the American Geophysical Union.
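An order-of-magnitude sketch of the thermoelastic term, not the paper's full two-dimensional analytic solution, helps show why its contribution is modest: an annual surface temperature wave only penetrates to a thermal skin depth of a few meters, and strains of order alpha*dT acting over that depth yield sub-millimeter displacements. All parameter values are assumptions.

```python
import numpy as np

kappa = 1e-6                           # thermal diffusivity of rock [m^2/s]
omega = 2 * np.pi / (365.25 * 86400)   # annual angular frequency [1/s]
alpha = 1e-5                           # linear thermal expansion coefficient [1/K]
dT = 10.0                              # annual surface temperature amplitude [K]

delta = np.sqrt(2 * kappa / omega)     # skin depth of the annual temperature wave [m]
u_surface = alpha * dT * delta         # ~ peak thermoelastic displacement [m]

# Temperature at depth z decays and lags as exp(-z/delta) * cos(omega*t - z/delta).
z = np.array([0.0, 1.0, 2.0, 5.0])
attenuation = np.exp(-z / delta)
print(f"skin depth ~ {delta:.1f} m, surface displacement ~ {u_surface * 1e3:.2f} mm")
print("attenuation at depths", z, "->", np.round(attenuation, 2))
```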
Thermoplastic matrix composite processing model
NASA Technical Reports Server (NTRS)
Dara, P. H.; Loos, A. C.
1985-01-01
The effects that the processing parameters pressure, temperature, and time have on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation have occurred at successive ply interfaces. Two models are presented predicting the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and neat resin conditions, respectively. The theory of autohesion (or self-diffusion) is identified as the mechanism by which the plies bond to themselves. Theoretical predictions from reptation theory relating autohesive strength and contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the phenomenon by which the prepreg ply interfaces develop intimate contact.
Quantitative model validation of manipulative robot systems
NASA Astrophysics Data System (ADS)
Kartowisastro, Iman Herwidiana
This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the commonly used visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, which explain the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, where all links are assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. The conventional feedback control system is used in developing the model. The system's behavior in response to parameter changes is investigated, as some parameters are redundant. This work is important so that the most important parameters to be distorted can be selected; this leads to a new term, the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and a better understanding of the model properties. After several improvements in the model, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system due to the high nonlinearity inherent in the robot manipulator.
Hu, Li; Tian, Xiaorui; Huang, Yingzhou; Fang, Liang; Fang, Yurui
2016-02-14
Plasmonic chirality has drawn much attention because of tunable circular dichroism (CD) and the enhancement of chiral molecule signals. Although various mechanisms have been proposed to explain plasmonic CD, a quantitative explanation, like the ab initio mechanism for chiral molecules, is still unavailable. In this study, a mechanism similar to the mechanisms associated with chiral molecules was analyzed. The giant extrinsic circular dichroism of a plasmonic splitting rectangle ring was quantitatively investigated from a theoretical standpoint. The interplay of the electric and magnetic modes of the meta-structure is proposed to explain the giant CD. We analyzed the interplay using both an analytical coupled electric-magnetic dipole model and a finite element method model. The surface charge distributions showed that the circular current yielded by the splitting rectangle ring causes the ring to behave like a magneton at some resonant modes, which then interact with the electric modes, resulting in a mixing of the two types of modes. The strong interplay of the two mode types is primarily responsible for the giant CD. The analysis of the chiral near-field of the structure shows potential applications for chiral molecule sensing.
A model of comprehensive unification
NASA Astrophysics Data System (ADS)
Reig, Mario; Valle, José W. F.; Vaquera-Araujo, C. A.; Wilczek, Frank
2017-11-01
Comprehensive - that is, gauge and family - unification using spinors has many attractive features, but it has been challenged to explain chirality. Here, by combining an orbifold construction with more traditional ideas, we address that difficulty. Our candidate model features three chiral families and leads to an acceptable result for quantitative unification of couplings. A potential target for accelerator and astronomical searches emerges.
ERIC Educational Resources Information Center
Dodd, Bucky J.
2013-01-01
Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…
Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio
2002-05-01
This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.
A quantification model for the structure of clay materials.
Tang, Liansheng; Sang, Haitao; Chen, Haokun; Sun, Yinlei; Zhang, Longjian
2016-07-04
In this paper, the quantification for clay structure is explicitly explained, and the approach and goals of quantification are also discussed. The authors consider that the purpose of the quantification for clay structure is to determine some parameters that can be used to quantitatively characterize the impact of clay structure on the macro-mechanical behaviour. According to the system theory and the law of energy conservation, a quantification model for the structure characteristics of clay materials is established and three quantitative parameters (i.e., deformation structure potential, strength structure potential and comprehensive structure potential) are proposed. And the corresponding tests are conducted. The experimental results show that these quantitative parameters can accurately reflect the influence of clay structure on the deformation behaviour, strength behaviour and the relative magnitude of structural influence on the above two quantitative parameters, respectively. These quantitative parameters have explicit mechanical meanings, and can be used to characterize the structural influences of clay on its mechanical behaviour.
Random Wiring, Ganglion Cell Mosaics, and the Functional Architecture of the Visual Cortex
Coppola, David; White, Leonard E.; Wolf, Fred
2015-01-01
The architecture of iso-orientation domains in the primary visual cortex (V1) of placental carnivores and primates apparently follows species invariant quantitative laws. Dynamical optimization models assuming that neurons coordinate their stimulus preferences throughout cortical circuits linking millions of cells specifically predict these invariants. This might indicate that V1’s intrinsic connectome and its functional architecture adhere to a single optimization principle with high precision and robustness. To validate this hypothesis, it is critical to closely examine the quantitative predictions of alternative candidate theories. Random feedforward wiring within the retino-cortical pathway represents a conceptually appealing alternative to dynamical circuit optimization because random dimension-expanding projections are believed to generically exhibit computationally favorable properties for stimulus representations. Here, we ask whether the quantitative invariants of V1 architecture can be explained as a generic emergent property of random wiring. We generalize and examine the stochastic wiring model proposed by Ringach and coworkers, in which iso-orientation domains in the visual cortex arise through random feedforward connections between semi-regular mosaics of retinal ganglion cells (RGCs) and visual cortical neurons. We derive closed-form expressions for cortical receptive fields and domain layouts predicted by the model for perfectly hexagonal RGC mosaics. Including spatial disorder in the RGC positions considerably changes the domain layout properties as a function of disorder parameters such as position scatter and its correlations across the retina. However, independent of parameter choice, we find that the model predictions substantially deviate from the layout laws of iso-orientation domains observed experimentally. Considering random wiring with the currently most realistic model of RGC mosaic layouts, a pairwise interacting point process, the predicted layouts remain distinct from experimental observations and resemble Gaussian random fields. We conclude that V1 layout invariants are specific quantitative signatures of visual cortical optimization, which cannot be explained by generic random feedforward-wiring models. PMID:26575467
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
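The flavor of such a probability estimate can be sketched as follows; this is not the authors' fitted model, and the numbers are illustrative, not values from the Ladbroke Grove analysis. Each sighting opportunity is assigned probabilities of fixating, resolving, and correctly identifying the signal aspect, and a SPAD requires every opportunity to fail.

```python
def spad_probability(opportunities):
    """Each opportunity is (p_look, p_detect, p_identify): the probabilities
    that the driver fixates the signal, resolves it, and correctly reads its
    aspect on that glance. A SPAD requires every opportunity to fail."""
    p_all_fail = 1.0
    for p_look, p_detect, p_identify in opportunities:
        p_all_fail *= 1.0 - p_look * p_detect * p_identify
    return p_all_fail

# Three sighting opportunities on approach, degraded by obstruction and glare
# (illustrative values only).
opps = [(0.9, 0.4, 0.9), (0.8, 0.5, 0.9), (0.6, 0.3, 0.9)]
print(f"P(SPAD) ~ {spad_probability(opps):.3f}")
```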
NASA Astrophysics Data System (ADS)
Purwoko; Saad, Noor Shah; Tajudin, Nor'ain Mohd
2017-05-01
This study aims to: i) develop problem-solving questions on the Linear Equation System of Two Variables (LESTV) based on the levels of the IPT Model; ii) explain the level of students' information-processing skill in solving LESTV problems; iii) explain students' information-processing skill in solving LESTV problems; and iv) explain students' cognitive processes in solving LESTV problems. The study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method for analyzing students' level of information-processing skill; and iii) a qualitative case study method for analyzing students' cognitive processes. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five junior high schools in the Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen of these students were drawn as a sample for the interview sessions, which continued until the information obtained was saturated. The data were collected using the LESTV problem-solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings indicated that students' cognitive process reached only the step of identifying external sources and executing algorithms fluently in short-term memory. Only 15.29% of students could retrieve type A information and 5.88% could retrieve type B information from long-term memory. The implication is that the developed LESTV problems validated the IPT Model in modelling students' assessment across different levels of the hierarchy.
Walzthoeni, Thomas; Joachimiak, Lukasz A; Rosenberger, George; Röst, Hannes L; Malmström, Lars; Leitner, Alexander; Frydman, Judith; Aebersold, Ruedi
2015-12-01
Chemical cross-linking in combination with mass spectrometry generates distance restraints of amino acid pairs in close proximity on the surface of native proteins and protein complexes. In this study we used quantitative mass spectrometry and chemical cross-linking to quantify differences in cross-linked peptides obtained from complexes in spatially discrete states. We describe a generic computational pipeline for quantitative cross-linking mass spectrometry consisting of modules for quantitative data extraction and statistical assessment of the obtained results. We used the method to detect conformational changes in two model systems: firefly luciferase and the bovine TRiC complex. Our method discovers and explains the structural heterogeneity of protein complexes using only sparse structural information.
NASA Astrophysics Data System (ADS)
Amano, Ayako; Sakuma, Taisuke; Kazama, So
This study evaluated waterborne infectious disease risk and incidence rates around Phnom Penh, Cambodia. We used a hydraulic flood simulation, a coliform bacterium diffusion model, a dose-response model, and outpatient data for quantitative analysis. The results are as follows: 1. The incidence of diarrhea as a waterborne disease risk is 0.14 million people (an incidence rate of 9%) in the inundation area. 2. Residents in the inundation area are exposed to a risk up to 4 times the daily mean calculated by the integrated model at the regional scale. 3. The infectious disease risk due to floods and inundation is shown to be an effective element for explaining the overall risk.
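The dose-response step of such a QMRA chain is typically one of two standard forms, sketched below. The parameter values are illustrative assumptions for a generic enteric pathogen, not parameters fitted in this study.

```python
import numpy as np

def p_infection_exponential(dose, r=0.0199):
    """Exponential dose-response model commonly used in QMRA:
    P = 1 - exp(-r * dose). The r value is an illustrative assumption."""
    return 1.0 - np.exp(-r * np.asarray(dose))

def p_infection_beta_poisson(dose, alpha=0.145, n50=896.0):
    """Approximate beta-Poisson model:
    P = 1 - (1 + dose * (2**(1/alpha) - 1) / N50)**(-alpha).
    alpha and N50 here are illustrative values."""
    dose = np.asarray(dose, dtype=float)
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

doses = np.array([1, 10, 100, 1000])      # ingested organisms per exposure
print(np.round(p_infection_exponential(doses), 3))
print(np.round(p_infection_beta_poisson(doses), 3))
```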
An Examination of Achievement Goals in Learning: A Quasi-Quantitative Approach
ERIC Educational Resources Information Center
Phan, Huy P.
2012-01-01
Introduction: The achievement goals framework has been researched and used to explain and account for individuals' learning and academic achievements. Over the past three decades, progress has been made in the conceptualizations and research development of different possible theoretical models of achievement goals. Notably, in this study, we…
Using Algorithms in Solving Synapse Transmission Problems.
ERIC Educational Resources Information Center
Stencel, John E.
1992-01-01
Explains how a simple three-step algorithm can aid college students in solving synapse transmission problems. Reports that all of the students did not completely understand the algorithm. However, many learn a simple working model of synaptic transmission and understand why an impulse will pass across a synapse quantitatively. Students also see…
The Ether Wind and the Global Positioning System.
ERIC Educational Resources Information Center
Muller, Rainer
2000-01-01
Explains how students can perform a refutation of the ether theory using information from the Global Positioning System (GPS). Discusses the functioning of the GPS, qualitatively describes how position determination would be affected by an ether wind, and illustrates the pertinent ideas with a simple quantitative model. (WRM)
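The "simple quantitative model" referred to above boils down to classical ether kinematics: a signal traveling a distance L against or with an ether wind of speed v would take L/(c -/+ v), and the resulting timing differences would show up as kilometer-scale position errors that GPS manifestly does not exhibit. The sketch below uses illustrative numbers (a typical GPS range and the Earth's orbital speed) and is not taken from the cited article.

```python
C = 299_792_458.0            # speed of light in vacuum [m/s]

def one_way_time(distance_m, wind_speed_m_s):
    """Classical ether kinematics: light travel time along a path with an
    'ether wind' of the given speed blowing along it (positive = tailwind)."""
    return distance_m / (C + wind_speed_m_s)

# Illustrative numbers: ~20,200 km satellite-to-receiver range and a 30 km/s
# ether wind (Earth's orbital speed).
L, v = 2.02e7, 3.0e4
dt = one_way_time(L, -v) - one_way_time(L, +v)      # up-wind minus down-wind
print(f"travel-time difference ~ {dt * 1e6:.1f} microseconds")
print(f"apparent range error  ~ {dt * C / 1e3:.1f} km, far above real GPS errors")
```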
Temporal maps and informativeness in associative learning.
Balsam, Peter D; Gallistel, C Randy
2009-02-01
Neurobiological research on learning assumes that temporal contiguity is essential for association formation, but what constitutes temporal contiguity has never been specified. We review evidence that learning depends, instead, on learning a temporal map. Temporal relations between events are encoded even from single experiences. The speed with which an anticipatory response emerges is proportional to the informativeness of the encoded relation between a predictive stimulus or event and the event it predicts. This principle yields a quantitative account of the heretofore undefined, but theoretically crucial, concept of temporal pairing, an account in quantitative accord with surprising experimental findings. The same principle explains the basic results in the cue competition literature, which motivated the Rescorla-Wagner model and most other contemporary models of associative learning. The essential feature of a memory mechanism in this account is its ability to encode quantitative information.
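One common formalization of the informativeness idea, used here only as a sketch and not as the authors' exact formulation, is the ratio of the expected background wait for the reinforcer (C) to the expected wait once the predictive cue is present (T); acquisition speed is then taken to be proportional to that ratio. The protocol numbers below are made up.

```python
def informativeness(cycle_interval_s, cs_us_interval_s):
    """Ratio of the expected background wait for the US (C) to the expected
    wait after CS onset (T); a sketch of the quantity that acquisition
    speed is argued to be proportional to."""
    return cycle_interval_s / cs_us_interval_s

def relative_acquisition_speed(protocols):
    """Predicted acquisition speeds, normalized to the least informative protocol."""
    scores = [informativeness(c, t) for c, t in protocols]
    base = min(scores)
    return [round(s / base, 2) for s in scores]

# (C, T) pairs in seconds: the same CS-US interval T yields very different
# predicted learning speeds when the background US rate changes.
protocols = [(48.0, 16.0), (96.0, 16.0), (480.0, 16.0)]
print(relative_acquisition_speed(protocols))   # -> [1.0, 2.0, 10.0]
```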
Chen, Ran; Riviere, Jim E
2017-01-01
Quantitative analysis of the interactions between nanomaterials and their surrounding environment is crucial for safety evaluation in the application of nanotechnology, as well as for its development and standardization. In this chapter, we demonstrate how the adsorption of surrounding molecules onto the surface of nanomaterials forms a biocorona and thus impacts the bio-identity and fate of those materials. We illustrate the key factors, including various physical forces, that determine the interactions occurring at bio-nano interfaces. We further discuss mathematical efforts to explain and predict these adsorption phenomena, and propose a new statistics-based surface adsorption model, the Biological Surface Adsorption Index (BSAI), to quantitatively analyze the surface adsorption profile of a large group of small organic molecules onto nanomaterials with varying surface physicochemical properties, first employing five descriptors representing the surface energy profile of the nanomaterials, and then incorporating traditional semi-empirical adsorption models to address concentration effects of solutes. These advancements in surface adsorption modeling show promise for applying quantitative predictive models in biology, nanomedicine, and the environmental safety assessment of nanomaterials.
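A BSAI-style analysis reduces, at its core, to a linear free-energy regression: measured adsorption coefficients of a panel of probe compounds are regressed on five solute descriptors, and the fitted coefficients characterize the nanomaterial surface. The sketch below treats the descriptors as generic columns and uses synthetic data; it illustrates the form of the fit, not the published model or its parameters.

```python
import numpy as np

# The adsorption coefficient k_i of probe compound i on a given nanomaterial is
# modeled as a linear free-energy relationship in five solute descriptors;
# fitting the five coefficients yields the surface's adsorption signature.
# Descriptor matrix and log k values are placeholders, not real data.
rng = np.random.default_rng(4)
n_probes = 30
descriptors = rng.normal(size=(n_probes, 5))          # five generic descriptor columns
true_signature = np.array([0.4, -1.1, 0.2, -2.3, 2.8])
log_k = descriptors @ true_signature + 0.5 + rng.normal(0, 0.1, n_probes)

X = np.column_stack([np.ones(n_probes), descriptors])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
print("intercept c =", round(coef[0], 2))
print("fitted surface signature =", np.round(coef[1:], 2))
```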
Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Ray, A.; Key, K.
2017-12-01
Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.
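The essence of the Bayesian approach, posterior sampling rather than a single best-fit model, can be shown with a toy fixed-dimension Metropolis-Hastings sampler; the study itself used a trans-dimensional sampler and a real EM forward solver, whereas the "forward model" below is only a placeholder linear sensitivity matrix with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: 5 layers, 12 data, a placeholder exponential sensitivity matrix.
n_layers, n_data = 5, 12
depths = np.linspace(5, 100, n_layers)
sensitivity = np.exp(-np.linspace(0.05, 1.0, n_data)[:, None] * depths[None, :] / 40.0)

true_logcond = np.log10([0.05, 0.1, 0.5, 0.2, 0.02])     # log10 S/m, synthetic truth
noise_sd = 0.02
data = sensitivity @ true_logcond + rng.normal(0, noise_sd, n_data)

def log_likelihood(model):
    resid = data - sensitivity @ model
    return -0.5 * np.sum((resid / noise_sd) ** 2)

def in_prior(model, lo=-3.0, hi=1.0):          # uniform prior on log10 conductivity
    return np.all((model >= lo) & (model <= hi))

# Random-walk Metropolis-Hastings over layer log-conductivities.
current = np.full(n_layers, -1.0)
samples = []
for step in range(20000):
    proposal = current + rng.normal(0, 0.05, n_layers)
    if in_prior(proposal) and np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(current):
        current = proposal
    if step > 5000 and step % 10 == 0:          # keep post-burn-in samples
        samples.append(current.copy())

samples = np.array(samples)
print("posterior mean :", np.round(samples.mean(axis=0), 2))
print("posterior sd   :", np.round(samples.std(axis=0), 2))   # per-layer uncertainty
print("true model     :", np.round(true_logcond, 2))
```

The per-layer posterior standard deviations are the quantitative uncertainty the abstract refers to: well-constrained layers have narrow posteriors, while poorly sensed layers remain broad.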
Future Field Programmable Gate Array (FPGA) Design Methodologies and Tool Flows
2008-07-01
(a) that the results are accepted by users, vendors, … and (b) that they can quantitatively explain HPC rules of thumb such as: "OpenMP is easier...in productivity that were demonstrated by traditional software systems. Using advances in software productivity as a guide, we have identified three...of this study we developed a productivity model to guide our investigation (14). Models have limitations and the model we propose is no exception
Genetic architecture of resistance in Daphnia hosts against two species of host-specific parasites.
Routtu, J; Ebert, D
2015-02-01
Understanding the genetic architecture of host resistance is key for understanding the evolution of host-parasite interactions. Evolutionary models often assume simple genetics based on few loci and strong epistasis. It is unknown, however, whether these assumptions apply to natural populations. Using a quantitative trait loci (QTL) approach, we explore the genetic architecture of resistance in the crustacean Daphnia magna to two of its natural parasites: the horizontally transmitted bacterium Pasteuria ramosa and the horizontally and vertically transmitted microsporidium Hamiltosporidium tvaerminnensis. These two systems have become models for studies on the evolution of host-parasite interactions. In the QTL panel used here, Daphnia's resistance to P. ramosa is controlled by a single major QTL (which explains 50% of the observed variation). Resistance to H. tvaerminnensis horizontal infections shows a signature of a quantitative trait based in multiple loci with weak epistatic interactions (together explaining 38% variation). Resistance to H. tvaerminnensis vertical infections, however, shows only one QTL (explaining 13.5% variance) that colocalizes with one of the QTLs for horizontal infections. QTLs for resistance to Pasteuria and Hamiltosporidium do not colocalize. We conclude that the genetics of resistance in D. magna are drastically different for these two parasites. Furthermore, we infer that based on these and earlier results, the mechanisms of coevolution differ strongly for the two host-parasite systems. Only the Pasteuria-Daphnia system is expected to follow the negative frequency-dependent selection (Red Queen) model. How coevolution works in the Hamiltosporidium-Daphnia system remains unclear.
Polymer Brushes under High Load
Balko, Suzanne M.; Kreer, Torsten; Costanzo, Philip J.; Patten, Tim E.; Johner, Albert; Kuhl, Tonya L.; Marques, Carlos M.
2013-01-01
Polymer coatings are frequently used to provide repulsive forces between surfaces in solution. After 25 years of design and study, a quantitative model to explain and predict repulsion under strong compression is still lacking. Here, we combine experiments, simulations, and theory to study polymer coatings under high loads and demonstrate a validated model for the repulsive forces, proposing that this universal behavior can be predicted from the polymer solution properties. PMID:23516470
Investigating students’ mental models about the nature of light in different contexts
NASA Astrophysics Data System (ADS)
Özcan, Özgür
2015-11-01
In this study, we investigated pre-service physics teachers’ mental models of light in different contexts, such as blackbody radiation, the photoelectric effect and the Compton effect. The data collected through the paper-and-pencil questionnaire (PPQ) were analyzed both quantitatively and qualitatively. The sample consisted of 110 physics education students who were taking a modern physics course at two different state universities in Turkey. Three mental models, termed the beam ray model (BrM), the hybrid model (HM) and the particle model (PM), were used by the students when explaining these phenomena. Model fluctuation occurred most often between the HM and the BrM. In addition, some students were in a mixed-model state, using multiple mental models to explain a phenomenon and applying them inconsistently. By contrast, most of the students who used the particle model can be said to have been in a pure model state.
Transdisciplinary Application of Cross-Scale Resilience ...
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems. Comparative analyses of complex systems have, in fact, demonstrated commonalities among distinctly different types of systems (Schneider & Kay 1994; Holling 2001; Lansing 2003; Foster 2005; Bullmore et al. 2009). Both biological and non-biological complex systems appear t
ERIC Educational Resources Information Center
Arendasy, Martin E.; Sommer, Markus
2013-01-01
Allowing respondents to retake a cognitive ability test has shown to increase their test scores. Several theoretical models have been proposed to explain this effect, which make distinct assumptions regarding the measurement invariance of psychometric tests across test administration sessions with regard to narrower cognitive abilities and general…
Modelling Transposition Latencies: Constraints for Theories of Serial Order Memory
ERIC Educational Resources Information Center
Farrell, Simon; Lewandowsky, Stephan
2004-01-01
Several competing theories of short-term memory can explain serial recall performance at a quantitative level. However, most theories to date have not been applied to the accompanying pattern of response latencies, thus ignoring a rich and highly diagnostic aspect of performance. This article explores and tests the error latency predictions of…
ERIC Educational Resources Information Center
Kieres, Katherine H.; Gutmore, Daniel
2014-01-01
Based on Bass and Riggio's (2006) Augmentation Model of Transactional and Transformational Leadership, this quantitative study sought to identify the amount of variance in teacher job satisfaction and organizational commitment that can be explained by principals' transformational leadership behaviors, above and beyond the influence of…
ERIC Educational Resources Information Center
Kieres, Katherine H.
2013-01-01
Based on Bass and Riggio's (2006) Augmentation Model of Transactional and Transformational Leadership, this quantitative study sought to identify the amount of variance in teacher job satisfaction and organizational commitment that can be explained by principals' transformational leadership behaviors, above and beyond the influence of…
Quiñones, Andrés E.; Pen, Ido
2017-01-01
Explaining the origin of eusociality, with strict division of labour between workers and reproductives, remains one of evolutionary biology’s greatest challenges. Specific combinations of genetic, behavioural and demographic traits in Hymenoptera are thought to explain their relatively high frequency of eusociality, but quantitative models integrating such preadaptations are lacking. Here we use mathematical models to show that the joint evolution of helping behaviour and maternal sex ratio adjustment can synergistically trigger both a behavioural change from solitary to eusocial breeding, and a demographic change from a life cycle with two reproductive broods to a life cycle in which an unmated cohort of female workers precedes a final generation of dispersing reproductives. Specific suites of preadaptations are particularly favourable to the evolution of eusociality: lifetime monogamy, bivoltinism with male generation overlap, hibernation of mated females and haplodiploidy with maternal sex ratio adjustment. The joint effects of these preadaptations may explain the abundance of eusociality in the Hymenoptera and its virtual absence in other haplodiploid lineages. PMID:28643786
High Upward Fluxes of Formic Acid from a Boreal Forest Canopy
NASA Technical Reports Server (NTRS)
Schobesberger, Siegfried; Lopez-Hilfiker, Felipe D.; Taipale, Ditte; Millet, Dylan B.; D'Ambro, Emma L.; Rantala, Pekka; Mammarella, Ivan; Zhou, Putian; Wolfe, Glenn M.; Lee, Ben H.;
2016-01-01
Eddy covariance fluxes of formic acid, HCOOH, were measured over a boreal forest canopy in spring/summer 2014. The HCOOH fluxes were bidirectional but mostly upward during daytime, in contrast to studies elsewhere that reported mostly downward fluxes. Downward flux episodes were explained well by modeled dry deposition rates. The sum of net observed flux and modeled dry deposition yields an upward gross flux of HCOOH, which could not be quantitatively explained by literature estimates of direct vegetative/soil emissions nor by efficient chemical production from other volatile organic compounds, suggesting missing or greatly underestimated HCOOH sources in the boreal ecosystem. We implemented a vegetative HCOOH source into the GEOS-Chem chemical transport model to match our derived gross flux and evaluated the updated model against airborne and spaceborne observations. Model biases in the boundary layer were substantially reduced based on this revised treatment, but biases in the free troposphere remain unexplained.
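As a reading aid, the budget implied above can be written out explicitly (a schematic restatement, assuming the convention that upward fluxes are positive and that the modeled dry-deposition flux enters as a magnitude):

\[ F_{\mathrm{gross}}^{\uparrow} \;=\; F_{\mathrm{net}}^{\mathrm{obs}} \;+\; \lvert F_{\mathrm{dep}}^{\mathrm{mod}} \rvert , \]

so the inferred gross emission exceeds the net observed upward flux whenever modeled deposition is non-zero.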
Kaddi, Chanchala D; Niesner, Bradley; Baek, Rena; Jasper, Paul; Pappas, John; Tolsma, John; Li, Jing; van Rijn, Zachary; Tao, Mengdi; Ortemann-Renon, Catherine; Easton, Rachael; Tan, Sharon; Puga, Ana Cristina; Schuchman, Edward H; Barrett, Jeffrey S; Azer, Karim
2018-06-19
Acid sphingomyelinase deficiency (ASMD) is a rare lysosomal storage disorder with heterogeneous clinical manifestations, including hepatosplenomegaly and infiltrative pulmonary disease, and is associated with significant morbidity and mortality. Olipudase alfa (recombinant human acid sphingomyelinase) is an enzyme replacement therapy under development for the non-neurological manifestations of ASMD. We present a quantitative systems pharmacology (QSP) model supporting the clinical development of olipudase alfa. The model is multiscale and mechanistic, linking the enzymatic deficiency driving the disease to molecular-level, cellular-level, and organ-level effects. Model development was informed by natural history, and preclinical and clinical studies. By considering patient-specific pharmacokinetic (PK) profiles and indicators of disease severity, the model describes pharmacodynamic (PD) and clinical end points for individual patients. The ASMD QSP model provides a platform for quantitatively assessing systemic pharmacological effects in adult and pediatric patients, and explaining variability within and across these patient populations, thereby supporting the extrapolation of treatment response from adults to pediatrics. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Kooke, Rik; Kruijer, Willem; Bours, Ralph; Becker, Frank; Kuhn, André; van de Geest, Henri; Buntjer, Jaap; Doeswijk, Timo; Guerra, José; Bouwmeester, Harro; Vreugdenhil, Dick; Keurentjes, Joost J B
2016-04-01
Quantitative traits in plants are controlled by a large number of genes and their interaction with the environment. To disentangle the genetic architecture of such traits, natural variation within species can be explored by studying genotype-phenotype relationships. Genome-wide association studies that link phenotypes to thousands of single nucleotide polymorphism markers are nowadays common practice for such analyses. In many cases, however, the identified individual loci cannot fully explain the heritability estimates, suggesting missing heritability. We analyzed 349 Arabidopsis accessions and found extensive variation and high heritabilities for different morphological traits. The number of significant genome-wide associations was, however, very low. The application of genomic prediction models that take into account the effects of all individual loci may greatly enhance the elucidation of the genetic architecture of quantitative traits in plants. Here, genomic prediction models revealed different genetic architectures for the morphological traits. Integrating genomic prediction and association mapping enabled the assignment of many plausible candidate genes explaining the observed variation. These genes were analyzed for functional and sequence diversity, and good indications that natural allelic variation in many of these genes contributes to phenotypic variation were obtained. For ACS11, an ethylene biosynthesis gene, haplotype differences explaining variation in the ratio of petiole and leaf length could be identified. © 2016 American Society of Plant Biologists. All Rights Reserved.
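The genomic prediction step summarized above can be illustrated with a minimal whole-genome ridge regression, in which all markers are fitted jointly with shrinkage (a sketch only: the marker matrix, SNP count, simulated effects, and penalty strength below are hypothetical and not taken from the study):

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_accessions, n_snps = 349, 5000                     # accession count mirrors the study; SNP count is arbitrary
X = rng.integers(0, 3, size=(n_accessions, n_snps)).astype(float)   # 0/1/2 allele dosages
beta = rng.normal(0.0, 0.05, n_snps)                 # simulated marker effects
y = X @ beta + rng.normal(0.0, 1.0, n_accessions)    # simulated morphological trait

model = Ridge(alpha=100.0)                           # penalty would normally be tuned by cross-validation
model.fit(X, y)
print("fit vs. observed correlation:", np.corrcoef(y, model.predict(X))[0, 1])

In practice the predictive ability would be assessed on held-out accessions rather than on the training data as in this toy example.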
Comparing models of the periodic variations in spin-down and beamwidth for PSR B1828-11
NASA Astrophysics Data System (ADS)
Ashton, G.; Jones, D. I.; Prix, R.
2016-05-01
We build a framework using tools from Bayesian data analysis to evaluate models explaining the periodic variations in spin-down and beamwidth of PSR B1828-11. The available data consist of the time-averaged spin-down rate, which displays a distinctive double-peaked modulation, and measurements of the beamwidth. Two concepts exist in the literature that are capable of explaining these variations; we formulate predictive models from these and quantitatively compare them. The first concept is phenomenological and stipulates that the magnetosphere undergoes periodic switching between two metastable states as first suggested by Lyne et al. The second concept, precession, was first considered as a candidate for the modulation of B1828-11 by Stairs et al. We quantitatively compare models built from these concepts using a Bayesian odds ratio. Because the phenomenological switching model itself was informed by these data in the first place, it is difficult to specify appropriate parameter-space priors that can be trusted for an unbiased model comparison. Therefore, we first perform a parameter estimation using the spin-down data, and then use the resulting posterior distributions as priors for model comparison on the beamwidth data. We find that a precession model with a simple circular Gaussian beam geometry fails to appropriately describe the data, while allowing for a more general beam geometry provides a good fit to the data. The resulting odds between the precession model (with a general beam geometry) and the switching model are estimated as 10^(2.7±0.5) in favour of the precession model.
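The final model-comparison step can be sketched in a few lines: once each model's evidence (marginal likelihood) has been computed, for example by nested sampling, the odds ratio is just the ratio of evidences. The log-evidence values below are hypothetical placeholders, not the paper's numbers:

import numpy as np

log_Z_precession = -1000.0      # hypothetical log-evidence of the precession model
log_Z_switching = -1006.2       # hypothetical log-evidence of the switching model

log10_odds = (log_Z_precession - log_Z_switching) / np.log(10.0)
print(f"odds (precession : switching) = 10^{log10_odds:.1f}")

Quoting the odds as a power of ten, as the abstract does, simply corresponds to reporting this base-10 logarithm together with its uncertainty.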
Mental maps and travel behaviour: meanings and models
NASA Astrophysics Data System (ADS)
Hannes, Els; Kusumastuti, Diana; Espinosa, Maikel León; Janssens, Davy; Vanhoof, Koen; Wets, Geert
2012-04-01
In this paper, the "mental map" concept is first positioned with regard to individual travel behaviour. Based on Ogden and Richards' triangle of meaning (The meaning of meaning: a study of the influence of language upon thought and of the science of symbolism. International library of psychology, philosophy and scientific method. Routledge and Kegan Paul, London, 1966), distinct thoughts, referents and symbols originating from different scientific disciplines are identified and explained in order to clear up the notion's fuzziness. Next, the use of this concept in two major areas of research relevant to travel demand modelling is indicated and discussed in detail: spatial cognition and decision-making. The relevance of these constructs to understanding and modelling individual travel behaviour is explained, and current research efforts to implement these concepts in travel demand models are addressed. Furthermore, these mental map notions are specified in two types of computational models, i.e. a Bayesian Inference Network (BIN) and a Fuzzy Cognitive Map (FCM). Both models are explained, and a numerical and a real-life example are provided. Both approaches yield a detailed quantitative representation of the mental map of decision-making problems in travel behaviour.
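As an illustration of one of the two computational forms mentioned above, a Fuzzy Cognitive Map can be reduced to an adjacency matrix of signed concept-to-concept weights and an iterated, squashed update of concept activations. The concepts, weights, and update rule below are hypothetical and generic, not the authors' travel-behaviour model (some FCM variants also add a self-memory term to the update):

import numpy as np

# Hypothetical concepts: 0 = travel time, 1 = congestion, 2 = perceived comfort, 3 = choose car
W = np.array([
    [0.0, 0.0, -0.4, -0.3],   # travel time lowers comfort and the tendency to choose the car
    [0.6, 0.0, -0.5, -0.2],   # congestion raises travel time and lowers comfort and car choice
    [0.0, 0.0,  0.0,  0.7],   # comfort raises car choice
    [0.0, 0.4,  0.0,  0.0],   # choosing the car feeds back into congestion
])

def step(a, W):
    """One FCM update: each activation becomes a squashed weighted sum of its causes."""
    return 1.0 / (1.0 + np.exp(-(a @ W)))

a = np.array([0.5, 0.8, 0.5, 0.5])   # initial activations for a hypothetical congested scenario
for _ in range(30):                   # iterate until the map settles
    a = step(a, W)
print("steady-state activations:", np.round(a, 3))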
Forest management and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buongiorno, J.; Gilless, J.K.
1987-01-01
This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.
Incorporation of impurity to a tetragonal lysozyme crystal
NASA Astrophysics Data System (ADS)
Kurihara, Kazuo; Miyashita, Satoru; Sazaki, Gen; Nakada, Toshitaka; Durbin, Stephen D.; Komatsu, Hiroshi; Ohba, Tetsuhiko; Ohki, Kazuo
1999-01-01
Concentration of a phosphor-labeled impurity (ovalbumin) incorporated into protein (hen egg white lysozyme) crystals during growth was measured by fluorescence. This technique enabled us to measure the local impurity concentration in a crystal quantitatively. Impurity concentration increased with growth rate, which could not be explained by two conventional models (equilibrium adsorption model and Burton-Prim-Slichter model); a modified model is proposed. Impurity concentration also increased with the pH of the solution. This result is discussed considering the electrostatic interaction between the impurity and the crystallizing species.
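For orientation, the Burton–Prim–Slichter model referred to above relates the effective segregation (distribution) coefficient of an impurity to the growth rate V through a solute boundary layer of thickness δ; in its textbook form (quoted here only as background, not as the modified model the authors propose),

\[ k_{\mathrm{eff}} \;=\; \frac{k_0}{\,k_0 + (1 - k_0)\,e^{-V\delta/D}\,}, \]

where k_0 is the equilibrium segregation coefficient and D the impurity diffusivity in solution. For k_0 < 1 this predicts k_eff rising toward 1 with increasing growth rate, a trend in the same direction as the measurements, although the abstract notes that the conventional models alone did not reproduce the data.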
ERIC Educational Resources Information Center
Rubenson, Kjell; Desjardins, Richard
2009-01-01
Quantitative and qualitative findings on barriers to participation in adult education are reviewed and some of the defining parameters that may explain observed national differences are considered. A theoretical perspective based on bounded agency is put forth to take account of the interaction between structurally and individually based barriers…
A Quantitative Theory of Human Color Choices
Komarova, Natalia L.; Jameson, Kimberly A.
2013-01-01
The system for colorimetry adopted by the Commission Internationale de l’Eclairage (CIE) in 1931, along with its subsequent improvements, represents a family of light mixture models that has served well for many decades for stimulus specification and reproduction when highly controlled color standards are important. Still, with regard to color appearance many perceptual and cognitive factors are known to contribute to color similarity, and, in general, to all cognitive judgments of color. Using experimentally obtained odd-one-out triad similarity judgments from 52 observers, we demonstrate that CIE-based models can explain a good portion (but not all) of the color similarity data. Color difference quantified by CIELAB ΔE explained behavior at levels of 81% (across all colors), 79% (across red colors), and 66% (across blue colors). We show that the unexplained variation cannot be ascribed to inter- or intra-individual variations among the observers, and points to the presence of additional factors shared by the majority of responders. Based on this, we create a quantitative model of a lexicographic semiorder type, which shows how different perceptual and cognitive influences can trade-off when making color similarity judgments. We show that by incorporating additional influences related to categorical and lightness and saturation factors, the model explains more of the triad similarity behavior, namely, 91% (all colors), 90% (reds), and 87% (blues). We conclude that distance in a CIE model is but the first of several layers in a hierarchy of higher-order cognitive influences that shape color triad choices. We further discuss additional mitigating influences outside the scope of CIE modeling, which can be incorporated in this framework, including well-known influences from language, stimulus set effects, and color preference bias. We also discuss universal and cultural aspects of the model as well as non-uniformity of the color space with respect to different cultural biases. PMID:23409103
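A minimal sketch of the CIE-based baseline described above, using the simplest Euclidean CIELAB difference (CIE76 ΔE) rather than whichever exact difference formula the authors used; the three L*a*b* triplets are made-up stimuli, and the odd one out is predicted as the colour farthest, in total, from the other two:

import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean colour difference in CIELAB (CIE76)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical triad of colours as (L*, a*, b*) triplets.
triad = [(55.0, 60.0, 40.0), (52.0, 58.0, 45.0), (70.0, -10.0, 20.0)]

summed = [sum(delta_e_cie76(c, other) for other in triad if other is not c) for c in triad]
print("predicted odd one out: colour", int(np.argmax(summed)))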
Students Explaining Science—Assessment of Science Communication Competence
NASA Astrophysics Data System (ADS)
Kulgemeyer, Christoph; Schecker, Horst
2013-12-01
Science communication competence (SCC) is an important educational goal in the school science curricula of several countries. However, there is a lack of research about the structure and the assessment of SCC. This paper specifies the theoretical framework of SCC by a competence model. We developed a qualitative assessment method for SCC that is based on an expert-novice dialog: an older student (explainer, expert) explains a physics phenomenon to a younger peer (addressee, novice) in a controlled test setting. The explanations are video-recorded and analysed by qualitative content analysis. The method was applied in a study with 46 secondary school students as explainers. Our aims were (a) to evaluate whether our model covers the relevant features of SCC, (b) to validate the assessment method and (c) to find characteristics of addressee-adequate explanations. A performance index was calculated to quantify the explainers' levels of competence on an ordinal scale. We present qualitative and quantitative evidence that the index is adequate for assessment purposes. It correlates with results from a written SCC test and a perspective taking test (convergent validity). Addressee-adequate explanations can be characterized by use of graphical representations and deliberate switches between scientific and everyday language.
Classical and quantum magnetism in giant Keplerate magnetic molecules.
Müller, A; Luban, M; Schröder, C; Modler, R; Kögerler, P; Axenovich, M; Schnack, J; Canfield, P; Bud'ko, S; Harrison, N
2001-09-17
Complementary theoretical modeling methods are presented for the classical and quantum Heisenberg model to explain the magnetic properties of nanometer-sized magnetic molecules. Excellent quantitative agreement is achieved between our experimental data down to 0.1 K and for fields up to 60 Tesla and our theoretical results for the giant Keplerate species {Mo72Fe30}, by far the largest paramagnetic molecule synthesized to date. © 2001 WILEY-VCH Verlag GmbH, Weinheim, Fed. Rep. of Germany.
Background of SAM atom-fraction profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernst, Frank
Atom-fraction profiles acquired by SAM (scanning Auger microprobe) have important applications, e.g. in the context of alloy surface engineering by infusion of carbon or nitrogen through the alloy surface. However, such profiles often exhibit an artifact in the form of a background with a level that anti-correlates with the local atom fraction. This article presents a theory explaining this phenomenon as a consequence of the way in which random noise in the spectrum propagates into the discretized differentiated spectrum that is used for quantification. The resulting model of “energy channel statistics” leads to a useful semi-quantitative background reduction procedure, which is validated by applying it to simulated data. Subsequently, the procedure is applied to an example of experimental SAM data. The analysis leads to conclusions regarding optimum experimental acquisition conditions. The proposed method of background reduction is based on general principles and should be useful for a broad variety of applications. - Highlights: • Atom-fraction–depth profiles of carbon measured by scanning Auger microprobe • Strong background, varies with local carbon concentration. • Needs correction e.g. for quantitative comparison with simulations • Quantitative theory explains background. • Provides background removal strategy and practical advice for acquisition.
NASA Astrophysics Data System (ADS)
Lykkegaard, Eva; Ulriksen, Lars
2016-03-01
During the past 30 years, Eccles' comprehensive social-psychological Expectancy-Value Model of Motivated Behavioural Choices (EV-MBC model) has been proven suitable for studying educational choices related to Science, Technology, Engineering and/or Mathematics (STEM). The reflections of 15 students in their last year in upper-secondary school concerning their choice of tertiary education were examined using quantitative EV-MBC surveys and repeated qualitative interviews. This article presents the analyses of three cases in detail. The analytical focus was whether the factors indicated in the EV-MBC model could be used to detect significant changes in the students' educational choice processes. An important finding was that the quantitative EV-MBC surveys and the qualitative interviews gave quite different results concerning the students' considerations about the choice of tertiary education, and that significant changes in the students' reflections were not captured by the factors of the EV-MBC model. This questions the validity of the EV-MBC surveys. Moreover, the quantitative factors from the EV-MBC model did not sufficiently explain students' dynamical educational choice processes where students in parallel considered several different potential educational trajectories. We therefore call for further studies of the EV-MBC model's use in describing longitudinal choice processes and especially in investigating significant changes.
Agent-based modeling: case study in cleavage furrow models
Mogilner, Alex; Manhart, Angelika
2016-01-01
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328
Basal exon skipping and genetic pleiotropy: A predictive model of disease pathogenesis.
Drivas, Theodore G; Wojno, Adam P; Tucker, Budd A; Stone, Edwin M; Bennett, Jean
2015-06-10
Genetic pleiotropy, the phenomenon by which mutations in the same gene result in markedly different disease phenotypes, has proven difficult to explain with traditional models of disease pathogenesis. We have developed a model of pleiotropic disease that explains, through the process of basal exon skipping, how different mutations in the same gene can differentially affect protein production, with the total amount of protein produced correlating with disease severity. Mutations in the centrosomal protein of 290 kDa (CEP290) gene are associated with a spectrum of phenotypically distinct human diseases (the ciliopathies). Molecular biologic examination of CEP290 transcript and protein expression in cells from patients carrying CEP290 mutations, measured by quantitative polymerase chain reaction and Western blotting, correlated with disease severity and corroborated our model. We show that basal exon skipping may be the mechanism underlying the disease pleiotropy caused by CEP290 mutations. Applying our model to a different disease gene, CC2D2A (coiled-coil and C2 domains-containing protein 2A), we found that the same correlations held true. Our model explains the phenotypic diversity of two different inherited ciliopathies and may establish a new model for the pathogenesis of other pleiotropic human diseases. Copyright © 2015, American Association for the Advancement of Science.
When More of A Doesn't Result in More of B: Physics Experiments with a Surprising Outcome
ERIC Educational Resources Information Center
Tsakmaki, Paraskevi; Koumaras, Panagiotis
2016-01-01
Science education research has shown that students use causal reasoning, particularly the model "agent--instrument--object," to explain or predict the outcome of many natural situations. Students' reasoning seems to be based on a small set of intuitive rules. One of these rules quantitatively correlates the outcome of an experiment…
ERIC Educational Resources Information Center
Thien, Lei Mee; Razak, Nordin Abd
2013-01-01
This study aims to examine an untested research model that explains the direct- and indirect influences of Academic Coping, Friendship Quality, and Student Engagement on Student Quality of School Life. This study employed the quantitative-based cross-sectional survey method. The sample consisted of 2400 Malaysian secondary Form Four students…
Hutnak, M.; Hurwitz, S.; Ingebritsen, S.E.; Hsieh, P.A.
2009-01-01
Ground surface displacement (GSD) in large calderas is often interpreted as resulting from magma intrusion at depth. Recent advances in geodetic measurements of GSD, notably interferometric synthetic aperture radar, reveal complex and multifaceted deformation patterns that often require complex source models to explain the observed GSD. Although hydrothermal fluids have been discussed as a possible deformation agent, very few quantitative studies addressing the effects of multiphase flow on crustal mechanics have been attempted. Recent increases in the power and availability of computing resources allow robust quantitative assessment of the complex time-variant thermal interplay between aqueous fluid flow and crustal deformation. We carry out numerical simulations of multiphase (liquid-gas), multicomponent (H2O-CO2) hydrothermal fluid flow and poroelastic deformation using a range of realistic physical parameters and processes. Hydrothermal fluid injection, circulation, and gas formation can generate complex, temporally and spatially varying patterns of GSD, with deformation rates, magnitudes, and geometries (including subsidence) similar to those observed in several large calderas. The potential for both rapid and gradual deformation resulting from magma-derived fluids suggests that hydrothermal fluid circulation may help explain deformation episodes at calderas that have not culminated in magmatic eruption.
DIRECTIONAL CULTURAL CHANGE BY MODIFICATION AND REPLACEMENT OF MEMES
Cardoso, Gonçalo C.; Atwell, Jonathan W.
2017-01-01
Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations—modification of memes—to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. PMID:20722726
Reinterpreting Comorbidity: A Model-Based Approach to Understanding and Classifying Psychopathology
Krueger, Robert F.; Markon, Kristian E.
2008-01-01
Comorbidity has presented a persistent puzzle for psychopathology research. We review recent literature indicating that the puzzle of comorbidity is being solved by research fitting explicit quantitative models to data on comorbidity. We present a meta-analysis of a liability spectrum model of comorbidity, in which specific mental disorders are understood as manifestations of latent liability factors that explain comorbidity by virtue of their impact on multiple disorders. Nosological, structural, etiological, and psychological aspects of this liability spectrum approach to understanding comorbidity are discussed. PMID:17716066
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
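The thermodynamic constraint described above reduces, for a single reaction, to evaluating the transformed Gibbs energy under the measured metabolite concentrations and using its sign to fix the feasible flux direction. The ΔG°′ value and concentrations below are placeholders, not numbers from the review:

import numpy as np

R = 8.314e-3   # kJ mol^-1 K^-1
T = 303.15     # K; assumed cultivation temperature, for illustration only

def delta_g(delta_g0_prime, substrate_conc, product_conc):
    """ΔG = ΔG°' + RT ln(Q) for the given concentrations (in M)."""
    q = np.prod(product_conc) / np.prod(substrate_conc)
    return delta_g0_prime + R * T * np.log(q)

# Hypothetical reaction A -> B with ΔG°' = +5 kJ/mol
dg = delta_g(5.0, substrate_conc=[1e-2], product_conc=[1e-5])
print(f"ΔG = {dg:.1f} kJ/mol ->", "forward direction feasible" if dg < 0 else "forward direction infeasible")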
Caballero-Lima, David; Kaneva, Iliyana N.; Watton, Simon P.
2013-01-01
In the hyphal tip of Candida albicans we have made detailed quantitative measurements of (i) exocyst components, (ii) Rho1, the regulatory subunit of (1,3)-β-glucan synthase, (iii) Rom2, the specialized guanine-nucleotide exchange factor (GEF) of Rho1, and (iv) actin cortical patches, the sites of endocytosis. We use the resulting data to construct and test a quantitative 3-dimensional model of fungal hyphal growth based on the proposition that vesicles fuse with the hyphal tip at a rate determined by the local density of exocyst components. Enzymes such as (1,3)-β-glucan synthase thus embedded in the plasma membrane continue to synthesize the cell wall until they are removed by endocytosis. The model successfully predicts the shape and dimensions of the hyphae, provided that endocytosis acts to remove cell wall-synthesizing enzymes at the subapical bands of actin patches. Moreover, a key prediction of the model is that the distribution of the synthase is substantially broader than the area occupied by the exocyst. This prediction is borne out by our quantitative measurements. Thus, although the model highlights detailed issues that require further investigation, in general terms the pattern of tip growth of fungal hyphae can be satisfactorily explained by a simple but quantitative model rooted within the known molecular processes of polarized growth. Moreover, the methodology can be readily adapted to model other forms of polarized growth, such as that which occurs in plant pollen tubes. PMID:23666623
Renoult, J P; Thomann, M; Schaefer, H M; Cheptou, P-O
2013-11-01
Even though the importance of selection for trait evolution is well established, we still lack a functional understanding of the mechanisms underlying phenotypic selection. Because animals necessarily use their sensory system to perceive phenotypic traits, the model of sensory bias assumes that sensory systems are the main determinant of signal evolution. Yet, it has remained poorly known how sensory systems contribute to shaping the fitness surface of selected individuals. In a greenhouse experiment, we quantified the strength and direction of selection on floral coloration in a population of cornflowers exposed to bumblebees as unique pollinators during 4 days. We detected significant selection on the chromatic and achromatic (brightness) components of floral coloration. We then studied whether these patterns of selection are explicable by accounting for the visual system of the pollinators. Using data on bumblebee colour vision, we first showed that bumblebees should discriminate among quantitative colour variants. The observed selection was then compared to the selection predicted by psychophysical models of bumblebee colour vision. The achromatic but not the chromatic channel of the bumblebee's visual system could explain the observed pattern of selection. These results highlight that (i) pollinators can select quantitative variation in floral coloration and could thus account for a gradual evolution of flower coloration, and (ii) stimulation of the visual system represents, at least partly, a functional mechanism potentially explaining pollinators' selection on floral colour variants. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.
Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut
NASA Astrophysics Data System (ADS)
Hwa, Terence
The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer the hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.
The evolution of climate. [climatic effects of polar wandering and continental drift
NASA Technical Reports Server (NTRS)
Donn, W. L.; Shaw, D.
1975-01-01
A quantitative evaluation is made of the climatic effects of polar wandering plus continental drift in order to determine whether this mechanism alone could explain the deterioration of climate that occurred from the warmth of Mesozoic time to the ice age conditions of the late Cenozoic. To investigate the effect of the changing geography of the past on climate, Adem's thermodynamic model was selected. The application of the model is discussed and preliminary results are given.
de Croon, E M; Blonk, R; de Zwart, B C H; Frings-Dresen, M; Broersen, J
2002-01-01
Objectives: Building on Karasek's model of job demands and control (JD-C model), this study examined the effects of job control, quantitative workload, and two occupation specific job demands (physical demands and supervisor demands) on fatigue and job dissatisfaction in Dutch lorry drivers. Methods: From 1181 lorry drivers (adjusted response 63%) self reported information was gathered by questionnaire on the independent variables (job control, quantitative workload, physical demands, and supervisor demands) and the dependent variables (fatigue and job dissatisfaction). Stepwise multiple regression analyses were performed to examine the main effects of job demands and job control and the interaction effect between job control and job demands on fatigue and job dissatisfaction. Results: The inclusion of physical and supervisor demands in the JD-C model explained a significant amount of variance in fatigue (3%) and job dissatisfaction (7%) over and above job control and quantitative workload. Moreover, in accordance with Karasek's interaction hypothesis, job control buffered the positive relation between quantitative workload and job dissatisfaction. Conclusions: Despite methodological limitations, the results suggest that the inclusion of (occupation) specific job control and job demand measures is a fruitful elaboration of the JD-C model. The occupation specific JD-C model gives occupational stress researchers better insight into the relation between the psychosocial work environment and wellbeing. Moreover, the occupation specific JD-C model may give practitioners more concrete and useful information about risk factors in the psychosocial work environment. Therefore, this model may provide points of departure for effective stress reducing interventions at work. PMID:12040108
de Croon, E M; Blonk, R W B; de Zwart, B C H; Frings-Dresen, M H W; Broersen, J P J
2002-06-01
Building on Karasek's model of job demands and control (JD-C model), this study examined the effects of job control, quantitative workload, and two occupation specific job demands (physical demands and supervisor demands) on fatigue and job dissatisfaction in Dutch lorry drivers. From 1181 lorry drivers (adjusted response 63%) self reported information was gathered by questionnaire on the independent variables (job control, quantitative workload, physical demands, and supervisor demands) and the dependent variables (fatigue and job dissatisfaction). Stepwise multiple regression analyses were performed to examine the main effects of job demands and job control and the interaction effect between job control and job demands on fatigue and job dissatisfaction. The inclusion of physical and supervisor demands in the JD-C model explained a significant amount of variance in fatigue (3%) and job dissatisfaction (7%) over and above job control and quantitative workload. Moreover, in accordance with Karasek's interaction hypothesis, job control buffered the positive relation between quantitative workload and job dissatisfaction. Despite methodological limitations, the results suggest that the inclusion of (occupation) specific job control and job demand measures is a fruitful elaboration of the JD-C model. The occupation specific JD-C model gives occupational stress researchers better insight into the relation between the psychosocial work environment and wellbeing. Moreover, the occupation specific JD-C model may give practitioners more concrete and useful information about risk factors in the psychosocial work environment. Therefore, this model may provide points of departure for effective stress reducing interventions at work.
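A hedged sketch of the stepwise analysis described in the two entries above, using the statsmodels formula interface: core JD-C predictors first, then the occupation-specific demands, then the demand-control interaction. The data frame and column names (fatigue, control, workload, physical, supervisor) are hypothetical stand-ins for the questionnaire scales:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1181                                   # sample size mirroring the study; the data themselves are simulated
df = pd.DataFrame({name: rng.normal(0.0, 1.0, n)
                   for name in ["control", "workload", "physical", "supervisor"]})
df["fatigue"] = 0.3 * df["workload"] - 0.2 * df["control"] + rng.normal(0.0, 1.0, n)

m1 = smf.ols("fatigue ~ control + workload", data=df).fit()                           # step 1: JD-C core
m2 = smf.ols("fatigue ~ control + workload + physical + supervisor", data=df).fit()   # step 2: specific demands
m3 = smf.ols("fatigue ~ control * workload + physical + supervisor", data=df).fit()   # step 3: interaction
print("R^2 by step:", round(m1.rsquared, 3), round(m2.rsquared, 3), round(m3.rsquared, 3))

The increment in R^2 from step 1 to step 2 corresponds to the variance explained over and above job control and quantitative workload reported in the abstracts.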
High upward fluxes of formic acid from a boreal forest canopy
Schobesberger, Siegfried; Lopez-Hilfiker, Felipe D.; Taipale, Ditte; ...
2016-08-14
Eddy covariance fluxes of formic acid, HCOOH, were measured over a boreal forest canopy in spring/summer 2014. The HCOOH fluxes were bidirectional but mostly upward during daytime, in contrast to studies elsewhere that reported mostly downward fluxes. Downward flux episodes were explained well by modeled dry deposition rates. The sum of net observed flux and modeled dry deposition yields an upward “gross flux” of HCOOH, which could not be quantitatively explained by literature estimates of direct vegetative/soil emissions nor by efficient chemical production from other volatile organic compounds, suggesting missing or greatly underestimated HCOOH sources in the boreal ecosystem. Here, we implemented a vegetative HCOOH source into the GEOS-Chem chemical transport model to match our derived gross flux and evaluated the updated model against airborne and spaceborne observations. Model biases in the boundary layer were substantially reduced based on this revised treatment, but biases in the free troposphere remain unexplained.
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
VLF wave growth and discrete emission triggering in the magnetosphere - A feedback model
NASA Technical Reports Server (NTRS)
Helliwell, R. A.; Inan, U. S.
1982-01-01
A simple nonlinear feedback model is presented to explain VLF wave growth and emission triggering observed in VLF transmission experiments. The model is formulated in terms of the interaction of electrons with a slowly varying wave in an inhomogeneous medium as in an unstable feedback amplifier with a delay line; constant frequency oscillations are generated on the magnetic equator, while risers and fallers are generated on the downstream and upstream sides of the equator, respectively. Quantitative expressions are obtained for the stimulated radiation produced by energy exchanged between energetic electrons and waves by Doppler-shifted cyclotron resonance, and feedback between the stimulated radiation and the phase bunched currents is incorporated in terms of a two-port discrete time model. The resulting model is capable of explaining the observed temporal growth and saturation effects, phase advance, retardation or frequency shift during growth in the context of a single parameter depending on the energetic particle distribution function, as well as pretermination triggering.
Director gliding in a nematic liquid crystal layer: Quantitative comparison with experiments
NASA Astrophysics Data System (ADS)
Mema, E.; Kondic, L.; Cummings, L. J.
2018-03-01
The interaction between nematic liquid crystals and polymer-coated substrates may lead to slow reorientation of the easy axis (so-called "director gliding") when a prolonged external field is applied. We consider the experimental evidence of zenithal gliding observed by Joly et al. [Phys. Rev. E 70, 050701 (2004), 10.1103/PhysRevE.70.050701] and Buluy et al. [J. Soc. Inf. Disp. 14, 603 (2006), 10.1889/1.2235686] as well as azimuthal gliding observed by S. Faetti and P. Marianelli [Liq. Cryst. 33, 327 (2006), 10.1080/02678290500512227], and we present a simple, physically motivated model that captures the slow dynamics of gliding, both in the presence of an electric field and after the electric field is turned off. We make a quantitative comparison of our model results and the experimental data and conclude that our model explains the gliding evolution very well.
Explaining quantum correlations through evolution of causal models
NASA Astrophysics Data System (ADS)
Harper, Robin; Chapman, Robert J.; Ferrie, Christopher; Granade, Christopher; Kueng, Richard; Naoumenko, Daniel; Flammia, Steven T.; Peruzzo, Alberto
2017-04-01
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
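A generic sketch of the evolutionary-search idea (not the authors' encoding, fitness function, or operators): candidate causal models are represented as parameter vectors, scored by how badly they match target correlators plus a penalty on the strength of the nonlocal influence, and improved by truncation selection with Gaussian mutation. All numbers below are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.7, -0.7, 0.7, 0.7])   # hypothetical observed correlators to reproduce
LAMBDA = 0.5                               # weight of the nonlocality penalty

def cost(genome):
    predictions, nonlocal_strength = genome[:4], abs(genome[4])
    return np.sum((predictions - target) ** 2) + LAMBDA * nonlocal_strength   # lower is better

pop = rng.normal(0.0, 1.0, size=(50, 5))
for _ in range(200):
    scores = np.array([cost(g) for g in pop])
    parents = pop[np.argsort(scores)[:10]]                 # keep the 10 best individuals
    pop = np.repeat(parents, 5, axis=0)                    # each parent leaves 5 offspring
    pop = pop + rng.normal(0.0, 0.05, size=pop.shape)      # Gaussian mutation

best = min(pop, key=cost)
print("best cost:", round(cost(best), 4))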
Magnitude of the magnetic exchange interaction in the heavy-fermion antiferromagnet CeRhIn 5
Das, Pinaki; Lin, S. -Z.; Ghimire, N. J.; ...
2014-12-08
We have used high-resolution neutron spectroscopy experiments to determine the complete spin wave spectrum of the heavy-fermion antiferromagnet CeRhIn₅. The spin wave dispersion can be quantitatively reproduced with a simple frustrated J₁-J₂ model that also naturally explains the magnetic spin-spiral ground state of CeRhIn₅ and yields a dominant in-plane nearest-neighbor magnetic exchange constant J₀=0.74(3) meV. Our results lead the way to a quantitative understanding of the rich low-temperature phase diagram of the prominent CeTIn₅ (T = Co, Rh, Ir) class of heavy-fermion materials.
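For orientation, a frustrated J1–J2 Heisenberg Hamiltonian of the general type invoked above can be written schematically as

\[ \mathcal{H} \;=\; J_1 \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j \;+\; J_2 \sum_{\langle\langle i,j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j , \]

with J_1 coupling nearest neighbours and J_2 next-nearest neighbours; competition between the two terms frustrates simple collinear order and can stabilize spiral states. The exact bond geometry and any interlayer couplings used for CeRhIn₅ are not reproduced here.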
A transformative model for undergraduate quantitative biology education.
Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.
A Transformative Model for Undergraduate Quantitative Biology Education
Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949
Bocedi, Greta; Reid, Jane M
2015-01-01
Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405
Common ecology quantifies human insurgency.
Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F
2009-12-17
Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour.
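A minimal sketch of the kind of power-law check underlying the casualty-size claim, using the standard continuous maximum-likelihood estimator for the exponent above a chosen x_min (synthetic event sizes, not the study's data or its fitting pipeline):

import numpy as np

rng = np.random.default_rng(0)
x_min, alpha_true = 1.0, 2.5
u = rng.random(10_000)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling of a continuous power law

alpha_hat = 1.0 + len(x) / np.sum(np.log(x / x_min))    # Hill/MLE estimator for the exponent
print(f"estimated exponent: {alpha_hat:.2f} (true value {alpha_true})")

A full analysis would also estimate x_min and test goodness of fit, for example with a Kolmogorov–Smirnov statistic.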
ERIC Educational Resources Information Center
Gündogdu, Cemal; Aygün, Yalin; Ilkim, Mehmet; Tüfekçi, Sakir
2018-01-01
In this research, quantitative findings and qualitative follow-up themes were used to quantify, conceptualize and finally try to explain the impact of disabled children's engagement with physical activity on their parents' smartphone addiction levels. An initial phase of quantitative investigation was conducted with 116 parents. Analyses of…
The Source of the Symbolic Numerical Distance and Size Effects
Krajcsi, Attila; Lengyel, Gábor; Kojouharova, Petia
2016-01-01
Human number understanding is thought to rely on the analog number system (ANS), working according to Weber’s law. We propose an alternative account, suggesting that symbolic mathematical knowledge is based on a discrete semantic system (DSS), a representation that stores values in a semantic network, similar to the mental lexicon or to a conceptual network. Here, focusing on the phenomena of numerical distance and size effects in comparison tasks, first we discuss how a DSS model could explain these numerical effects. Second, we demonstrate that the DSS model can give quantitatively as appropriate a description of the effects as the ANS model. Finally, we show that symbolic numerical size effect is mainly influenced by the frequency of the symbols, and not by the ratios of their values. This last result suggests that numerical distance and size effects cannot be caused by the ANS, while the DSS model might be the alternative approach that can explain the frequency-based size effect. PMID:27917139
Qualitative research and the epidemiological imagination: a vital relationship.
Popay, J
2003-01-01
This paper takes as its starting point the assumption that the 'Epidemiological Imagination' has a central role to play in the future development of policies and practice to improve population health and reduce health inequalities within and between states, but suggests that by neglecting the contribution that qualitative research can make, epidemiology is failing to deliver this potential. The paper briefly considers what qualitative research is, touching on epistemological questions (what type of "knowledge" is generated) and questions of method (what approaches to data collection, analysis and interpretation are involved). Following this, the paper presents two different models of the relationship between qualitative and quantitative research. The enhancement model (which assumes that qualitative research findings add something extra to the findings of quantitative research) suggests three related "roles" for qualitative research: generating hypotheses to be tested by quantitative research, helping to construct more sophisticated measures of social phenomena, and explaining unexpected findings from quantitative research. In contrast, the Epistemological Model suggests that qualitative research is equal to but different from quantitative research, making a unique contribution through: researching parts other research approaches cannot reach, increasing understanding by adding conceptual and theoretical depth to knowledge, shifting the balance of power between researchers and researched, and challenging traditional epidemiological ways of "knowing" the social world. The paper illustrates these different types of contributions with examples of qualitative research and finally discusses ways in which the "trustworthiness" of qualitative research can be assessed.
Adversity magnifies the importance of social information in decision-making.
Pérez-Escudero, Alfonso; de Polavieja, Gonzalo G
2017-11-01
Decision-making theories explain animal behaviour, including human behaviour, as a response to estimations about the environment. In the case of collective behaviour, they have given quantitative predictions of how animals follow the majority option. However, they have so far failed to explain that in some species and contexts social cohesion increases when conditions become more adverse (i.e. individuals choose the majority option with higher probability when the estimated quality of all available options decreases). We have found that this failure is due to modelling simplifications that aided analysis, like low levels of stochasticity or the assumption that only one choice is the correct one. We provide a more general but simple geometric framework to describe optimal or suboptimal decisions in collectives that gives insight into three different mechanisms behind this effect. The three mechanisms have in common that the private information acts as a gain factor to social information: a decrease in the privately estimated quality of all available options increases the impact of social information, even when social information itself remains unchanged. This increase in the importance of social information makes it more likely that agents will follow the majority option. We show that these results quantitatively explain collective behaviour in fish and experiments of social influence in humans. © 2017 The Authors.
Stable Eigenmodes and Energy Dynamics in a Model of LAPD Turbulence
NASA Astrophysics Data System (ADS)
Friedman, Brett; Carter, T. A.; Umansky, M. V.
2011-10-01
A three-field Braginskii fluid model that semi-quantitatively predicts turbulent statistics in the Large Plasma Device (LAPD) at UCLA is analyzed. A 3D simulation of turbulence in LAPD using the BOUT++ fluid code is shown to reproduce experimental turbulent properties such as the frequency spectrum and correlation length with semi-qualitative and semi-quantitative accuracy. In an attempt to explain turbulent saturation in the simulation, equations for the energy dynamics are derived and applied to the results. The degree to which stable linear drift wave eigenmodes draw energy from the system and the effect that zonal flows have on transferring energy to stable eigenmode branches are explored. It is also shown that zonal flows drive Kelvin-Helmholtz flute modes, which come to dominate the energy dynamics in the quasi-steady-state regime.
The fluid mechanics of thrombus formation
NASA Technical Reports Server (NTRS)
1972-01-01
Experimental data are presented for the growth of thrombi (blood clots) in a stagnation point flow of fresh blood. Thrombus shape, size and structure are shown to depend on local flow conditions. The evolution of a thrombus is described in terms of a physical model that includes platelet diffusion, a platelet aggregation mechanism, and diffusion and convection of the chemical species responsible for aggregation. Diffusion-controlled and convection-controlled regimes are defined by flow parameters and thrombus location, and the characteristic growth pattern in each regime is explained. Quantitative comparisons with an approximate theoretical model are presented, and a more general model is formulated.
Coulomb Blockade in a Two-Dimensional Conductive Polymer Monolayer.
Akai-Kasaya, M; Okuaki, Y; Nagano, S; Mitani, T; Kuwahara, Y
2015-11-06
Electronic transport was investigated in poly(3-hexylthiophene-2,5-diyl) monolayers. At low temperatures, nonlinear behavior was observed in the current-voltage characteristics, and a nonzero threshold voltage appeared that increased with decreasing temperature. The current-voltage characteristics could be best fitted using a power law. These results suggest that the nonlinear conductivity can be explained using a Coulomb blockade (CB) mechanism. A model is proposed in which an isotropic extended charge state exists, as predicted by quantum calculations, and percolative charge transport occurs within an array of small conductive islands. Using quantitatively evaluated capacitance values for the islands, this model was found to be capable of explaining the observed experimental data. It is, therefore, suggested that percolative charge transport based on the CB effect is a significant factor giving rise to nonlinear conductivity in organic materials.
The Role of Excitons on Light Amplification in Lead Halide Perovskites.
Lü, Quan; Wei, Haohan; Sun, Wenzhao; Wang, Kaiyang; Gu, Zhiyuan; Li, Jiankai; Liu, Shuai; Xiao, Shumin; Song, Qinghai
2016-12-01
The role of excitons in the light amplification of lead halide perovskites has been explored. Unlike the photoluminescence, the intensity of amplified spontaneous emission is partially suppressed at low temperature. Detailed analysis and experiments show that this inhibition is attributed to the existence of excitons, and a quantitative model has been built to explain the experimental observations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
On the velocity distribution of ion jets during substorm recovery
NASA Technical Reports Server (NTRS)
Birn, J.; Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Paschmann, G.
1981-01-01
The velocity distribution of earthward jetting ions that are observed principally during substorm recovery by satellites at approximately 15-35 earth radii in the magnetotail is quantitatively compared with two different theoretical models - the 'adiabatic deformation' of an initially flowing Maxwellian moving into higher magnetic field strength (model A) and the field-aligned electrostatic acceleration of an initially nonflowing isotropic Maxwellian including adiabatic deformation effects (model B). The assumption is made that the ions are protons or, more generally, that they consist of only one species. It is found that both models can explain the often observed concave-convex shape of isodensity contours of the distribution function.
Range and energetics of charge hopping in organic semiconductors
NASA Astrophysics Data System (ADS)
Abdalla, Hassan; Zuo, Guangzheng; Kemerink, Martijn
2017-12-01
The recent upswing in attention for the thermoelectric properties of organic semiconductors (OSCs) adds urgency to the need for a quantitative description of the range and energetics of hopping transport in organic semiconductors under relevant circumstances, i.e., around room temperature (RT). In particular, the degree to which hops beyond the nearest neighbor must be accounted for at RT is still largely unknown. Here, measurements of charge and energy transport in doped OSCs are combined with analytical modeling to reach the univocal conclusion that variable-range hopping is the proper description in a large class of disordered OSC at RT. To obtain quantitative agreement with experiment, one needs to account for the modification of the density of states by ionized dopants. These Coulomb interactions give rise to a deep tail of trap states that is independent of the material's initial energetic disorder. Insertion of this effect into a classical Mott-type variable-range hopping model allows one to give a quantitative description of temperature-dependent conductivity and thermopower measurements on a wide range of disordered OSCs. In particular, the model explains the commonly observed quasiuniversal power-law relation between the Seebeck coefficient and the conductivity.
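The abstract above invokes a Mott-type variable-range hopping description of conductivity. For reference, a minimal sketch of the standard Mott VRH law is given below; the functional form is textbook material, but the parameter values are purely illustrative and are not taken from the paper.

    import numpy as np

    def mott_vrh_conductivity(T, sigma0, T0, d=3):
        """Mott variable-range hopping: sigma = sigma0 * exp(-(T0/T)^(1/(d+1))) in d dimensions."""
        return sigma0 * np.exp(-(T0 / T) ** (1.0 / (d + 1)))

    T = np.linspace(200.0, 350.0, 7)                      # around room temperature, in K
    sigma = mott_vrh_conductivity(T, sigma0=1e2, T0=1e6)  # illustrative parameters
    print(np.round(sigma, 4))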
Ljoså, Cathrine Haugene; Tyssen, Reidar; Lau, Bjørn
2011-11-01
This study aimed to investigate the association between individual and psychosocial work factors and mental distress among offshore shift workers in the Norwegian petroleum industry. All 2406 employees of a large Norwegian oil and gas company, who worked offshore during a two-week period in August 2006, were invited to participate in the web-based survey. Completed questionnaires were received from 1336 employees (56% response rate). The outcome variable was mental distress, assessed with a shortened version of the Hopkins Symptom Checklist (HSCL-5). The following individual factors were adjusted for: age, gender, marital status, and shift work locus of control. Psychosocial work factors included: night work, demands, control and support, and shift work-home interference. The level of mental distress was higher among men than women. In the adjusted regression model, the following were associated with mental distress: (i) high scores on quantitative demands, (ii) low level of support, and (iii) high level of shift work-home interference. Psychosocial work factors explained 76% of the total explained variance (adjusted R² = 0.21) in the final adjusted model. Psychosocial work factors, such as quantitative demands, support, and shift work-home interference were independently associated with mental distress. Shift schedules were only univariately associated with mental distress.
Directional cultural change by modification and replacement of memes.
Cardoso, Gonçalo C; Atwell, Jonathan W
2011-01-01
Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations--modification of memes--to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.
Lagunes Espinoza, Luz Del Carmen; Julier, Bernadette
2013-02-01
Forage quality combines traits related to protein content and energy value. High-quality forages contribute to increase farm autonomy by reducing the use of energy or protein-rich supplements. Genetic analyses in forage legume species are complex because of their tetraploidy and allogamy. Indeed, no genetic studies of quality have been published at the molecular level on these species. Nonetheless, mapping populations of the model species M. truncatula can be used to detect QTL for forage quality. Here, we studied a crossing design involving four connected populations of M. truncatula. Each population was composed of ca. 200 recombinant inbred lines (RIL). We sought population-specific QTL and QTL explaining the whole design variation. We grew parents and RIL in a greenhouse for 2 or 3 seasons and analysed plants for chemical composition of vegetative organs (protein content, digestibility, leaf-to-stem ratio) and stem histology (stem cross-section area, tissue proportions). Over the four populations and all the traits, QTL were found on all chromosomes. Among these QTL, only four genomic regions, on chromosomes 1, 3, 7 and 8, contributed to explaining the variations in the whole crossing design. Surprisingly, we found that quality QTL were located in the same genomic regions as morphological QTL. We thus confirmed the quantitative inheritance of quality traits and tight relationships between quality and morphology. Our findings could be explained by a co-location of genes involved in quality and morphology. This study will help to detect candidate genes involved in quantitative variation for quality in forage legume species.
An Analytical Diffusion–Expansion Model for Forbush Decreases Caused by Flux Ropes
NASA Astrophysics Data System (ADS)
Dumbović, Mateja; Heber, Bernd; Vršnak, Bojan; Temmer, Manuela; Kirin, Anamarija
2018-06-01
We present an analytical diffusion–expansion Forbush decrease (FD) model ForbMod, which is based on the widely used approach of an initially empty, closed magnetic structure (i.e., flux rope) that fills up slowly with particles by perpendicular diffusion. The model is restricted to explaining only the depression caused by the magnetic structure of the interplanetary coronal mass ejection (ICME). We use remote CME observations and a 3D reconstruction method (the graduated cylindrical shell method) to constrain initial boundary conditions of the FD model and take into account CME evolutionary properties by incorporating flux rope expansion. Several flux rope expansion modes are considered, which can lead to different FD characteristics. In general, the model is qualitatively in agreement with observations, whereas quantitative agreement depends on the diffusion coefficient and the expansion properties (interplay of the diffusion and expansion). A case study was performed to explain the FD observed on 2014 May 30. The observed FD was fitted quite well by ForbMod for all expansion modes using only the diffusion coefficient as a free parameter, where the diffusion parameter was found to correspond to an expected range of values. Our study shows that, in general, the model is able to explain the global properties of an FD caused by a flux rope and can thus be used to help understand the underlying physics in case studies.
Qualitative versus quantitative methods in psychiatric research.
Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S
2012-01-01
Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in earlier phases of research, qualitative methods can be employed in questionnaire design, diagnostic criteria establishment, feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be confirmed statistically using quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.
A quantitative study of the benefits of co-regulation using the spoIIA operon as an example
Iber, Dagmar
2006-01-01
The distribution of most genes is not random, and functionally linked genes are often found in clusters. Several theories have been put forward to explain the emergence and persistence of operons in bacteria. Careful analysis of genomic data favours the co-regulation model, where gene organization into operons is driven by the benefits of coordinated gene expression and regulation. Direct evidence that coexpression increases the individual's fitness enough to ensure operon formation and maintenance is, however, still lacking. Here, a previously described quantitative model of the network that controls the transcription factor σF during sporulation in Bacillus subtilis is employed to quantify the benefits arising from both organization of the sporulation genes into the spoIIA operon and from translational coupling. The analysis shows that operon organization, together with translational coupling, is important because of the inherent stochastic nature of gene expression, which skews the ratios between protein concentrations in the absence of co-regulation. The predicted impact of different forms of gene regulation on fitness and survival agrees quantitatively with published sporulation efficiencies. PMID:16924264
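The argument above is that stochastic gene expression skews the ratio between protein concentrations unless the genes are co-regulated. The toy simulation below (not the paper's σF network model) illustrates the point: when two genes are driven by independent noisy transcription signals, the ratio of their products varies widely across cells, whereas an operon-like shared signal largely cancels out of the ratio. All rates are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, mean_mrna, burst = 10_000, 20.0, 5.0   # illustrative numbers

    # Extrinsic (promoter-level) noise modeled as gamma-distributed transcription signals.
    def noisy_signal(size):
        return rng.gamma(shape=mean_mrna / burst, scale=burst, size=size)

    # Independent promoters: each gene sees its own fluctuating signal.
    a_ind = rng.poisson(noisy_signal(n_cells))
    b_ind = rng.poisson(noisy_signal(n_cells))

    # Operon-like co-regulation: both genes see the same fluctuating signal.
    shared = noisy_signal(n_cells)
    a_op = rng.poisson(shared)
    b_op = rng.poisson(shared)

    def ratio_cv(a, b):
        """Coefficient of variation of the per-cell product ratio."""
        r = (a + 1.0) / (b + 1.0)          # +1 avoids division by zero
        return r.std() / r.mean()

    print("CV of ratio, independent genes:   ", round(ratio_cv(a_ind, b_ind), 3))
    print("CV of ratio, operon-coupled genes:", round(ratio_cv(a_op, b_op), 3))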
Fundamentals and Recent Developments in Approximate Bayesian Computation
Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka
2017-01-01
Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922
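A minimal sketch of the rejection-ABC algorithm that the review describes is given below. The toy simulator (Poisson counts with an unknown rate), the uniform prior, the summary statistic and the tolerance are illustrative choices, not examples from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    def simulator(lam, n=50):
        """Toy stochastic model: n i.i.d. Poisson counts with rate lam."""
        return rng.poisson(lam, n)

    def summary(data):
        return data.mean()

    observed = simulator(lam=4.0)        # stands in for the observed data set
    obs_stat = summary(observed)

    # Rejection ABC: sample from the prior, simulate, and keep parameters whose
    # simulated summary statistic lies within a tolerance of the observed one.
    def rejection_abc(n_draws=50_000, tol=0.1):
        prior_draws = rng.uniform(0.0, 10.0, n_draws)    # uniform prior on the rate
        accepted = []
        for lam in prior_draws:
            if abs(summary(simulator(lam)) - obs_stat) < tol:
                accepted.append(lam)
        return np.array(accepted)

    posterior = rejection_abc()
    print(posterior.mean(), posterior.std(), len(posterior))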
Quantitative transmission Raman spectroscopy of pharmaceutical tablets and capsules.
Johansson, Jonas; Sparén, Anders; Svensson, Olof; Folestad, Staffan; Claybourn, Mike
2007-11-01
Quantitative analysis of pharmaceutical formulations using the new approach of transmission Raman spectroscopy has been investigated. For comparison, measurements were also made in conventional backscatter mode. The experimental setup consisted of a Raman probe-based spectrometer with 785 nm excitation for measurements in backscatter mode. In transmission mode the same system was used to detect the Raman scattered light, while an external diode laser of the same type was used as the excitation source. Quantitative partial least squares models were developed for both measurement modes. The results for tablets show that the prediction error for an independent test set was lower for the transmission measurements, with a relative root mean square error of about 2.2% compared with 2.9% for the backscatter mode. Furthermore, the models were simpler in the transmission case, for which only a single partial least squares (PLS) component was required to explain the variation. The main reason for the improvement using the transmission mode is a more representative sampling of the tablets compared with the backscatter mode. Capsules containing mixtures of pharmaceutical powders were also assessed, by transmission only. The quantitative results for the capsules' contents were good, with a prediction error of 3.6% w/w for an independent test set. The advantage of transmission Raman over backscatter Raman spectroscopy has been demonstrated for quantitative analysis of pharmaceutical formulations, and the prospects for reliable, lean calibrations for pharmaceutical analysis are discussed.
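A hedged sketch of the kind of partial least squares calibration described above is shown below, using scikit-learn on synthetic "spectra". The single-latent-variable model mirrors the abstract's observation that one PLS component sufficed in transmission mode; all data and numbers here are synthetic and illustrative.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    # Synthetic "spectra": one concentration-dependent band plus noise.
    n_samples, n_wavenumbers = 80, 300
    conc = rng.uniform(80, 120, n_samples)     # content, % of target (illustrative)
    band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 10.0) ** 2)
    X = np.outer(conc, band) + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

    X_tr, X_te, y_tr, y_te = train_test_split(X, conc, test_size=0.25, random_state=0)

    pls = PLSRegression(n_components=1)        # a single latent variable
    pls.fit(X_tr, y_tr)
    y_pred = pls.predict(X_te).ravel()

    rmsep = np.sqrt(np.mean((y_pred - y_te) ** 2))
    print(f"Relative RMSEP: {100 * rmsep / y_te.mean():.1f}%")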
Contextual Advantage for State Discrimination
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.
2018-02-01
Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
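For reference, the Fokker-Planck (Kolmogorov forward) equation underlying the FPK model can be written in a standard one-dimensional form, with drift derived from a potential V(x) and diffusion rate sigma^2 (generic notation, not necessarily the paper's):

    \frac{\partial p(x,t)}{\partial t}
      = \frac{\partial}{\partial x}\left[ V'(x)\, p(x,t) \right]
      + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(x,t)}{\partial x^{2}},
    \qquad
    p_{\mathrm{st}}(x) \propto \exp\!\left( -\frac{2 V(x)}{\sigma^{2}} \right),

where the stationary density p_st plays the role of the macroevolutionary landscape over which the trait evolves.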
BMP4 density gradient in disk-shaped confinement
NASA Astrophysics Data System (ADS)
Bozorgui, Behnaz; Teimouri, Hamid; Kolomeisky, Anatoly B.
We present a quantitative model that explains the scaling of BMP4 gradients during gastrulation and the recent experimental observation that geometric confinement of human embryonic stem cells is sufficient to recapitulate much of germ layer patterning. Based on the assumption that the BMP4 diffusion rate is much smaller than that of its inhibitor molecules, our results confirm that the length scale which defines germ layer territories does not depend on system size.
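The claim that the patterning length scale is set by molecular parameters rather than colony size is what one would expect from a reaction-diffusion gradient whose spatial decay is controlled by diffusion and clearance of the fast-diffusing species. As a hedged, generic estimate (not the paper's full model), the decay length is

    \lambda = \sqrt{ D_{\mathrm{inh}} / k_{\mathrm{deg}} },

where D_inh is the inhibitor diffusion coefficient and k_deg its effective degradation (clearance) rate; lambda depends only on these molecular parameters, not on the size of the confined colony.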
Influence of field dependent critical current density on flux profiles in high Tc superconductors
NASA Technical Reports Server (NTRS)
Takacs, S.
1990-01-01
The field distribution for superconducting cylinders and slabs with field dependent critical current densities in combined DC and AC magnetic fields and the corresponding magnetic fluxes are calculated. It is shown that all features of experimental magnetic-field profile measurements can be explained in the framework of field dependent critical current density. Even the quantitative agreement between the experimental and theoretical results using Kim's model is very good.
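Kim's model, referred to above, takes the critical current density to fall off with the local field, commonly written as Jc(B) = Jc0 / (1 + |B|/B0). The sketch below integrates the critical-state relation dB/dx = -mu0*Jc(B) inward from the sample surface to obtain a flux profile for a slab; all parameter values are illustrative and not taken from the paper.

    import numpy as np

    mu0 = 4e-7 * np.pi
    Jc0, B0 = 1e9, 0.1          # illustrative: A/m^2 and T
    Ba = 0.2                    # applied field at the surface, T

    def kim_jc(B):
        """Kim model: critical current density decreasing with the local field."""
        return Jc0 / (1.0 + np.abs(B) / B0)

    # Integrate the critical-state equation dB/dx = -mu0 * Jc(B) inward from the surface.
    dx = 1e-7                   # 0.1 micron steps
    x, B = [0.0], [Ba]
    while B[-1] > 0.0:
        B_next = B[-1] - mu0 * kim_jc(B[-1]) * dx
        x.append(x[-1] + dx)
        B.append(max(B_next, 0.0))

    # For comparison, the Bean model (constant Jc0) would give Ba / (mu0 * Jc0).
    print(f"Penetration depth: {x[-1] * 1e6:.1f} micrometres")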
Champagne flutes and brandy snifters: modelling protostellar outflow-cloud chemical interfaces
NASA Astrophysics Data System (ADS)
Rollins, R. P.; Rawlings, J. M. C.; Williams, D. A.; Redman, M. P.
2014-10-01
A rich variety of molecular species has now been observed towards hot cores in star-forming regions and in the interstellar medium. An increasing body of evidence from millimetre interferometers suggests that many of these form at the interfaces between protostellar outflows and their natal molecular clouds. However, current models have remained unable to explain the origin of the observational bias towards wide-angled `brandy snifter' shaped outflows over narrower `champagne flute' shapes in carbon monoxide imaging. Furthermore, these wide-angled systems exhibit unusually high abundances of the molecular ion HCO+. We present results from a chemodynamic model of such regions where a rich chemistry arises naturally as a result of turbulent mixing between cold, dense molecular gas and the hot, ionized outflow material. The injecta drives a rich and rapid ion-neutral chemistry in qualitative and quantitative agreement with the observations. The observational bias towards wide-angled outflows is explained naturally by the geometry-dependent ion injection rate causing rapid dissociation of CO in the younger systems.
Wilts, Bodo D.; Michielsen, Kristel; De Raedt, Hans; Stavenga, Doekele G.
2014-01-01
Birds-of-paradise are nature’s prime examples of the evolution of color by sexual selection. Their brilliant, structurally colored feathers play a principal role in mating displays. The structural coloration of both the occipital and breast feathers of the bird-of-paradise Lawes’ parotia is produced by melanin rodlets arranged in layers, together acting as interference reflectors. Light reflection by the silvery colored occipital feathers is unidirectional as in a classical multilayer, but the reflection by the richly colored breast feathers is three-directional and extraordinarily complex. Here we show that the reflection properties of both feather types can be quantitatively explained by finite-difference time-domain modeling using realistic feather anatomies and experimentally determined refractive index dispersion values of keratin and melanin. The results elucidate the interplay between avian coloration and vision and indicate tuning of the mating displays to the spectral properties of the avian visual system. PMID:24591592
Hall effect analysis in irradiated silicon samples with different resistivities
NASA Astrophysics Data System (ADS)
Borchi, E.; Bruzzi, M.; Dezillie, B.; Lazanu, S.; Li, Z.; Pirollo, S.
1999-08-01
The changes induced by neutron irradiation in n- and p-type silicon samples with starting resistivities from 10 Ω·cm up to 30 kΩ·cm, grown using different techniques such as float-zone (FZ), Czochralski (CZ) and epitaxial growth, have been analyzed by Van der Pauw and Hall effect measurements. With increasing fluence, each set of samples evolves toward a quasi-intrinsic p-type material. This behavior has been explained in the framework of a two-level model that considers the introduction during irradiation of mainly two defects. A deep acceptor and a deep donor, probably related to the divacancy and to the C_iO_i complex, are placed in the upper and lower halves of the forbidden gap, respectively. This simple model explains quantitatively the data on resistivity and Hall coefficient of each set of samples up to a fluence of ≈10^14 n/cm².
Stenberg, Nicola; Furness, Penny J
2017-03-01
The outcomes of self-management interventions are commonly assessed using quantitative measurement tools, and few studies ask people with long-term conditions to explain, in their own words, what aspects of the intervention they valued. In this Grounded Theory study, a Health Trainers service in the north of England was evaluated based on interviews with eight service-users. Open, focused, and theoretical coding led to the development of a preliminary model explaining participants' experiences and perceived impact of the service. The model reflects the findings that living well with a long-term condition encompassed social connectedness, changed identities, acceptance, and self-care. Health trainers performed four related roles that were perceived to contribute to these outcomes: conceptualizer, connector, coach, and champion. The evaluation contributes a grounded theoretical understanding of a personalized self-management intervention that emphasizes the benefits of a holistic approach to enable cognitive, behavioral, emotional, and social adjustments.
Overbias light emission due to higher-order quantum noise in a tunnel junction.
Xu, F; Holmqvist, C; Belzig, W
2014-08-08
Understanding tunneling from an atomically sharp tip to a metallic surface requires us to account for interactions on a nanoscopic scale. Inelastic tunneling of electrons generates emission of photons, whose energies intuitively should be limited by the applied bias voltage. However, experiments [G. Schull et al., Phys. Rev. Lett. 102, 057401 (2009)] indicate that more complex processes involving the interaction of electrons with plasmon polaritons lead to photon emission characterized by overbias energies. We propose a model of this observation in analogy to the dynamical Coulomb blockade, originally developed for treating the electronic environment in mesoscopic circuits. We explain the experimental finding quantitatively by the correlated tunneling of two electrons interacting with an LRC circuit modeling the local plasmon-polariton mode. To explain the overbias emission, the non-Gaussian statistics of the tunneling dynamics of the electrons is essential.
Eĭdel'man, Iu A; Slanina, S V; Sal'nikov, I V; Andreev, S G
2012-12-01
The knowledge of radiation-induced chromosomal aberration (CA) mechanisms is required in many fields of radiation genetics, radiation biology, biodosimetry, etc. However, these mechanisms are yet to be quantitatively characterised. One of the reasons is that the relationships between primary lesions of DNA/chromatin/chromosomes and dose-response curves for CA are unknown, because the pathways of lesion interactions in an interphase nucleus are currently inaccessible to direct experimental observation. This article aims at a comparative analysis of two principally different scenarios for the formation of simple and complex interchromosomal exchange aberrations: lesion interactions at the surface of chromosome territories vs. in the whole space of the nucleus. The analysis was based on quantitative mechanistic modelling of the different levels of structure and process involved in CA formation: chromosome structure in an interphase nucleus, and the induction, repair and interactions of DNA lesions. It was shown that the restricted diffusion of chromosomal loci, predicted by computational modelling of chromosome organization, makes lesion interactions throughout the whole space of the nucleus impossible. At the same time, the predicted features of subchromosomal dynamics agree well with in vivo observations and do not contradict the mechanism of CA formation at the surface of chromosome territories. On the other hand, the "surface mechanism" of CA formation, despite having certain merits, proved insufficient to explain the high frequency of complex exchange aberrations observed by the mFISH technique. The alternative mechanism, CA formation at nuclear centres, is expected to be sufficient to explain frequent complex exchanges.
The Role of Crustal Strength in Controlling Magmatism and Melt Chemistry During Rifting and Breakup
NASA Astrophysics Data System (ADS)
Armitage, John J.; Petersen, Kenni D.; Pérez-Gussinyé, Marta
2018-02-01
The strength of the crust has a strong impact on the evolution of continental extension and breakup. Strong crust may promote focused narrow rifting, while wide rifting might be due to a weaker crustal architecture. The strength of the crust also influences deeper processes within the asthenosphere. To quantitatively test the implications of crustal strength on the evolution of continental rift zones, we developed a 2-D numerical model of lithosphere extension that can predict the rare Earth element (REE) chemistry of erupted lava. We find that a difference in crustal strength leads to a different rate of depletion in light elements relative to heavy elements. By comparing the model predictions to rock samples from the Basin and Range, USA, we can demonstrate that slow extension of a weak continental crust can explain the observed depletion in melt chemistry. The same comparison for the Main Ethiopian Rift suggests that magmatism within this narrow rift zone can be explained by the localization of strain caused by a strong lower crust. We demonstrate that the slow extension of a strong lower crust above a mantle of potential temperature of 1,350 °C can fit the observed REE trends and the upper mantle seismic velocity for the Main Ethiopian Rift. The thermo-mechanical model implies that melt composition could provide quantitative information on the style of breakup and the initial strength of the continental crust.
Kessner, Darren; Novembre, John
2015-01-01
Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748
Modelling the Wind-Borne Spread of Highly Pathogenic Avian Influenza Virus between Farms
Ssematimba, Amos; Hagenaars, Thomas J.; de Jong, Mart C. M.
2012-01-01
A quantitative understanding of the spread of contaminated farm dust between locations is a prerequisite for obtaining much-needed insight into one of the possible mechanisms of disease spread between farms. Here, we develop a model to calculate the quantity of contaminated farm-dust particles deposited at various locations downwind of a source farm and apply the model to assess the possible contribution of the wind-borne route to the transmission of Highly Pathogenic Avian Influenza virus (HPAI) during the 2003 epidemic in the Netherlands. The model is obtained from a Gaussian Plume Model by incorporating the dust deposition process, pathogen decay, and a model for the infection process on exposed farms. Using poultry- and avian influenza-specific parameter values we calculate the distance-dependent probability of between-farm transmission by this route. A comparison between the transmission risk pattern predicted by the model and the pattern observed during the 2003 epidemic reveals that the wind-borne route alone is insufficient to explain the observations although it could contribute substantially to the spread over short distance ranges, for example, explaining 24% of the transmission over distances up to 25 km. PMID:22348042
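The model described above starts from a Gaussian Plume Model and adds dust deposition, pathogen decay, and a farm-level infection model. The sketch below implements only the first ingredient, the standard ground-level Gaussian plume concentration for an elevated point source with ground reflection; the simple linear growth of the dispersion widths, and every number, are illustrative stand-ins rather than the parameterization used in the paper.

    import numpy as np

    def gaussian_plume_ground(x, y, Q, u, H, a=0.08, b=0.06):
        """Ground-level concentration downwind of a continuous point source.

        Q: emission rate (g/s), u: wind speed (m/s), H: effective source height (m).
        sigma_y, sigma_z grow with downwind distance x; the linear growth used here
        is a simple stand-in for stability-class parameterizations.
        """
        sigma_y = a * x
        sigma_z = b * x
        lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = 2.0 * np.exp(-H**2 / (2.0 * sigma_z**2))   # source plus ground reflection at z=0
        return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    x = np.array([500.0, 1000.0, 5000.0, 25000.0])   # downwind distances, m
    print(gaussian_plume_ground(x, y=0.0, Q=1.0, u=4.0, H=5.0))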
Spontaneous Focusing on Quantitative Relations in the Development of Children's Fraction Knowledge
ERIC Educational Resources Information Center
McMullen, Jake; Hannula-Sormunen, Minna M.; Lehtinen, Erno
2014-01-01
While preschool-aged children display some skills with quantitative relations, later learning of related fraction concepts is difficult for many students. We present two studies that investigate young children's tendency of Spontaneous Focusing On quantitative Relations (SFOR), which may help explain individual differences in the development of…
NASA Astrophysics Data System (ADS)
Trimoreau, E.; Archambault, B.; Brind'Amour, A.; Lepage, M.; Guitton, J.; Le Pape, O.
2013-11-01
Essential fish habitat suitability (EFHS) models and a geographic information system (GIS) were combined to describe nursery habitats for three flatfish species (Solea solea, Pleuronectes platessa, Dicologlossa cuneata) in the Bay of Biscay (Western Europe), using physical parameters known or suspected to influence juvenile flatfish spatial distribution and density (i.e. bathymetry, sediment, estuarine influence and wave exposure). The effects of habitat features on juvenile distribution were first calculated from the EFHS models, which were used to identify the habitats in which juveniles are concentrated. The EFHS model for S. solea confirmed previous findings regarding its preference for shallow soft-bottom areas and provided new insights relating to the significant effect of wave exposure on nursery habitat suitability. The two other models extended these conclusions, with some discrepancies among species related to their respective niches. Using a GIS, quantitative density maps were produced from the EFHS model predictions. The respective areas of the different habitats were determined and their relative contributions (density × area) to the total amount of juveniles were calculated at the scale of stock management, in the Bay of Biscay. Shallow and muddy areas contributed 70% of total juvenile relative abundance while representing only 16% of the coastal area, suggesting that they should be considered essential habitats for these three flatfish species. For S. solea and P. platessa, wave exposure explained the propensity for sheltered areas, where the concentration of juveniles was higher. Distribution maps of P. platessa and D. cuneata juveniles also revealed opposite spatial and temporal trends, which were explained by the respective biogeographical distributions of these two species, close to their southern and northern limits respectively, and by their responses to hydroclimatic trends.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Pengpeng; Zheng, Xiaojing, E-mail: xjzheng@xidian.edu.cn; Jin, Ke
2016-04-14
Weak magnetic nondestructive testing (e.g., the metal magnetic memory method) concerns the magnetization variation of ferromagnetic materials due to the applied load and the weak magnetic field surrounding them. One key issue in these nondestructive technologies is the magnetomechanical effect, i.e., the quantitative evaluation of the magnetization state from the stress–strain condition. A representative phenomenological model was proposed by Jiles in 1995 to explain the magnetomechanical effect. However, Jiles' model has some deficiencies in quantification; for instance, there is a visible difference between theoretical predictions and experimental measurements on the stress–magnetization curve, especially in the compression case. Based on the thermodynamic relations and the approach law of irreversible magnetization, a nonlinear coupled model is proposed to improve the quantitative evaluation of the magnetomechanical effect. Excellent agreement has been achieved between the predictions from the present model and previous experimental results. In comparison with Jiles' model, the prediction accuracy is improved greatly by the present model, particularly for the compression case. A detailed study has also been performed to reveal the effects of initial magnetization status, cyclic loading, and demagnetization factor on the magnetomechanical effect. Our theoretical model reveals that the stable weak magnetic signals of nondestructive testing after multiple cyclic loads are attributed to the first few cycles eliminating most of the irreversible magnetization. Remarkably, the existence of a demagnetization field can weaken the magnetomechanical effect and therefore significantly reduce the testing capability. This theoretical model can be adopted to quantitatively analyze magnetic memory signals and can then be applied in weak magnetic nondestructive testing.
Mechanical behavior in living cells consistent with the tensegrity model
NASA Technical Reports Server (NTRS)
Wang, N.; Naruse, K.; Stamenovic, D.; Fredberg, J. J.; Mijailovich, S. M.; Tolic-Norrelykke, I. M.; Polte, T.; Mannix, R.; Ingber, D. E.
2001-01-01
Alternative models of cell mechanics depict the living cell as a simple mechanical continuum, porous filament gel, tensed cortical membrane, or tensegrity network that maintains a stabilizing prestress through incorporation of discrete structural elements that bear compression. Real-time microscopic analysis of cells containing GFP-labeled microtubules and associated mitochondria revealed that living cells behave like discrete structures composed of an interconnected network of actin microfilaments and microtubules when mechanical stresses are applied to cell surface integrin receptors. Quantitation of cell tractional forces and cellular prestress by using traction force microscopy confirmed that microtubules bear compression and are responsible for a significant portion of the cytoskeletal prestress that determines cell shape stability under conditions in which myosin light chain phosphorylation and intracellular calcium remained unchanged. Quantitative measurements of both static and dynamic mechanical behaviors in cells also were consistent with specific a priori predictions of the tensegrity model. These findings suggest that tensegrity represents a unified model of cell mechanics that may help to explain how mechanical behaviors emerge through collective interactions among different cytoskeletal filaments and extracellular adhesions in living cells.
Sariaslan, A; Larsson, H; Fazel, S
2016-09-01
Patients diagnosed with psychotic disorders (for example, schizophrenia and bipolar disorder) have elevated risks of committing violent acts, particularly if they are comorbid with substance misuse. Despite recent insights from quantitative and molecular genetic studies demonstrating considerable pleiotropy in the genetic architecture of these phenotypes, there is currently a lack of large-scale studies that have specifically examined the aetiological links between psychotic disorders and violence. Using a sample of all Swedish individuals born between 1958 and 1989 (n=3 332 101), we identified a total of 923 259 twin-sibling pairs. Patients were identified using the National Patient Register using validated algorithms based on International Classification of Diseases (ICD) 8-10. Univariate quantitative genetic models revealed that all phenotypes (schizophrenia, bipolar disorder, substance misuse, and violent crime) were highly heritable (h² = 53-71%). Multivariate models further revealed that schizophrenia was a stronger predictor of violence (r=0.32; 95% confidence interval: 0.30-0.33) than bipolar disorder (r=0.23; 0.21-0.25), and large proportions (51-67%) of these phenotypic correlations were explained by genetic factors shared between each disorder, substance misuse, and violence. Importantly, we found that genetic influences that were unrelated to substance misuse explained approximately a fifth (21%; 20-22%) of the correlation with violent criminality in bipolar disorder but none of the same correlation in schizophrenia (P(bipolar disorder) < 0.001; P(schizophrenia) = 0.55). These findings highlight the problems of not disentangling common and unique sources of covariance across genetically similar phenotypes as the latter sources may include aetiologically important clues. Clinically, these findings underline the importance of assessing risk of different phenotypes together and integrating interventions for psychiatric disorders, substance misuse, and violence.
Quantitative analysis of the flexibility effect of cisplatin on circular DNA
NASA Astrophysics Data System (ADS)
Ji, Chao; Zhang, Lingyun; Wang, Peng-Ye
2013-10-01
We study the effects of cisplatin on the circular configuration of DNA using atomic force microscopy (AFM) and observe that the DNA gradually transforms to a complex configuration with an intersection and interwound structures from a circlelike structure. An algorithm is developed to extract the configuration profiles of circular DNA from AFM images and the radius of gyration is used to describe the flexibility of circular DNA. The quantitative analysis of the circular DNA demonstrates that the radius of gyration gradually decreases and two processes on the change of flexibility of circular DNA are found as the cisplatin concentration increases. Furthermore, a model is proposed and discussed to explain the mechanism for understanding the complicated interaction between DNA and cisplatin.
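The flexibility measure used above, the radius of gyration of a traced DNA contour, is simply the root-mean-square distance of the digitized contour points from their centroid. A minimal sketch is given below; the contour-extraction algorithm of the paper is not reproduced, and the example contours are synthetic.

    import numpy as np

    def radius_of_gyration(points):
        """Rg of a traced contour: RMS distance of the points from their centroid."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        return np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1)))

    # Synthetic example: a circle-like contour vs a collapsed, interwound-like contour (nm).
    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    circle = np.column_stack([100.0 * np.cos(theta), 100.0 * np.sin(theta)])
    collapsed = np.column_stack([100.0 * np.cos(theta), 25.0 * np.sin(2 * theta)])

    print(radius_of_gyration(circle), radius_of_gyration(collapsed))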
Simulating the Past, Present and Future of the Upper Troposphere and Lower Stratosphere
NASA Astrophysics Data System (ADS)
Gettelman, Andrew; Hegglin, Michaela
2010-05-01
A comprehensive assessment of coupled chemistry climate model (CCM) performance in the upper troposphere and lower stratosphere has been conducted with 18 models. Both qualitative and quantitative comparisons of model representation of UTLS dynamical, radiative and chemical structure have been conducted, using a collection of quantitative grading techniques. The models are able to reproduce the observed climatology of dynamical, radiative and chemical structure in the tropical and extratropical UTLS, despite relatively coarse vertical and horizontal resolution. Diagnostics of the Tropical Tropopause Layer (TTL), Tropopause Inversion Layer (TIL) and Extra-tropical Transition Layer (ExTL) are analyzed. The results provide new insight into the key processes that govern the dynamics and transport in the tropics and extra-tropics. The presentation will explain how models are able to reproduce key features of the UTLS, which features they do not reproduce, and why. Model trends over the historical period are also assessed, and interannual variability is included in the metrics. Finally, key trends in the UTLS for the future under a given halogen and greenhouse gas scenario are presented, indicating significant changes in tropopause height and temperature, as well as UTLS ozone concentrations, in the 21st century due to climate change and ozone recovery.
Atomic Scale Structure of (001) Hydrogen-Induced Platelets in Germanium
NASA Astrophysics Data System (ADS)
David, Marie-Laure; Pizzagalli, Laurent; Pailloux, Fréderic; Barbot, Jean François
2009-04-01
An accurate characterization of the structure of hydrogen-induced platelets is a prerequisite for investigating both hydrogen aggregation and formation of larger defects. On the basis of quantitative high resolution transmission electron microscopy experiments combined with extensive first principles calculations, we present a model for the atomic structure of (001) hydrogen-induced platelets in germanium. It involves broken Ge-Ge bonds in the [001] direction that are dihydride passivated, vacancies, and trapped H2 molecules, showing that the species involved in platelet formation depend on the habit plane. This model explains all previous experimental observations.
Cascade model for fluvial geomorphology
NASA Technical Reports Server (NTRS)
Newman, W. I.; Turcotte, D. L.
1990-01-01
Erosional landscapes are generally scale invariant and fractal. Spectral studies provide quantitative confirmation of this statement. Linear theories of erosion will not generate scale-invariant topography. In order to explain the fractal behavior of landscapes, a modified Fourier series has been introduced that is the basis for a renormalization approach. A nonlinear dynamical model has been introduced for the decay of the modified Fourier series coefficients that yields a fractal spectrum. It is argued that a physical basis for this approach is that a fractal (or nearly fractal) distribution of storms (floods) continually renews erosional features on all scales.
A simple microstructure return model explaining microstructure noise and Epps effects
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2014-01-01
We present a novel simple microstructure model of financial returns that combines (i) the well-known ARFIMA process applied to tick-by-tick returns, (ii) the bid-ask bounce effect, (iii) the fat tail structure of the distribution of returns and (iv) the non-Poissonian statistics of inter-trade intervals. This model allows us to explain both qualitatively and quantitatively important stylized facts observed in the statistics of both microstructure and macrostructure returns, including the short-ranged correlation of returns, the long-ranged correlations of absolute returns, the microstructure noise and Epps effects. According to the microstructure noise effect, volatility is a decreasing function of the time-scale used to estimate it. The Epps effect states that cross correlations between asset returns are increasing functions of the time-scale at which the returns are estimated. The microstructure noise is explained as the result of the negative return correlations inherent in the definition of the bid-ask bounce component (ii). In the presence of a genuine correlation between the returns of two assets, the Epps effect is due to an average statistical overlap of the momentum of the returns of the two assets defined over a finite time-scale in the presence of the long memory process (i).
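The microstructure-noise mechanism described above, negative return autocorrelation induced by the bid-ask bounce, can be reproduced with a far simpler toy than the full ARFIMA/fat-tail model of the paper: a random-walk efficient price observed through a constant bid-ask spread. The sketch below shows the implied per-tick variance shrinking toward the true value as the sampling interval grows; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n_ticks = 200_000
    sigma_tick, half_spread = 0.01, 0.05      # illustrative values

    # Efficient (mid) price: random walk.  Observed price: mid plus/minus half the spread.
    mid = np.cumsum(rng.normal(0.0, sigma_tick, n_ticks))
    side = rng.choice([-1.0, 1.0], n_ticks)   # bid-ask bounce
    observed = mid + half_spread * side

    for step in (1, 5, 20, 100):
        returns = np.diff(observed[::step])
        per_tick_var = returns.var() / step   # variance per tick implied at this sampling scale
        print(f"sampling every {step:>3} ticks -> implied per-tick variance {per_tick_var:.6f}")
    print(f"true per-tick variance {sigma_tick**2:.6f}")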
Quantitative Simulation of QARBM Challenge Events During Radiation Belt Enhancements
NASA Astrophysics Data System (ADS)
Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Chu, X.
2017-12-01
Various physical processes are known to affect energetic electron dynamics in the Earth's radiation belts, but their quantitative effects at different times and locations in space need further investigation. This presentation focuses on discussing the quantitative roles of various physical processes that affect Earth's radiation belt electron dynamics during radiation belt enhancement challenge events (storm-time vs. non-storm-time) selected by the GEM Quantitative Assessment of Radiation Belt Modeling (QARBM) focus group. We construct realistic global distributions of whistler-mode chorus waves, adopt various versions of radial diffusion models (statistical and event-specific), and use the global evolution of other potentially important plasma waves including plasmaspheric hiss, magnetosonic waves, and electromagnetic ion cyclotron waves from all available multi-satellite measurements. These state-of-the-art wave properties and distributions on a global scale are used to calculate diffusion coefficients, that are then adopted as inputs to simulate the dynamical electron evolution using a 3D diffusion simulation during the storm-time and the non-storm-time acceleration events respectively. We explore the similarities and differences in the dominant physical processes that cause radiation belt electron dynamics during the storm-time and non-storm-time acceleration events. The quantitative role of each physical process is determined by comparing against the Van Allen Probes electron observations at different energies, pitch angles, and L-MLT regions. This quantitative comparison further indicates instances when quasilinear theory is sufficient to explain the observed electron dynamics or when nonlinear interaction is required to reproduce the energetic electron evolution observed by the Van Allen Probes.
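For reference, the radial-diffusion part of the 3D simulations referred to above is commonly written in the standard radiation-belt form (generic notation, not specific to this presentation):

    \frac{\partial f}{\partial t}
      = L^{2} \frac{\partial}{\partial L}\!\left( \frac{D_{LL}}{L^{2}} \frac{\partial f}{\partial L} \right),

where f is the electron phase-space density at fixed first and second adiabatic invariants, L the (Roederer) L parameter, and D_LL the radial diffusion coefficient; pitch-angle and energy diffusion driven by the wave populations listed above enter as additional diffusion terms on the right-hand side.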
Wu, Yi-Hsuan; Hu, Chia-Wei; Chien, Chih-Wei; Chen, Yu-Ju; Huang, Hsuan-Cheng; Juan, Hsueh-Fen
2013-01-01
ATP synthase is present on the plasma membrane of several types of cancer cells. Citreoviridin, an ATP synthase inhibitor, selectively suppresses the proliferation and growth of lung cancer without affecting normal cells. However, the global effects of targeting ectopic ATP synthase in vivo have not been well defined. In this study, we performed quantitative proteomic analysis using isobaric tags for relative and absolute quantitation (iTRAQ) and provided a comprehensive insight into the complicated regulation by citreoviridin in a lung cancer xenograft model. With high reproducibility of the quantitation, we obtained quantitative proteomic profiling with 2,659 proteins identified. Bioinformatics analysis of the 141 differentially expressed proteins selected by their relative abundance revealed that citreoviridin induces alterations in the expression of glucose metabolism-related enzymes in lung cancer. The up-regulation of enzymes involved in gluconeogenesis and storage of glucose indicated that citreoviridin may reduce the glycolytic intermediates for macromolecule synthesis and inhibit cell proliferation. Using comprehensive proteomics, the results identify metabolic aspects that help explain the antitumorigenic effect of citreoviridin in lung cancer, which may lead to a better understanding of the links between metabolism and tumorigenesis in cancer therapy.
Wu, Yi-Hsuan; Hu, Chia-Wei; Chien, Chih-Wei; Chen, Yu-Ju; Huang, Hsuan-Cheng; Juan, Hsueh-Fen
2013-01-01
ATP synthase is present on the plasma membrane of several types of cancer cells. Citreoviridin, an ATP synthase inhibitor, selectively suppresses the proliferation and growth of lung cancer without affecting normal cells. However, the global effects of targeting ectopic ATP synthase in vivo have not been well defined. In this study, we performed quantitative proteomic analysis using isobaric tags for relative and absolute quantitation (iTRAQ) and provided a comprehensive insight into the complicated regulation by citreoviridin in a lung cancer xenograft model. With high reproducibility of the quantitation, we obtained quantitative proteomic profiling with 2,659 proteins identified. Bioinformatics analysis of the 141 differentially expressed proteins selected by their relative abundance revealed that citreoviridin induces alterations in the expression of glucose metabolism-related enzymes in lung cancer. The up-regulation of enzymes involved in gluconeogenesis and storage of glucose indicated that citreoviridin may reduce the glycolytic intermediates for macromolecule synthesis and inhibit cell proliferation. Using comprehensive proteomics, the results identify metabolic aspects that help explain the antitumorigenic effect of citreoviridin in lung cancer, which may lead to a better understanding of the links between metabolism and tumorigenesis in cancer therapy. PMID:23990911
A comparison of major petroleum life cycle models
Many organizations have attempted to develop an accurate well-to-pump life cycle model of petroleum products in order to inform decision makers of the consequences of their use. Our paper studies five of these models, demonstrating the differences in their predictions and attempting to evaluate their data quality. Carbon dioxide well-to-pump emissions for gasoline showed a variation of 35 %, and other pollutants such as ammonia and particulate matter varied up to 100 %. Differences in allocation do not appear to explain differences in predictions. Effects of these deviations on well-to-wheels passenger vehicle and truck transportation life cycle models may be minimal for effects such as global warming potential (6 % spread), but for respiratory effects of criteria pollutants (41 % spread) and other impact categories, they can be significant. A data quality assessment of the models' documentation revealed real differences between models in temporal and geographic representativeness, completeness, as well as transparency. Stakeholders may need to carefully consider the tradeoffs inherent in selecting a model to conduct life cycle assessments for systems that make heavy use of petroleum products. This is a qualitative and quantitative comparison of petroleum LCA models intended for an expert audience interested in better understanding the data quality of existing petroleum life cycle models and the quantitative differences between these models.
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. An analysis of a limited capacity attentional switching model was also included, and that model was rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
Montaño, Daniel E; Tshimanga, Mufuta; Hamilton, Deven T; Gorn, Gerald; Kasprzyk, Danuta
2018-02-01
Slow adult male circumcision uptake is one factor leading some to recommend increased priority for infant male circumcision (IMC) in sub-Saharan African countries. This research, guided by the integrated behavioral model (IBM), was carried out to identify key beliefs that best explain Zimbabwean parents' motivation to have their infant sons circumcised. A quantitative survey, designed from qualitative elicitation study results, was administered to independent representative samples of 800 expectant mothers and 795 expectant fathers in two urban and two rural areas in Zimbabwe. Multiple regression analyses found IMC motivation among fathers was explained by instrumental attitude, descriptive norm and self-efficacy; while motivation among mothers was explained by instrumental attitude, injunctive norm, descriptive norm, self-efficacy, and perceived control. Regression analyses of beliefs underlying IBM constructs found some overlap but many differences in key beliefs explaining IMC motivation among mothers and fathers. We found differences in key beliefs among urban and rural parents. Urban fathers' IMC motivation was explained best by behavioral beliefs, while rural fathers' motivation was explained by both behavioral and efficacy beliefs. Urban mothers' IMC motivation was explained primarily by behavioral and normative beliefs, while rural mothers' motivation was explained mostly by behavioral beliefs. The key beliefs we identified should serve as targets for developing messages to improve demand and maximize parent uptake as IMC programs are rolled out. These targets need to be different among urban and rural expectant mothers and fathers.
Estimation of Particulate Mass and Manganese Exposure Levels among Welders
Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.
2011-01-01
Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model, indicating that the final model is robust given the available data. Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases confidence that the estimates derived from this model can be used to estimate welder exposures in the absence of individual measurement data. PMID:20870928
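A hedged sketch of the general workflow described above (log-transformed mean exposures regressed on categorical determinants, with leave-one-out cross-validation as the generalizability check). The data frame, determinant names, and effect sizes below are synthetic and invented for illustration, not taken from the study.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical summary data: one row per published arithmetic-mean Mn exposure,
# with categorical exposure determinants (all values made up).
rng = np.random.default_rng(1)
n = 27
df = pd.DataFrame({
    "process": rng.choice(["MMA", "MIG", "FCAW"], n),
    "enclosure": rng.choice(["open", "confined"], n),
})
true_effect = {"MMA": 0.5, "MIG": 0.0, "FCAW": 0.8}
df["ln_mean_mn"] = (df["process"].map(true_effect)
                    + 1.0 * (df["enclosure"] == "confined")
                    + rng.normal(0, 0.4, n))

pre = ColumnTransformer([("cat", OneHotEncoder(), ["process", "enclosure"])])
model = make_pipeline(pre, LinearRegression())

X, y = df[["process", "enclosure"]], df["ln_mean_mn"]
model.fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))

# Leave-one-out cross-validation, analogous to the paper's generalizability check.
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)
tss = np.sum((y - y.mean()) ** 2)
print("LOO Q^2:", round(1 - press / tss, 3))
print("LOO RMSE:", round(float(np.sqrt(np.mean((y - y_loo) ** 2))), 3))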
Genetic interactions contribute less than additive effects to quantitative trait variation in yeast
Bloom, Joshua S.; Kotenko, Iulia; Sadhu, Meru J.; Treusch, Sebastian; Albert, Frank W.; Kruglyak, Leonid
2015-01-01
Genetic mapping studies of quantitative traits typically focus on detecting loci that contribute additively to trait variation. Genetic interactions are often proposed as a contributing factor to trait variation, but the relative contribution of interactions to trait variation is a subject of debate. Here we use a very large cross between two yeast strains to accurately estimate the fraction of phenotypic variance due to pairwise QTL–QTL interactions for 20 quantitative traits. We find that this fraction is 9% on average, substantially less than the contribution of additive QTL (43%). Statistically significant QTL–QTL pairs typically have small individual effect sizes, but collectively explain 40% of the pairwise interaction variance. We show that pairwise interaction variance is largely explained by pairs of loci at least one of which has a significant additive effect. These results refine our understanding of the genetic architecture of quantitative traits and help guide future mapping studies. PMID:26537231
A strategy for understanding noise-induced annoyance
NASA Astrophysics Data System (ADS)
Fidell, S.; Green, D. M.; Schultz, T. J.; Pearsons, K. S.
1988-08-01
This report provides a rationale for development of a systematic approach to understanding noise-induced annoyance. Two quantitative models are developed to explain: (1) the prevalence of annoyance due to residential exposure to community noise sources; and (2) the intrusiveness of individual noise events. Both models deal explicitly with the probabilistic nature of annoyance, and assign clear roles to acoustic and nonacoustic determinants of annoyance. The former model provides a theoretical foundation for empirical dosage-effect relationships between noise exposure and community response, while the latter model differentiates between the direct and immediate annoyance of noise intrusions and response bias factors that influence the reporting of annoyance. The assumptions of both models are identified, and the nature of the experimentation necessary to test hypotheses derived from the models is described.
CP violation in multibody B decays from QCD factorization
NASA Astrophysics Data System (ADS)
Klein, Rebecca; Mannel, Thomas; Virto, Javier; Vos, K. Keri
2017-10-01
We test a data-driven approach based on QCD factorization for charmless three-body B-decays by confronting it with measurements of CP violation in B⁻ → π⁻π⁺π⁻. While some of the needed non-perturbative objects can be directly extracted from data, some others can, so far, only be modelled. Although this approach is currently model dependent, we comment on the perspectives to reduce this model dependence. While our model naturally accommodates the gross features of the Dalitz distribution, it cannot quantitatively explain the details seen in the current experimental data on local CP asymmetries. We comment on possible refinements of our simple model and conclude by briefly discussing a possible extension of the model to large invariant masses, where large local CP asymmetries have been measured.
NASA Astrophysics Data System (ADS)
Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.
2002-02-01
We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no FDT breakdown when the correct normalization is chosen.
Study on the Carbonation Behavior of Cement Mortar by Electrochemical Impedance Spectroscopy
Dong, Biqin; Qiu, Qiwen; Xiang, Jiaqi; Huang, Canjie; Xing, Feng; Han, Ningxu
2014-01-01
A new electrochemical model has been established to explain the carbonation behavior of cement mortar, and the model has been validated by experimental results. The electrochemical impedance behavior of mortars is shown to vary over the course of carbonation, and the carbonation rate becomes more pronounced as the cement/sand ratio is reduced. The carbonation process can be quantitatively assessed by a parameter obtained by means of the electrochemical impedance spectroscopy (EIS)-based electrochemical model. This parameter is found to be a function of carbonation depth and carbonation time, so that prediction of carbonation depth can be achieved. PMID:28788452
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruterana, Pierre (E-mail: pierre.ruterana@ensicaen.fr); Wang, Yi; Chen, Jun
A detailed investigation of the misfit and threading dislocations at the GaSb/GaAs interface has been carried out using molecular dynamics simulation and quantitative electron microscopy techniques. The sources and propagation of misfit dislocations have been elucidated. The nature and formation mechanisms of the misfit dislocations, as well as the role of Sb in the stability of the Lomer configuration, have been explained.
Hu, Li; Huang, Yingzhou; Pan, Lujun; Fang, Yurui
2017-09-11
Plasmonic chirality represents significant potential for novel nanooptical devices due to its association with strong chiroptical responses. Previous reports on the mechanism of plasmonic chirality mainly focus on phase retardation and coupling. In this paper, we propose a model similar to that used for chiral molecules to explain quantitatively the intrinsic chirality mechanism of various 3D chiral plasmonic structures, based on the interplay and mixing of electric and magnetic dipole modes (obtained directly from electromagnetic field numerical simulations), which form a mixed electric and magnetic polarizability.
Exchange magnon induced resistance asymmetry in permalloy spin-Hall oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenfeld, S.; Walter Schottky Institut and Physik-Department, Technische Universität München, 85748 Garching; Tshitoyan, V.
2016-05-09
We investigate magnetization dynamics in a spin-Hall oscillator using a direct current measurement as well as conventional microwave spectrum analysis. When the current applies an anti-damping spin-transfer torque, we observe a change in resistance which we ascribe mainly to the excitation of incoherent exchange magnons. A simple model is developed based on the reduction of the effective saturation magnetization, quantitatively explaining the data. The observed phenomena highlight the importance of exchange magnons on the operation of spin-Hall oscillators.
A general model for the scaling of offspring size and adult size.
Falster, Daniel S; Moles, Angela T; Westoby, Mark
2008-09-01
Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
Strength of signal: a fundamental mechanism for cell fate specification.
Hayes, Sandra M; Love, Paul E
2006-02-01
How equipotent cells develop into complex tissues containing many diverse cell types is still a mystery. However, evidence is accumulating from different tissue systems in multiple organisms that many of the specific receptor families known to regulate cell fate decisions target conserved signaling pathways. A mechanism for preserving specificity in the cellular response that has emerged from these studies is one in which quantitative differences in receptor signaling regulate the cell fate decision. A signal strength model has recently gained support as a means to explain alphabeta/gammadelta lineage commitment. In this review, we compare the alphabeta/gammadelta fate decision with other cell fate decisions that occur outside of the lymphoid system to attain a better picture of the quantitative signaling mechanism for cell fate specification.
Woodhead, Jeffrey L; Paech, Franziska; Maurer, Martina; Engelhardt, Marc; Schmitt-Hoffmann, Anne H; Spickermann, Jochen; Messner, Simon; Wind, Mathias; Witschi, Anne-Therese; Krähenbühl, Stephan; Siler, Scott Q; Watkins, Paul B; Howell, Brett A
2018-06-07
Elevations of liver enzymes have been observed in clinical trials with BAL30072, a novel antibiotic. In vitro assays have identified potential mechanisms for the observed hepatotoxicity, including electron transport chain (ETC) inhibition and reactive oxygen species (ROS) generation. DILIsym, a quantitative systems pharmacology (QSP) model of drug-induced liver injury, has been used to predict the likelihood that each mechanism explains the observed toxicity. DILIsym was also used to predict the safety margin for a novel BAL30072 dosing scheme; it was predicted to be low. DILIsym was then used to recommend potential modifications to this dosing scheme; weight-adjusted dosing, together with a requirement to assay plasma alanine aminotransferase (ALT) daily and to stop dosing as soon as an ALT increase was observed, improved the predicted safety margin of BAL30072 and decreased the predicted likelihood of severe injury. This research demonstrates a potential application for QSP modeling in improving the safety profile of candidate drugs. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Transition-state theory predicts clogging at the microscale
NASA Astrophysics Data System (ADS)
Laar, T. Van De; Klooster, S. Ten; Schroën, K.; Sprakel, J.
2016-06-01
Clogging is one of the main failure mechanisms encountered in industrial processes such as membrane filtration. Our understanding of the factors that govern the build-up of fouling layers and the emergence of clogs is largely incomplete, so that prevention of clogging remains an immense and costly challenge. In this paper we use a microfluidic model combined with quantitative real-time imaging to explore the influence of pore geometry and particle interactions on suspension clogging in constrictions, two crucial factors which remain relatively unexplored. We find a distinct dependence of the clogging rate on the entrance angle to a membrane pore which we explain quantitatively by deriving a model, based on transition-state theory, which describes the effect of viscous forces on the rate with which particles accumulate at the channel walls. With the same model we can also predict the effect of the particle interaction potential on the clogging rate. In both cases we find excellent agreement between our experimental data and theory. A better understanding of these clogging mechanisms and the influence of design parameters could form a stepping stone to delay or prevent clogging by rational membrane design.
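The transition-state picture can be sketched numerically as an escape-over-a-barrier rate in which viscous drag does work against the particle interaction barrier. Every number below, and the mapping from entrance angle to near-wall velocity, is an assumption made for illustration rather than a value fitted in the study.

```python
import numpy as np

# Illustrative transition-state-style estimate of a particle deposition (clogging) rate:
# rate ~ nu0 * exp(-(E_barrier - W_drag) / kT), where the viscous drag does work W_drag
# over a small barrier width delta and thereby lowers the effective barrier.
kT = 4.11e-21            # J, thermal energy at ~298 K
eta = 1e-3               # Pa.s, water viscosity
a = 0.5e-6               # m, particle radius (assumed)
nu0 = 1e3                # 1/s, attempt frequency (assumed)
E_barrier = 12 * kT      # repulsive barrier from the interaction potential (assumed)
delta = 2e-9             # m, barrier width (assumed)

def clog_rate(v_wall):
    """Deposition rate for a given near-wall fluid velocity (m/s)."""
    drag = 6 * np.pi * eta * a * v_wall            # Stokes drag force
    barrier_eff = max(E_barrier - drag * delta, 0.0)
    return nu0 * np.exp(-barrier_eff / kT)

# Toy geometry: the entrance angle sets the near-wall velocity (mapping assumed here).
for angle_deg, v_wall in [(15, 2e-3), (45, 1e-3), (90, 5e-4)]:
    print(f"entrance angle {angle_deg:3d} deg, v_wall {v_wall:.0e} m/s -> "
          f"rate {clog_rate(v_wall):.3e} 1/s")
```

Because the drag work enters an exponential, modest changes in the near-wall velocity (and hence in pore geometry) translate into order-of-magnitude changes in the deposition rate, which is the qualitative behavior the abstract describes.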
The role of proton precipitation in Jovian aurora: Theory and observation
NASA Technical Reports Server (NTRS)
Waite, J. H., Jr.; Curran, D. B.; Cravens, T. E.; Clarke, J. T.
1992-01-01
It was proposed that the Jovian auroral emissions observed by Voyager spacecraft could be explained by energetic protons precipitating into the upper atmosphere of Jupiter. Such precipitation of energetic protons results in Doppler-shifted Lyman alpha emission that can be quantitatively analyzed to determine the energy flux and energy distribution of the incoming particle beam. Modeling of the expected emission from a reasonably chosen Voyager energetic proton spectrum can be used in conjunction with International Ultraviolet Explorer (IUE) observations, which show a relative lack of red-shifted Lyman alpha emission, to set upper limits on the amount of proton precipitation taking place in the Jovian aurora. Such calculations indicate that less than 10 percent of the ultraviolet auroral emissions at Jupiter can be explained by proton precipitation.
Elucidating the role of transcription in shaping the 3D structure of the bacterial genome
NASA Astrophysics Data System (ADS)
Brandao, Hugo B.; Wang, Xindan; Rudner, David Z.; Mirny, Leonid
Active transcription has been linked to several genome conformation changes in bacteria, including the recruitment of chromosomal DNA to the cell membrane and formation of nucleoid clusters. Using genomic and imaging data as input into mathematical models and polymer simulations, we sought to explore the extent to which bacterial 3D genome structure could be explained by 1D transcription tracks. Using B. subtilis as a model organism, we investigated via polymer simulations the role of loop extrusion and DNA super-coiling on the formation of interaction domains and other fine-scale features that are visible in chromosome conformation capture (Hi-C) data. We then explored the role of the condensin structural maintenance of chromosome complex on the alignment of chromosomal arms. A parameter-free transcription traffic model demonstrated that mean chromosomal arm alignment can be quantitatively explained, and the effects on arm alignment in genomically rearranged strains of B. subtilis were accurately predicted. H.B. acknowledges support from the Natural Sciences and Engineering Research Council of Canada for a PGS-D fellowship.
NASA Astrophysics Data System (ADS)
Wang, Hui; Wang, Jian-Tao; Cao, Ze-Xian; Zhang, Wen-Jun; Lee, Chun-Sing; Lee, Shuit-Tong; Zhang, Xiao-Hong
2015-03-01
While the vapour-liquid-solid process has been widely used for growing one-dimensional nanostructures, quantitative understanding of the process is still far from adequate. For example, the origins for the growth of periodic one-dimensional nanostructures are not fully understood. Here we observe that morphologies in a wide range of periodic one-dimensional nanostructures can be described by two quantitative relationships: first, inverse of the periodic spacing along the length direction follows an arithmetic sequence; second, the periodic spacing in the growth direction varies linearly with the diameter of the nanostructure. We further find that these geometric relationships can be explained by considering the surface curvature oscillation of the liquid sphere at the tip of the growing nanostructure. The work reveals the requirements of vapour-liquid-solid growth. It can be applied for quantitative understanding of vapour-liquid-solid growth and to design experiments for controlled growth of nanostructures with custom-designed morphologies.
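Written symbolically (with Λ_n the n-th periodic spacing along the length direction, D the nanostructure diameter, and δ, α, β constants introduced here only as notation), the two observed relationships read:

```latex
\frac{1}{\Lambda_n} = \frac{1}{\Lambda_1} + (n-1)\,\delta
\quad\text{(inverse spacings form an arithmetic sequence)},
\qquad
\Lambda = \alpha D + \beta
\quad\text{(spacing varies linearly with diameter)}.
```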
Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana
2013-10-30
In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling using theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression. Predictive performance of the QSABR model was assessed using the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80% of the variance, respectively, in the activation energy barrier training data. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling, which provide efficient and fast prediction of activation barriers of Diels-Alder reactions, turn out to be a meaningful alternative to transition-state-theory-based computation.
Fertility Decline in Rural China: A Comparative Analysis
Harrell, Stevan; Yuesheng, Wang; Hua, Han; Santos, Gonçalo D.; Yingying, Zhou
2014-01-01
Many models have been proposed to explain both the rapidity of China’s fertility decline after the 1960s and the differential timing of the decline in different places. In particular, scholars argue over whether deliberate policies of fertility control, institutional changes, or general modernization factors contribute most to changes in fertility behavior. Here the authors adopt an ethnographically grounded behavioral–institutional approach to analyze qualitative and quantitative data from three different rural settings: Xiaoshan County in Zhejiang (East China), Ci County in Hebei (North China), and Yingde County in Guangdong (South China). The authors show that no one set of factors explains the differential timing and rapidity of the fertility decline in the three areas; rather they must explain differential timing by a combination of differences in social–cultural environments (e.g., spread of education, reproductive ideologies, and gender relations) and politico-economic conditions (e.g., economic development, birth planning campaigns, and collective systems of labor organization) during the early phases of the fertility decline. PMID:21319442
An exploratory sequential design to validate measures of moral emotions.
Márquez, Margarita G; Delgado, Ana R
2017-05-01
This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma
NASA Technical Reports Server (NTRS)
Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.
2004-01-01
A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
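A toy version of such a Monte Carlo sensitivity analysis is sketched below. The steady-state balance, rate-coefficient ranges, and densities are invented for illustration and are not the paper's mechanism or values; the point is only to show how sampling uncertain rate coefficients and ranking their influence (here with Spearman correlations) singles out the rate-limiting steps.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy steady-state balance: CF2 produced by electron-impact dissociation of c-C4F8,
# lost by further electron-impact dissociation to CF and by wall recombination:
#   n_CF2 = k_prod * n_e * n_C4F8 / (k_ediss * n_e + k_wall)
rng = np.random.default_rng(0)
n_samples = 5000
n_e = 1e10        # cm^-3, electron density (assumed)
n_c4f8 = 3e14     # cm^-3, feed-gas density (assumed)

def loguniform(lo, hi, size):
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

# Uncertain parameters, each sampled over an assumed order-of-magnitude range.
k_prod = loguniform(1e-10, 1e-9, n_samples)    # cm^3/s, e + C4F8 -> ... + CF2
k_ediss = loguniform(1e-9, 1e-8, n_samples)    # cm^3/s, e + CF2 -> CF + F
k_wall = loguniform(1e2, 1e4, n_samples)       # 1/s, CF2 wall recombination

n_cf2 = k_prod * n_e * n_c4f8 / (k_ediss * n_e + k_wall)

for name, k in [("k_prod", k_prod), ("k_ediss", k_ediss), ("k_wall", k_wall)]:
    rho, _ = spearmanr(k, n_cf2)
    print(f"sensitivity of n_CF2 to {name}: Spearman rho = {rho:+.2f}")
```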
[Some comments on ecological field].
Wang, D
2000-06-01
Based on data from plant ecological field studies, this paper reviews the concept of the ecological field, field eigenfunctions, graphs of the ecological field, and the application of ecological field theory in explaining plant interactions. It is suggested that the basic character of the ecological field is material, and at the current level of research it is not certain whether the ecological field is a specific kind of field distinct from general physical fields. The author comments on the formulas and parameter estimation of the basic field function and ecological potential models of the ecological field; both models have their own characteristics and advantages under specific conditions. The author emphasizes that the ecological field has even more significance as an ecological methodology, and that applying ecological field theory to describe the types and processes of plant interactions has three characteristics: it is quantitative, synthetic and intuitive. Field graphing may provide a new approach to ecological studies; in particular, applying ecological field theory may give an appropriate quantitative explanation for the dynamic processes of plant populations (coexistence and interference competition).
Multi-keV x-ray sources from metal-lined cylindrical hohlraums
NASA Astrophysics Data System (ADS)
Jacquet, L.; Girard, F.; Primout, M.; Villette, B.; Stemmler, Ph.
2012-08-01
As multi-keV x-ray sources, plastic hohlraums with inner walls coated with titanium, copper, and germanium were fired on Omega in September 2009. For all the targets, the measured and calculated multi-keV x-ray power time histories are in good qualitative agreement. Under the same irradiation conditions, measured multi-keV x-ray conversion rates are ˜6%-8% for titanium, ˜2% for copper, and ˜0.5% for germanium. For the titanium and copper hohlraums, the measured conversion rates are about two times higher than those given by hydroradiative computations. Conversely, for the germanium hohlraum, rather good agreement is found between measured and computed conversion rates. To explain these findings, multi-keV integrated emissivities calculated with RADIOM [M. Busquet, Phys. Fluids 85, 4191 (1993)], the nonlocal-thermal-equilibrium atomic physics model used in our computations, have been compared to emissivities obtained from several other models. These comparisons provide an attractive way to explain the discrepancies between the experimental and calculated quantitative results.
A disassembly-driven mechanism explains F-actin-mediated chromosome transport in starfish oocytes
Bun, Philippe; Dmitrieff, Serge; Belmonte, Julio M
2018-01-01
While contraction of sarcomeric actomyosin assemblies is well understood, this is not the case for disordered networks of actin filaments (F-actin) driving diverse essential processes in animal cells. For example, at the onset of meiosis in starfish oocytes a contractile F-actin network forms in the nuclear region transporting embedded chromosomes to the assembling microtubule spindle. Here, we addressed the mechanism driving contraction of this 3D disordered F-actin network by comparing quantitative observations to computational models. We analyzed 3D chromosome trajectories and imaged filament dynamics to monitor network behavior under various physical and chemical perturbations. We found no evidence of myosin activity driving network contractility. Instead, our observations are well explained by models based on a disassembly-driven contractile mechanism. We reconstitute this disassembly-based contractile system in silico revealing a simple architecture that robustly drives chromosome transport to prevent aneuploidy in the large oocyte, a prerequisite for normal embryonic development. PMID:29350616
Cell size control and homeostasis in bacteria
NASA Astrophysics Data System (ADS)
Bradde, Serena; Taheri, Sattar; Sauls, John; Hill, Nobert; Levine, Petra; Paulsson, Johan; Vergassola, Massimo; Jun, Suckjoon
2015-03-01
How cells control their size is a fundamental question in biology. Proposed mechanisms for sensing size, time, or a combination of the two are not well supported by experimental evidence. By analysing distributions of size at division, size at birth, and generation time of hundreds of thousands of Gram-negative E. coli and Gram-positive B. subtilis cells under a wide range of tightly controlled steady-state growth conditions, we are now in the position to validate different theoretical models. In this talk I will present all possible models in detail and present a general mechanism that quantitatively explains all measurable aspects of growth and cell division at both the population and single-cell levels.
Rationalizing the light-induced phase separation of mixed halide organic-inorganic perovskites.
Draguta, Sergiu; Sharia, Onise; Yoon, Seog Joon; Brennan, Michael C; Morozov, Yurii V; Manser, Joseph S; Kamat, Prashant V; Schneider, William F; Kuno, Masaru
2017-08-04
Mixed halide hybrid perovskites, CH3NH3Pb(I1-xBrx)3, represent good candidates for low-cost, high-efficiency photovoltaic and light-emitting devices. Their band gaps can be tuned from 1.6 to 2.3 eV by changing the halide anion identity. Unfortunately, mixed halide perovskites undergo phase separation under illumination. This leads to iodide- and bromide-rich domains along with corresponding changes to the material's optical/electrical response. Here, using combined spectroscopic measurements and theoretical modeling, we quantitatively rationalize all microscopic processes that occur during phase separation. Our model suggests that the driving force behind phase separation is the bandgap reduction of iodide-rich phases. It additionally explains observed non-linear intensity dependencies, as well as self-limited growth of iodide-rich domains. Most importantly, our model reveals that mixed halide perovskites can be stabilized against phase separation by deliberately engineering carrier diffusion lengths and injected carrier densities.
Biological activity of aldose reductase and lipophilicity of pyrrolyl-acetic acid derivatives
NASA Astrophysics Data System (ADS)
Kumari, A.; Kumari, R.; Kumar, R.; Gupta, M.
2011-12-01
Quantitative structure-activity relationship (QSAR) modeling is a powerful approach for correlating an organic compound's structure with its lipophilicity. In this paper, QSAR models are established for a series of pyrrolyl-acetic acid derivatives, inhibitors of the aldose reductase enzyme, correlating their lipophilicity in the n-octanol-water system with their aldose reductase inhibitory activity. Lipophilicity is expressed by the logarithm of the n-octanol-water partition coefficient, log P, and the biological activity by the logarithm of the aldose reductase inhibitory activity. Results obtained by QSAR modeling of the compound series reveal a definite trend in biological activity, and a further improvement in the quantitative relationships is obtained if, besides log P, the Hammett electronic constant σ and the connectivity index chi-3 (3χ) are included in the regression equation. The tri-parametric model with log P, 3χ and σ as correlating parameters has been found to be the best, explaining 87% of the variance (R2 = 0.8743). One compound has been found to be a serious outlier; when it is excluded, the model explains about 94% of the variance of the data set (R2 = 0.9447). The topological index 3χ has been found to be a good parameter for modeling the biological activity.
Diffusion rate limitations in actin-based propulsion of hard and deformable particles.
Dickinson, Richard B; Purich, Daniel L
2006-08-15
The mechanism by which actin polymerization propels intracellular vesicles and invasive microorganisms remains an open question. Several recent quantitative studies have examined propulsion of biomimetic particles such as polystyrene microspheres, phospholipid vesicles, and oil droplets. In addition to allowing quantitative measurement of parameters such as the dependence of particle speed on its size, these systems have also revealed characteristic behaviors such as saltatory motion of hard particles and oscillatory deformation of soft particles. Such measurements and observations provide tests for proposed mechanisms of actin-based motility. In the actoclampin filament end-tracking motor model, particle-surface-bound filament end-tracking proteins are involved in load-insensitive processive insertion of actin subunits onto elongating filament plus-ends that are persistently tethered to the surface. In contrast, the tethered-ratchet model assumes working filaments are untethered and the free-ended filaments grow as thermal ratchets in a load-sensitive manner. This article presents a model for the diffusion and consumption of actin monomers during actin-based particle propulsion to predict the monomer concentration field around motile particles. The results suggest that the various behaviors of biomimetic particles, including dynamic saltatory motion of hard particles and oscillatory vesicle deformations, can be quantitatively and self-consistently explained by load-insensitive, diffusion-limited elongation of (+)-end-tethered actin filaments, consistent with predictions of the actoclampin filament-end tracking mechanism.
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
Quantitative Genetic Modeling of the Parental Care Hypothesis for the Evolution of Endothermy
Bacigalupe, Leonardo D.; Moore, Allen J.; Nespolo, Roberto F.; Rezende, Enrico L.; Bozinovic, Francisco
2017-01-01
There are two heuristic explanations proposed for the evolution of endothermy in vertebrates: a correlated response to selection for stable body temperatures, or as a correlated response to increased activity. Parental care has been suggested as a major driving force in this context given its impact on the parents' activity levels and energy budgets, and in the offspring's growth rates due to food provisioning and controlled incubation temperature. This results in a complex scenario involving multiple traits and transgenerational fitness benefits that can be hard to disentangle, quantify and ultimately test. Here we demonstrate how standard quantitative genetic models of maternal effects can be applied to study the evolution of endothermy, focusing on the interplay between daily energy expenditure (DEE) of the mother and growth rates of the offspring. Our model shows that maternal effects can dramatically exacerbate evolutionary responses to selection in comparison to regular univariate models (breeder's equation). This effect would emerge from indirect selection mediated by maternal effects concomitantly with a positive genetic covariance between DEE and growth rates. The multivariate nature of selection, which could favor a higher DEE, higher growth rates or both, might partly explain how high turnover rates were continuously favored in a self-reinforcing process. Overall, our quantitative genetic analysis provides support for the parental care hypothesis for the evolution of endothermy. We contend that much has to be gained from quantifying maternal and developmental effects on metabolic and thermoregulatory variation during adulthood. PMID:29311952
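The kind of amplification described can be illustrated with the multivariate breeder's equation. The sketch below is a simplification that omits an explicit maternal-effect coefficient, and its variance components and selection differentials are made up; it only shows how a positive genetic covariance between maternal DEE and offspring growth lets selection on growth drag DEE along (indirect selection).

```python
import numpy as np

# Multivariate breeder's equation  dz = G P^{-1} s  for two traits: [DEE, growth rate].
P = np.array([[1.0, 0.3],      # phenotypic (co)variance matrix (assumed)
              [0.3, 1.0]])
G = np.array([[0.4, 0.25],     # additive genetic (co)variance matrix (assumed)
              [0.25, 0.4]])
s = np.array([0.1, 0.5])       # selection differentials: selection acts mostly on growth

beta = np.linalg.solve(P, s)   # selection gradients
dz_multi = G @ beta            # response including indirect selection via the covariance
dz_uni = np.diag(G) / np.diag(P) * s   # trait-by-trait breeder's equation (h^2 * S)

print("univariate response   [DEE, growth]:", np.round(dz_uni, 3))
print("multivariate response [DEE, growth]:", np.round(dz_multi, 3))
```

With these numbers the predicted change in DEE is roughly 2-3 times larger once the genetic covariance is included, which is the qualitative point of the abstract; full maternal-effect models of the Kirkpatrick-Lande type add a further transgenerational term on top of this.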
Quantitative Reasoning in Problem Solving
ERIC Educational Resources Information Center
Ramful, Ajay; Ho, Siew Yin
2015-01-01
In this article, Ajay Ramful and Siew Yin Ho explain the meaning of quantitative reasoning, describing how it is used to solve mathematical problems. They also describe a diagrammatic approach to represent relationships among quantities and provide examples of problems and their solutions.
Young adults' decision making surrounding heavy drinking: a multi-staged model of planned behaviour.
Northcote, Jeremy
2011-06-01
This paper examines the real life contexts in which decisions surrounding heavy drinking are made by young adults (that is, on occasions when five or more alcoholic drinks are consumed within a few hours). It presents a conceptual model that views such decision making as a multi-faceted and multi-staged process. The mixed method study draws on purposive data gathered through direct observation of eight social networks consisting of 81 young adults aged between 18 and 25 years in Perth, Western Australia, including in-depth interviews with 31 participants. Qualitative and some basic quantitative data were gathered using participant observation and in-depth interviews undertaken over an eighteen month period. Participants explained their decision to engage in heavy drinking as based on a variety of factors. These elements relate to socio-cultural norms and expectancies that are best explained by the theory of planned behaviour. A framework is proposed that characterises heavy drinking as taking place in a multi-staged manner, with young adults having: 1. A generalised orientation to the value of heavy drinking shaped by wider influences and norms; 2. A short-term orientation shaped by situational factors that determines drinking intentions for specific events; and 3. An evaluative orientation shaped by moderating factors. The value of qualitative studies of decision making in real life contexts is advanced to complement the mostly quantitative research that dominates research on alcohol decision making. Copyright © 2011 Elsevier Ltd. All rights reserved.
Szekeres Swiss-cheese model and supernova observations
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof; Célérier, Marie-Noëlle
2010-11-01
We use different particular classes of axially symmetric Szekeres Swiss-cheese models for the study of the apparent dimming of the supernovae of type Ia. We compare the results with those obtained in the corresponding Lemaître-Tolman Swiss-cheese models. Although the quantitative picture is different the qualitative results are comparable, i.e., one cannot fully explain the dimming of the supernovae using small-scale (˜50Mpc) inhomogeneities. To fit successfully the data we need structures of order of 500 Mpc size or larger. However, this result might be an artifact due to the use of axial light rays in axially symmetric models. Anyhow, this work is a first step in trying to use Szekeres Swiss-cheese models in cosmology and it will be followed by the study of more physical models with still less symmetry.
A MODEL FOR INTERFACE DYNAMOS IN LATE K AND EARLY M DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullan, D. J.; MacDonald, J.; Houdebine, E. R., E-mail: mullan@udel.edu
2015-09-10
Measurements of the equivalent width EW(CaK) of emission in the Ca ii K line have been obtained by Houdebine et al. for stars with spectral types from dK5 to dM4. In order to explain the observed variations of EW(CaK) with spectral sub-type, we propose a quantitative model of interface dynamos in low-mass stars. Our model leads to surface field strengths B_s which turn out to be essentially linearly proportional to EW(CaK). This result is reminiscent of the Sun, where Skumanich et al. found that the intensity of CaK emission in solar active regions is linearly proportional to the local field strength.
Agent-based modeling: case study in cleavage furrow models.
Mogilner, Alex; Manhart, Angelika
2016-11-07
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as "differential equation based" (DE) or "agent based" (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem, the positioning of the cleavage furrow in dividing cells, to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. © 2016 Mogilner and Manhart. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Learning physical descriptors for materials science by compressed sensing
NASA Astrophysics Data System (ADS)
Ghiringhelli, Luca M.; Vybiral, Jan; Ahmetcik, Emre; Ouyang, Runhai; Levchenko, Sergey V.; Draxl, Claudia; Scheffler, Matthias
2017-02-01
The availability of big data in materials science offers new routes for analyzing materials properties and functions and achieving scientific understanding. Finding structure in these data that is not directly visible by standard tools and exploitation of the scientific information requires new and dedicated methodology based on approaches from statistical learning, compressed sensing, and other recent methods from applied mathematics, computer science, statistics, signal processing, and information science. In this paper, we explain and demonstrate a compressed-sensing based methodology for feature selection, specifically for discovering physical descriptors, i.e., physical parameters that describe the material and its properties of interest, and associated equations that explicitly and quantitatively describe those relevant properties. As showcase application and proof of concept, we describe how to build a physical model for the quantitative prediction of the crystal structure of binary compound semiconductors.
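A minimal sketch of the compressed-sensing-style descriptor selection described above, on synthetic data; LASSO as implemented in scikit-learn stands in here for the specific sparsifying method, and the feature counts and coefficients are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Generate many candidate descriptors, of which only a few actually determine the
# property, and recover them with an L1-regularized (sparsity-promoting) fit.
rng = np.random.default_rng(0)
n_materials, n_descriptors = 80, 200
X = rng.normal(size=(n_materials, n_descriptors))        # candidate descriptors
true_idx = [3, 17, 42]                                   # the few "physical" ones (assumed)
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.7 * X[:, 42] + rng.normal(0, 0.1, n_materials)

model = LassoCV(cv=5, max_iter=5000).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("true descriptors:    ", true_idx)
print("selected descriptors:", selected.tolist())
print("their coefficients:  ", np.round(model.coef_[selected], 2))
```

With many more candidate descriptors than materials, the L1 penalty drives most coefficients to zero, leaving a small descriptor set (possibly with a few small spurious entries) from which an explicit, interpretable model of the target property can be built.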
'Single molecule': theory and experiments, an introduction
2013-01-01
At scales below micrometers, Brownian motion dictates most behaviors. The simple observation of a colloid is striking: permanent, random motion is seen, whereas inertial forces play a negligible role. This physics, where velocity is proportional to force, has opened new horizons in biology. The random feature is challenged in living systems, where some proteins - molecular motors - exhibit directed motion, whereas their passive behavior as colloids should lead to Brownian motion. Individual proteins, polymers of living matter such as DNA, RNA, actin or microtubules, and molecular motors can all be viewed as chains of colloids. They are subjected to shocks from molecules of the solvent. Shapes taken by these biopolymers, or the dynamics imposed by motors, can be measured and modeled from single molecules up to their collective effects. Thanks to the development of experimental methods such as optical tweezers, the Atomic Force Microscope (AFM), micropipettes, and quantitative fluorescence (such as Förster Resonance Energy Transfer, FRET), it is possible to manipulate individual biomolecules in an unprecedented manner: experiments make it possible to probe the validity of models, and a new physics has thereby emerged with original biological insights. Theories based on statistical mechanics are needed to explain the behaviors of these systems. When force-extension curves of these molecules are extracted, the curves need to be fitted with models that predict the deformation of objects that are free or subjected to a force. When the velocity of motors is altered, a quantitative analysis is required to explain the motions of individual molecules under external forces. This lecture gives some elements of introduction to the lectures of the session 'Nanophysics for Molecular Biology'. PMID:24565227
Shuryak, Igor; Dadachova, Ekaterina
2016-01-01
Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate, than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental conditions (e.g. fluctuations of temperature and/or nutrient levels) coincide with radioactive contamination; (2) an organism’s radioresistance and bioremediation efficiency in rich laboratory media may be insufficient to carry out radionuclide bioremediation in the field—robustness against multiple stressors is needed. PMID:26808049
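A toy model of the interplay the analysis points to is sketched below. The functional forms, parameter values, and units are assumptions for illustration, not the fitted models from the paper: radiation suppresses reproduction and raises mortality, an extra stressor lowers the birth rate, and the critical dose rate is where net growth crosses zero.

```python
import numpy as np
from scipy.optimize import brentq

# Net population growth at chronic dose rate D, with a constant removal (dilution) rate r:
#   growth(D) = b / (1 + D/D_b)  -  m * (1 + D/D_m)  -  r
# D_b controls reproduction suppression, D_m controls radiation-induced mortality.
def growth(D, b=1.0, m=0.1, r=0.2, D_b=5.0, D_m=50.0):
    return b / (1.0 + D / D_b) - m * (1.0 + D / D_m) - r

def critical_dose_rate(**kw):
    """Dose rate at which net growth falls to zero."""
    return brentq(lambda D: growth(D, **kw), 0.0, 1e4)

print("baseline critical dose rate:          %.1f (arbitrary dose-rate units)"
      % critical_dose_rate())
# An additional stressor (chemical contaminant, poor nutrients) that halves the birth
# rate sharply lowers the dose rate the population can tolerate:
print("with birth rate halved by a stressor: %.1f" % critical_dose_rate(b=0.5))
# Removing reproduction suppression (D_b -> infinity) leaves only mortality, and the
# tolerated dose rate becomes far higher, illustrating point (4) of the abstract:
print("mortality-only radiation effect:      %.1f" % critical_dose_rate(D_b=1e12))
```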
Quantitative Evaluation of Musical Scale Tunings
ERIC Educational Resources Information Center
Hall, Donald E.
1974-01-01
The acoustical and mathematical basis of the problem of tuning the twelve-tone chromatic scale is reviewed. A quantitative measurement showing how well any tuning succeeds in providing just intonation for any specific piece of music is explained and applied to musical examples using a simple computer program. (DT)
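The kind of computation involved can be sketched for the simplest case, the deviation of 12-tone equal temperament from just intonation; the measure described above additionally weights such deviations by how often each interval occurs in a specific piece, which is not reproduced here.

```python
from math import log2

# Deviation (in cents) of 12-tone equal temperament (ET) from just intonation.
# 1 cent = 1/100 of an ET semitone; an interval of frequency ratio r spans 1200*log2(r) cents.
just_intervals = {
    "minor third (6/5)":    (3, 6 / 5),
    "major third (5/4)":    (4, 5 / 4),
    "perfect fourth (4/3)": (5, 4 / 3),
    "perfect fifth (3/2)":  (7, 3 / 2),
    "major sixth (5/3)":    (9, 5 / 3),
}

for name, (semitones, ratio) in just_intervals.items():
    equal_cents = 100.0 * semitones
    just_cents = 1200.0 * log2(ratio)
    print(f"{name:22s} ET {equal_cents:6.1f} c, just {just_cents:6.1f} c, "
          f"deviation {equal_cents - just_cents:+5.1f} c")
```

The thirds and sixths come out roughly 14-16 cents away from their just ratios while the fourth and fifth are within about 2 cents, which is why a piece-weighted measure of tuning quality depends so strongly on which intervals the music actually uses.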
Agent based modeling of the coevolution of hostility and pacifism
NASA Astrophysics Data System (ADS)
Dalmagro, Fermin; Jimenez, Juan
2015-01-01
We propose a model based on a population of agents whose states represent either hostile or peaceful behavior. Randomly selected pairs of agents interact according to a variation of the Prisoner's Dilemma game, and the probabilities that the agents behave aggressively or not are constantly updated by the model, so that the agents that remain in the game are those with the highest fitness. We show that the population of agents oscillates between generalized conflict and global peace, without reaching a stable state. We then use this model to explain some of the emergent behaviors in collective conflicts, by comparing the simulated results with empirical data obtained from social systems. In particular, using public data reports we show how the model precisely reproduces interesting quantitative characteristics of diverse types of armed conflicts, public protests, riots and strikes.
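A toy sketch of the machinery such a model needs (probabilistic hostile/peaceful strategies, Prisoner's-Dilemma-like payoffs, fitness-based replacement with mutation) is given below. The paper's actual game variation and update rule differ, and this sketch is not claimed to reproduce the reported conflict/peace oscillations; it only illustrates the simulation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N, rounds = 200, 2000
p_hostile = rng.uniform(0, 1, N)            # each agent's probability of acting hostile
fitness = np.zeros(N)

# payoff[my_move, their_move]; 0 = peaceful, 1 = hostile (standard PD-style values, assumed)
payoff = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

for t in range(rounds):
    pair = rng.choice(N, size=2, replace=False)
    moves = (rng.random(2) < p_hostile[pair]).astype(int)
    fitness[pair[0]] += payoff[moves[0], moves[1]]
    fitness[pair[1]] += payoff[moves[1], moves[0]]

    if t % 50 == 49:                        # periodically replace the least fit agent
        worst, best = np.argmin(fitness), np.argmax(fitness)
        p_hostile[worst] = np.clip(p_hostile[best] + rng.normal(0, 0.05), 0, 1)
        fitness[worst] = fitness[best]

    if t % 500 == 499:
        print(f"round {t+1:4d}: mean hostility probability = {p_hostile.mean():.2f}")
```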
Delbruck Prize Award: Insights into HIV Dynamics and Cure
NASA Astrophysics Data System (ADS)
Perelson, Alan S.
A large effort is being made to find a means to cure HIV infection. I will present a dynamical model of a phenomenon called post-treatment control (PTC), or functional cure, of HIV infection, in which some patients treated with suppressive antiviral therapy have been taken off therapy and then spontaneously control HIV infection. The model relies on an immune response and bistability to explain PTC. I will then generalize the model to explicitly include immunotherapy with monoclonal antibodies approved for use in cancer, to show that one can induce PTC with a limited number of antibody infusions, and compare model predictions with experiments in SIV-infected macaques given immunotherapy. Lastly, I will argue that quantitative insights derived from models of HIV infection have played, and will continue to play, an important role in medicine.
TEMIME, L.; HEJBLUM, G.; SETBON, M.; VALLERON, A. J.
2008-01-01
SUMMARY Mathematical modelling of infectious diseases has gradually become part of public health decision-making in recent years. However, the developing status of modelling in epidemiology and its relationship with other relevant scientific approaches have never been assessed quantitatively. Herein, using antibiotic resistance as a case study, 60 published models were analysed. Their interactions with other scientific fields are reported and their citation impact evaluated, as well as temporal trends. The yearly number of antibiotic resistance modelling publications increased significantly between 1990 and 2006. This rise cannot be explained by the surge of interest in resistance phenomena alone. Moreover, modelling articles are, on average, among the most frequently cited third of articles from the journal in which they were published. The results of this analysis, which might be applicable to other emerging public health problems, demonstrate the growing interest in mathematical modelling approaches to evaluate antibiotic resistance. PMID:17767792
Toward inflation models compatible with the no-boundary proposal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Dong-il; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: innocent.yeom@gmail.com
2014-06-01
In this paper, we investigate various inflation models in the context of the no-boundary proposal. We propose that a good inflation model should satisfy three conditions: observational constraints, plausible initial conditions, and naturalness of the model. For various inflation models, we assign the probability to each initial condition using the no-boundary proposal and define a quantitative standard, typicality, to check whether the model satisfies the observational constraints with probable initial conditions. There are three possible ways to satisfy the typicality criterion: there was pre-inflation near the high energy scale, the potential is finely tuned or the inflationary field space is unbounded, or there is a sufficient number of fields that contribute to inflation. The no-boundary proposal rejects some naive inflation models, explains some traditional doubts about inflation, and, possibly, can have observational consequences.
Min, Kyoung Ah; Talattof, Arjang; Tsume, Yasuhiro; Stringer, Kathleen A; Yu, Jing-Yu; Lim, Dong Hyun; Rosania, Gus R
2013-08-01
We sought to identify key variables in cellular architecture and physiology that might explain observed differences in the passive transport properties of small molecule drugs across different airway epithelial cell types. Propranolol (PR) was selected as a weakly basic, model compound to compare the transport properties of primary (NHBE) vs. tumor-derived (Calu-3) cells. Differentiated on Transwell™ inserts, the architecture of pure vs. mixed cell co-cultures was studied with confocal microscopy followed by quantitative morphometric analysis. Cellular pharmacokinetic modeling was used to identify parameters that differentially affect PR uptake and transport across these two cell types. Pure Calu-3 and NHBE cells possessed different structural and functional properties. Nevertheless, mixed Calu-3 and NHBE cell co-cultures differentiated as stable cell monolayers. After measuring the total mass of PR, the fractional areas covered by Calu-3 and NHBE cells allowed deconvoluting the transport properties of each cell type. Based on the apparent thickness of the unstirred, cell surface aqueous layer, local differences in the extracellular microenvironment explained the measured variations in passive PR uptake and permeation between Calu-3 and NHBE cells. Mixed cell co-cultures can be used to compare the local effects of the extracellular microenvironment on drug uptake and transport across two epithelial cell types.
Min, Kyoung Ah; Talattof, Arjang; Tsume, Yasuhiro; Stringer, Kathleen A.; Yu, Jing-yu; Lim, Dong Hyun; Rosania, Gus R.
2013-01-01
Purpose We sought to identify key variables in cellular architecture and physiology that might explain observed differences in the passive transport properties of small molecule drugs across different airway epithelial cell types. Methods Propranolol (PR) was selected as a weakly basic, model compound to compare the transport properties of primary (NHBE) vs. tumor-derived (Calu-3) cells. Differentiated on Transwell™ inserts, the architecture of pure vs. mixed cell co-cultures was studied with confocal microscopy followed by quantitative morphometric analysis. Cellular pharmacokinetic modeling was used to identify parameters that differentially affect PR uptake and transport across these two cell types. Results Pure Calu-3 and NHBE cells possessed different structural and functional properties. Nevertheless, mixed Calu-3 and NHBE cell co-cultures differentiated as stable cell monolayers. After measuring the total mass of PR, the fractional areas covered by Calu-3 and NHBE cells allowed deconvoluting the transport properties of each cell type. Based on the apparent thickness of the unstirred, cell surface aqueous layer, local differences in extracellular microenvironment explained the measured variations in passive PR uptake and permeation between Calu-3 and NHBE cells. Conclusion Mixed cell co-cultures can be used to compare the local effects of the extracellular microenvironment on drug uptake and transport across two epithelial cell types. PMID:23708857
Jacobson, Robert B.; Parsley, Michael J.; Annis, Mandy L.; Colvin, Michael E.; Welker, Timothy L.; James, Daniel A.
2016-01-20
The initial set of candidate hypotheses provides a useful starting point for quantitative modeling and adaptive management of the river and species. We anticipate that hypotheses will change from the set of working management hypotheses as adaptive management progresses. Importantly, hypotheses that have been filtered out of our multistep process are not discarded: these filtered hypotheses are archived, and if the existing hypotheses are determined to be inadequate to explain observed population dynamics, new hypotheses can be created or filtered hypotheses can be reinstated.
Common Lognormal Behavior in Legal Systems
NASA Astrophysics Data System (ADS)
Yamamoto, Ken
2017-07-01
This study characterizes a statistical property of legal systems: the distribution of the number of articles in a law follows a lognormal distribution. This property is common to Japanese, German, and Singaporean laws. To explain this lognormal behavior, the tree structure of a law is analyzed. If the depth of the tree follows a normal distribution, the lognormal distribution of the number of articles can be derived theoretically. We analyze the structure of the Japanese laws using chapters, sections, and other levels of organization, and this analysis demonstrates that the proposed model is quantitatively reasonable.
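The derivation can be checked numerically in a few lines: if the number of articles grows roughly geometrically with tree depth and the depth is normally distributed, the article count is lognormal. The branching factor and depth distribution below are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption for illustration: each extra level of a law's tree multiplies the
# number of articles by a constant branching factor b, and the depth is normal.
b = 3.0
depth = rng.normal(loc=4.0, scale=1.0, size=100_000)   # tree depth ~ Normal
articles = b ** depth                                  # articles grow with depth

# If articles = b**depth, then log(articles) = depth * log(b) is normal,
# i.e. the article count is lognormal. Check the moments of the log:
log_a = np.log(articles)
print("mean of log(articles):", log_a.mean(), "expected:", 4.0 * np.log(b))
print("std  of log(articles):", log_a.std(),  "expected:", 1.0 * np.log(b))
```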
Analog of small Holstein polaron in hydrogen-bonded amide systems
NASA Astrophysics Data System (ADS)
Alexander, D. M.
1985-01-01
A class of amide-I (C=O stretch) related excitations and their contribution to the spectral function for infrared absorption is determined by use of the Davydov Hamiltonian. The treatment is fully quantum and at finite temperature. A consistent picture and a quantitative fit to the absorption data for crystalline acetanilide confirm that the model adequately explains the anomalous behavior cited by Careri et al. The localized excitation responsible for this behavior is the vibronic analog of the small Holstein polaron. The possible extension to other modes and the biological relevance are examined.
Towards a neuro-computational account of prism adaptation.
Petitet, Pierre; O'Reilly, Jill X; O'Shea, Jacinta
2017-12-14
Prism adaptation has a long history as an experimental paradigm used to investigate the functional and neural processes that underlie sensorimotor control. In the neuropsychology literature, prism adaptation behaviour is typically explained by reference to a traditional cognitive psychology framework that distinguishes putative functions, such as 'strategic control' versus 'spatial realignment'. This theoretical framework lacks conceptual clarity, quantitative precision and explanatory power. Here, we advocate for an alternative computational framework that offers several advantages: 1) an algorithmic explanatory account of the computations and operations that drive behaviour; 2) expressed in quantitative mathematical terms; 3) embedded within a principled theoretical framework (Bayesian decision theory, state-space modelling); 4) that offers a means to generate and test quantitative behavioural predictions. This computational framework offers a route towards mechanistic neurocognitive explanations of prism adaptation behaviour. Thus, it constitutes a conceptual advance compared to the traditional theoretical framework. In this paper, we illustrate how Bayesian decision theory and state-space models offer principled explanations for a range of behavioural phenomena in the field of prism adaptation (e.g. visual capture, magnitude of visual versus proprioceptive realignment, spontaneous recovery and dynamics of adaptation memory). We argue that this explanatory framework can advance understanding of the functional and neural mechanisms that implement prism adaptation behaviour, by enabling quantitative tests of hypotheses that go beyond merely descriptive mapping claims that 'brain area X is (somehow) involved in psychological process Y'. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
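One concrete instance of the state-space modelling advocated here is the widely used two-state (fast/slow) model of trial-by-trial adaptation. The sketch below is that generic model with arbitrary parameters; it is not the authors' specific implementation and is included only to make the formalism explicit.

```python
import numpy as np

# Generic two-state model of trial-by-trial adaptation:
#   x_fast(t+1) = A_f * x_fast(t) + B_f * e(t)
#   x_slow(t+1) = A_s * x_slow(t) + B_s * e(t)
#   e(t) = perturbation - (x_fast(t) + x_slow(t))
# Parameter values are illustrative, not fitted to prism-adaptation data.
A_f, B_f = 0.60, 0.20    # fast process: forgets quickly, learns quickly
A_s, B_s = 0.99, 0.02    # slow process: retains well, learns slowly

def simulate(perturbation):
    x_f = x_s = 0.0
    out = []
    for p in perturbation:
        e = p - (x_f + x_s)                 # error on this trial
        x_f = A_f * x_f + B_f * e
        x_s = A_s * x_s + B_s * e
        out.append(x_f + x_s)
    return np.array(out)

# 100 prism-exposure trials (10 deg shift) followed by 50 washout trials (0 deg):
schedule = np.concatenate([np.full(100, 10.0), np.zeros(50)])
adaptation = simulate(schedule)
print("end of exposure:", round(adaptation[99], 2),
      "after 10 washout trials:", round(adaptation[109], 2))
```

Fitting the A and B parameters to error time courses is one way such models yield the kind of quantitative behavioural predictions the abstract refers to.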
Sariaslan, A; Larsson, H; Fazel, S
2016-01-01
Patients diagnosed with psychotic disorders (for example, schizophrenia and bipolar disorder) have elevated risks of committing violent acts, particularly if they are comorbid with substance misuse. Despite recent insights from quantitative and molecular genetic studies demonstrating considerable pleiotropy in the genetic architecture of these phenotypes, there is currently a lack of large-scale studies that have specifically examined the aetiological links between psychotic disorders and violence. Using a sample of all Swedish individuals born between 1958 and 1989 (n=3 332 101), we identified a total of 923 259 twin-sibling pairs. Patients were identified in the National Patient Register using validated algorithms based on International Classification of Diseases (ICD) 8–10. Univariate quantitative genetic models revealed that all phenotypes (schizophrenia, bipolar disorder, substance misuse, and violent crime) were highly heritable (h2=53–71%). Multivariate models further revealed that schizophrenia was a stronger predictor of violence (r=0.32; 95% confidence interval: 0.30–0.33) than bipolar disorder (r=0.23; 0.21–0.25), and large proportions (51–67%) of these phenotypic correlations were explained by genetic factors shared between each disorder, substance misuse, and violence. Importantly, we found that genetic influences that were unrelated to substance misuse explained approximately a fifth (21%; 20–22%) of the correlation with violent criminality in bipolar disorder but none of the same correlation in schizophrenia (P for bipolar disorder < 0.001; P for schizophrenia = 0.55). These findings highlight the problems of not disentangling common and unique sources of covariance across genetically similar phenotypes, as the latter sources may include aetiologically important clues. Clinically, these findings underline the importance of assessing risk of different phenotypes together and integrating interventions for psychiatric disorders, substance misuse, and violence. PMID:26666206
Vasconcelos, A G; Almeida, R M; Nobre, F F
2001-08-01
This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria intended to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to (i.e., "dominated") the others. The best-ranked model explained more than 90% of the variation in the endogenous variables, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.
A population genetic interpretation of GWAS findings for human quantitative traits
Bullaughey, Kevin; Hudson, Richard R.; Sella, Guy
2018-01-01
Human genome-wide association studies (GWASs) are revealing the genetic architecture of anthropometric and biomedical traits, i.e., the frequencies and effect sizes of variants that contribute to heritable variation in a trait. To interpret these findings, we need to understand how genetic architecture is shaped by basic population genetics processes—notably, by mutation, natural selection, and genetic drift. Because many quantitative traits are subject to stabilizing selection and because genetic variation that affects one trait often affects many others, we model the genetic architecture of a focal trait that arises under stabilizing selection in a multidimensional trait space. We solve the model for the phenotypic distribution and allelic dynamics at steady state and derive robust, closed-form solutions for summary statistics of the genetic architecture. Our results provide a simple interpretation for missing heritability and why it varies among traits. They predict that the distribution of variances contributed by loci identified in GWASs is well approximated by a simple functional form that depends on a single parameter: the expected contribution to genetic variance of a strongly selected site affecting the trait. We test this prediction against the results of GWASs for height and body mass index (BMI) and find that it fits the data well, allowing us to make inferences about the degree of pleiotropy and mutational target size for these traits. Our findings help to explain why the GWAS for height explains more of the heritable variance than the similarly sized GWAS for BMI and to predict the increase in explained heritability with study sample size. Considering the demographic history of European populations, in which these GWASs were performed, we further find that most of the associations they identified likely involve mutations that arose shortly before or during the Out-of-Africa bottleneck at sites with selection coefficients around s = 10⁻³. PMID:29547617
2017-01-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
Quantitative Experiments to Explain the Change of Seasons
ERIC Educational Resources Information Center
Testa, Italo; Busarello, Gianni; Puddu, Emanuella; Leccia, Silvio; Merluzzi, Paola; Colantonio, Arturo; Moretti, Maria Ida; Galano, Silvia; Zappia, Alessandro
2015-01-01
The science education literature shows that students have difficulty understanding what causes the seasons. Incorrect explanations are often due to a lack of knowledge about the physical mechanisms underlying this phenomenon. To address this, we present a module in which the students engage in quantitative measurements with a photovoltaic panel to…
Paradigm Privilege: Determining the Value of Research in Teacher Education Policy Making.
ERIC Educational Resources Information Center
Bales, Barbara L.
This paper explains that despite the long debate over the relative value of quantitative and qualitative educational research and attempts to talk across disciplines, quantitative research dominates educational policy circles. As a result, quality qualitative research may not enter into educational policy conversations. The paper discusses whether…
Teenagers' Explanations of Bullying
ERIC Educational Resources Information Center
Thornberg, Robert; Knutsen, Sven
2011-01-01
The aim of the present study was to explore how teenagers explain why bullying takes place at school, and whether there were any differences in explaining bullying due to gender and prior bullying experiences. One hundred and seventy-six Swedish students in Grade 9 responded to a questionnaire. Mixed methods (qualitative and quantitative methods)…
Jiang, Qi; Zeng, Huidan; Liu, Zhao; Ren, Jing; Chen, Guorong; Wang, Zhaofeng; Sun, Luyi; Zhao, Donghui
2013-09-28
Sodium borophosphate glasses exhibit an intriguing mixed network former effect, with the nonlinear compositional dependence of their glass transition temperature as one of the most typical examples. In this paper, we establish a widely applicable topological constraint model of sodium borophosphate mixed network former glasses to explain the relationship between the internal structure and the nonlinear changes of the glass transition temperature. The topological network approach is discussed in detail in terms of a unified methodology for the quantitative distribution of the differently coordinated boron and phosphorus units and for the dependence of the glass transition temperature on atomic constraints. An accurate prediction of the compositional scaling of the glass transition temperature was obtained based on the topological constraint model.
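A common quantitative core of temperature-dependent constraint theory is the scaling relation of Gupta and Mauro, Tg(x)/Tg(x_ref) = (d − n(x_ref))/(d − n(x)), with d = 3 and n(x) the number of rigid constraints per atom. The sketch below applies that relation with placeholder constraint counts; it is not the paper's derived constraint enumeration for borophosphate units.

```python
# Minimal sketch of the topological-constraint Tg scaling (Gupta & Mauro, 2009):
#   Tg(x) / Tg(x_ref) = (d - n(x_ref)) / (d - n(x)),
# with d = 3 and n(x) the number of rigid constraints per atom at composition x.
# The constraint counts below are placeholders, not values derived in the paper.
d = 3.0

def tg_from_constraints(n_x, n_ref, tg_ref):
    """Predict Tg at composition x from constraints per atom, relative to a reference."""
    return tg_ref * (d - n_ref) / (d - n_x)

# Illustrative use: a reference glass with Tg = 550 K and n = 2.4 constraints/atom,
# compared with compositions whose constraint count rises then falls, which would
# produce the kind of non-monotonic Tg(x) the abstract describes.
n_ref, tg_ref = 2.4, 550.0
for n_x in (2.4, 2.6, 2.7, 2.6, 2.5):
    print(f"n = {n_x:.1f} constraints/atom -> Tg approx. "
          f"{tg_from_constraints(n_x, n_ref, tg_ref):.0f} K")
```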
Thresholds and the rising pion inclusive cross section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, S.T.
In the context of the hypothesis of the Pomeron-f identity, it is shown that the rising pion inclusive cross section can be explained over a wide range of energies as a series of threshold effects. Low-mass thresholds are seen to be important. In order to understand the contributions of high-mass thresholds (flavoring), a simple two-channel multiperipheral model is examined. The analysis sheds light on the relation between thresholds and Mueller-Regge couplings. In particular, it is seen that inclusive- and total-cross-section threshold mechanisms may differ. A quantitative model based on this idea and utilizing previous total-cross-section fits is seen to agree well with experiment.
Genetics of common forms of heart failure: challenges and potential solutions.
Rau, Christoph D; Lusis, Aldons J; Wang, Yibin
2015-05-01
In contrast to many other human diseases, the use of genome-wide association studies (GWAS) to identify genes for heart failure (HF) has had limited success. We will discuss the underlying challenges as well as potential new approaches to understanding the genetics of common forms of HF. Recent research using intermediate phenotypes, more detailed and quantitative stratification of HF symptoms, founder populations and novel animal models has begun to allow researchers to make headway toward explaining the genetics underlying HF using GWAS techniques. By expanding analyses of HF to improved clinical traits, additional HF classifications and innovative model systems, the intractability of human HF GWAS should be ameliorated significantly.
A Simple Model for Immature Retrovirus Capsid Assembly
NASA Astrophysics Data System (ADS)
Paquay, Stefan; van der Schoot, Paul; Dragnea, Bogdan
In this talk I will present simulations of a simple model for capsomeres in immature virus capsids, consisting of only point particles with a tunable range of attraction constrained to a spherical surface. We find that, at sufficiently low density, a short interaction range is sufficient to suppress five-fold defects in the packing and instead causes larger tears and scars in the capsid. These findings agree both qualitatively and quantitatively with experiments on immature retrovirus capsids, implying that the structure of the retroviral protein lattice can, for a large part, be explained simply by the effective interaction between the capsomeres. We thank the HFSP for funding under Grant RGP0017/2012.
Measuring and modeling salience with the theory of visual attention.
Krüger, Alexander; Tünnermann, Jan; Scharlau, Ingrid
2017-08-01
For almost three decades, the theory of visual attention (TVA) has been successful in mathematically describing and explaining a wide variety of phenomena in visual selection and recognition with high quantitative precision. Interestingly, the influence of feature contrast on attention has been included in TVA only recently, although it has been extensively studied outside the TVA framework. The present approach further develops this extension of TVA's scope by measuring and modeling salience. An empirical measure of salience is achieved by linking different (orientation and luminance) contrasts to a TVA parameter. In the modeling part, the function relating feature contrasts to salience is described mathematically and tested against alternatives by Bayesian model comparison. This model comparison reveals that the power function is an appropriate model of salience growth in the dimensions of orientation and luminance contrast. Furthermore, if contrasts from the two dimensions are combined, salience adds up additively.
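The modeling step can be illustrated with synthetic data: generate salience values from a power law of contrast, recover the exponent with a log-log fit, and compare against a linear alternative. The data, noise model, and the use of BIC here (in place of the fuller Bayesian model comparison) are all assumptions for illustration; the real quantity in the paper is a TVA parameter estimated from behaviour, not simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "salience" measurements vs. feature contrast, generated from an
# assumed power law with multiplicative noise, purely for illustration.
contrast = np.linspace(5, 90, 12)                 # e.g. orientation contrast in deg
true_a, true_b = 0.08, 0.7
salience = true_a * contrast ** true_b * rng.lognormal(0.0, 0.05, contrast.size)

# Fit a power function by linear regression in log-log space: log s = log a + b log c
slope, intercept = np.polyfit(np.log(contrast), np.log(salience), 1)
a_hat, b_hat = np.exp(intercept), slope
print(f"fitted power function: salience ~ {a_hat:.3f} * contrast^{b_hat:.2f}")

# Crude model comparison against a linear alternative via BIC (a stand-in for the
# fuller Bayesian model comparison used in the paper).
def bic(residuals, k):
    n = residuals.size
    return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

power_resid = salience - a_hat * contrast ** b_hat
lin_coef = np.polyfit(contrast, salience, 1)
linear_resid = salience - np.polyval(lin_coef, contrast)
print("BIC power:", round(bic(power_resid, 2), 1),
      " BIC linear:", round(bic(linear_resid, 2), 1))
```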
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kolasa, Jurek; Allen, Craig R.; Sendzimir, Jan; Stow, Craig A.
2012-01-01
Interaction between habitat and species is central in ecology. Habitat structure may be conceived as being hierarchical, where larger, more diverse, portions or categories contain smaller, more homogeneous portions. When this conceptualization is combined with the observation that species have different abilities to relate to portions of the habitat that differ in their characteristics, a number of known patterns can be derived and new patterns hypothesized. We propose a quantitative form of this habitat–species relationship by considering species abundance to be a function of habitat specialization, habitat fragmentation, amount of habitat, and adult body mass. The model reproduces and explains patterns such as variation in rank–abundance curves, greater variation and extinction probabilities of habitat specialists, discontinuities in traits (abundance, ecological range, pattern of variation, body size) among species sharing a community or area, and triangular distribution of body sizes, among others. The model has affinities to Holling's textural discontinuity hypothesis and metacommunity theory but differs from both by offering a more general perspective. In support of the model, we illustrate its general potential to capture and explain several empirical observations that historically have been treated independently.
Reconciling intuitive physics and Newtonian mechanics for colliding objects.
Sanborn, Adam N; Mansinghka, Vikash K; Griffiths, Thomas L
2013-04-01
People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.
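The core of the noisy-Newton idea can be sketched in a few lines: the observer sees pre- and post-collision velocities corrupted by sensory noise and infers the mass ratio implied by momentum conservation. The noise level, the flat-prior shortcut, and the velocity values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

# One-dimensional collision: object 1 (velocity u1) hits object 2 (velocity u2).
# Newtonian momentum conservation gives the mass ratio
#   m1/m2 = (v2 - u2) / (u1 - v1)
# from pre-collision velocities (u1, u2) and post-collision velocities (v1, v2).
u1, u2, v1, v2 = 4.0, 0.0, 1.0, 3.0          # true velocities (true m1/m2 = 1)
sigma = 0.4                                   # assumed sensory noise (illustrative)

observed = np.array([u1, u2, v1, v2]) + rng.normal(0, sigma, 4)

# Crude posterior over the mass ratio: with a flat prior on velocities, the
# observer's belief about each true velocity is roughly Gaussian around its
# observation; propagate that uncertainty through the Newtonian formula.
samples = rng.normal(observed, sigma, size=(50_000, 4))
ratio = (samples[:, 3] - samples[:, 1]) / (samples[:, 0] - samples[:, 2])
ratio = ratio[(ratio > 0) & (ratio < 10)]     # keep physically sensible ratios

print("median judged m1/m2:", round(np.median(ratio), 2),
      " 90% interval:", np.round(np.percentile(ratio, [5, 95]), 2))
```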
Understanding quantitative research: part 1.
Hoe, Juanita; Hoare, Zoë
This article, which is the first in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Critical appraisal of research articles is essential to ensure that nurses remain up to date with evidence-based practice to provide consistent and high-quality nursing care. This article focuses on developing critical appraisal skills and understanding the use and implications of different quantitative approaches to research. Part two of this article will focus on explaining common statistical terms and the presentation of statistical data in quantitative research.
Altweck, Laura; Marshall, Tara C; Ferenczi, Nelli; Lefringhausen, Katharina
2015-01-01
Many families worldwide have at least one member with a behavioral or mental disorder, and yet the majority of the public fails to correctly recognize symptoms of mental illness. Previous research has found that Mental Health Literacy (MHL), the knowledge and positive beliefs about mental disorders, tends to be higher in European and North American cultures, compared to Asian and African cultures. Nonetheless, quantitative research examining the variables that explain this cultural difference remains limited. The purpose of our study was fourfold: (a) to validate measures of MHL cross-culturally, (b) to examine the MHL model quantitatively, (c) to investigate cultural differences in the MHL model, and (d) to examine collectivism as a predictor of MHL. We validated measures of MHL in European American and Indian samples. The results lend strong quantitative support to the MHL model. Recognition of symptoms of mental illness was a central variable: greater recognition predicted greater endorsement of social causes of mental illness and endorsement of professional help-seeking as well as lesser endorsement of lay help-seeking. The MHL model also showed an overwhelming cultural difference; namely, lay help-seeking beliefs played a central role in the Indian sample, and a negligible role in the European American sample. Further, collectivism was positively associated with causal beliefs of mental illness in the European American sample, and with lay help-seeking beliefs in the Indian sample. These findings demonstrate the importance of understanding cultural differences in beliefs about mental illness, particularly in relation to help-seeking beliefs.
Altweck, Laura; Marshall, Tara C.; Ferenczi, Nelli; Lefringhausen, Katharina
2015-01-01
Many families worldwide have at least one member with a behavioral or mental disorder, and yet the majority of the public fails to correctly recognize symptoms of mental illness. Previous research has found that Mental Health Literacy (MHL)—the knowledge and positive beliefs about mental disorders—tends to be higher in European and North American cultures, compared to Asian and African cultures. Nonetheless quantitative research examining the variables that explain this cultural difference remains limited. The purpose of our study was fourfold: (a) to validate measures of MHL cross-culturally, (b) to examine the MHL model quantitatively, (c) to investigate cultural differences in the MHL model, and (d) to examine collectivism as a predictor of MHL. We validated measures of MHL in European American and Indian samples. The results lend strong quantitative support to the MHL model. Recognition of symptoms of mental illness was a central variable: greater recognition predicted greater endorsement of social causes of mental illness and endorsement of professional help-seeking as well as lesser endorsement of lay help-seeking. The MHL model also showed an overwhelming cultural difference; namely, lay help-seeking beliefs played a central role in the Indian sample, and a negligible role in the European American sample. Further, collectivism was positively associated with causal beliefs of mental illness in the European American sample, and with lay help-seeking beliefs in the Indian sample. These findings demonstrate the importance of understanding cultural differences in beliefs about mental illness, particularly in relation to help-seeking beliefs. PMID:26441699
NASA Astrophysics Data System (ADS)
Sano, Yuji; Takahata, Naoto; Kagoshima, Takanori; Shibata, Tomo; Onoue, Tetsuji; Zhao, Dapeng
2016-11-01
Geochemical monitoring of groundwater and soil gas emission has pointed to precursory and/or coseismic anomalies of noble gases associated with earthquakes, but a plausible physico-chemical basis has been lacking. A laboratory experiment on rock fracturing and noble gas emission was conducted, but there is no quantitative connection between the laboratory results and observations in the field. We report here deep groundwater helium anomalies related to the 2016 Kumamoto earthquake, an inland crustal earthquake with a strike-slip fault and a shallow hypocenter (10 km depth) close to highly populated areas in Southwest Japan. The observed helium isotope changes, soon after the earthquake, are quantitatively coupled with volumetric strain changes estimated from a fault model, which can be explained by experimental studies of helium degassing during compressional loading of rock samples. Groundwater helium is thus considered an effective strain gauge. This suggests the first quantitative linkage between geochemical and seismological observations and may open the possibility of developing a new monitoring system to detect a possible strain change prior to a hazardous earthquake in regions where conventional borehole strain meters are not available.
A quantitative, comprehensive analytical model for "fast" magnetic reconnection in Hall MHD
NASA Astrophysics Data System (ADS)
Simakov, Andrei N.
2008-11-01
Magnetic reconnection in nature usually happens on fast (e.g. dissipation-independent) time scales. While such scales have been observed computationally [1], a fundamental analytical model capable of explaining them has been lacking. Here, we propose such a quantitative model for 2D Hall MHD reconnection without a guide field. The model recovers the Sweet-Parker and the electron MHD [2] results in the appropriate limits of the ion inertial length, di, and is valid everywhere in between [3]. The model predicts the dissipation region aspect ratio and the reconnection rate Ez in terms of dissipation and inertial parameters, and has been found to be in excellent agreement with non-linear simulations. It confirms a number of long-standing empirical results and resolves several controversies. In particular, we find that both open X-point and elongated dissipation regions allow "fast" reconnection and that Ez depends on di. Moreover, when applied to electron-positron plasmas, the model demonstrates that fast dispersive waves are not instrumental for "fast" reconnection [4]. [1] J. Birn et al., J. Geophys. Res. 106, 3715 (2001). [2] L. Chacón, A. N. Simakov, and A. Zocco, Phys. Rev. Lett. 99, 235001 (2007). [3] A. N. Simakov and L. Chacón, submitted to Phys. Rev. Lett. [4] L. Chacón, A. N. Simakov, V. Lukin, and A. Zocco, Phys. Rev. Lett. 101, 025003 (2008).
Resource Letter MPCVW-1: Modeling Political Conflict, Violence, and Wars: A Survey
NASA Astrophysics Data System (ADS)
Morgenstern, Ana P.; Velásquez, Nicolás; Manrique, Pedro; Hong, Qi; Johnson, Nicholas; Johnson, Neil
2013-11-01
This Resource Letter provides a guide into the literature on modeling and explaining political conflict, violence, and wars. Although this literature is dominated by social scientists, multidisciplinary work is currently being developed in the wake of myriad methodological approaches that have sought to analyze and predict political violence. The works covered herein present an overview of this abundance of methodological approaches. Since there is a variety of possible data sets and theoretical approaches, the level of detail and scope of models can vary quite considerably. The review does not provide a summary of the available data sets, but instead highlights recent works on quantitative or multi-method approaches to modeling different forms of political violence. Journal articles and books are organized in the following topics: social movements, diffusion of social movements, political violence, insurgencies and terrorism, and civil wars.
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Perrone, J. A.
1997-01-01
We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2-D Gaussian (σ ≈ 35°, r ≈ 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (±40°). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for shifts in stimulus placement of approximately 10°. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
Tang, Min; Zhao, Rui; van de Velde, Helgi; Tross, Jennifer G; Mitsiades, Constantine; Viselli, Suzanne; Neuwirth, Rachel; Esseltine, Dixie-Lee; Anderson, Kenneth; Ghobrial, Irene M; San Miguel, Jesús F; Richardson, Paul G; Tomasson, Michael H; Michor, Franziska
2016-08-15
Since the pioneering work of Salmon and Durie, quantitative measures of tumor burden in multiple myeloma have been used to make clinical predictions and model tumor growth. However, such quantitative analyses have not yet been performed on large datasets from trials using modern chemotherapy regimens. We analyzed a large set of tumor response data from three randomized controlled trials of bortezomib-based chemotherapy regimens (total sample size n = 1,469 patients) to establish and validate a novel mathematical model of multiple myeloma cell dynamics. Treatment dynamics in newly diagnosed patients were most consistent with a model postulating two tumor cell subpopulations, "progenitor cells" and "differentiated cells." Differential treatment responses were observed with significant tumoricidal effects on differentiated cells and less clear effects on progenitor cells. We validated this model using a second trial of newly diagnosed patients and a third trial of refractory patients. When applying our model to data of relapsed patients, we found that a hybrid model incorporating both a differentiation hierarchy and clonal evolution best explains the response patterns. The clinical data, together with mathematical modeling, suggest that bortezomib-based therapy exerts a selection pressure on myeloma cells that can shape the disease phenotype, thereby generating further inter-patient variability. This model may be a useful tool for improving our understanding of disease biology and the response to chemotherapy regimens. Clin Cancer Res; 22(16); 4206-14. ©2016 AACR. ©2016 American Association for Cancer Research.
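The two-subpopulation idea can be made concrete with a toy pair of differential equations in which therapy kills differentiated cells strongly and progenitor cells weakly; all rates below are arbitrary illustrative values, not parameters estimated from the trials.

```python
import numpy as np

# Toy two-compartment model (illustrative rates, not fitted trial parameters):
# progenitor cells P self-renew slowly and differentiate into D;
# differentiated cells D die naturally and are strongly killed by therapy.
a  = 0.010   # net progenitor self-renewal per day
c  = 0.050   # differentiation rate P -> D per day
dD = 0.020   # natural death rate of differentiated cells per day
kP = 0.002   # weak drug kill rate of progenitors per day
kD = 0.150   # strong drug kill rate of differentiated cells per day

def simulate(days, on_therapy, P0=1e9, D0=1e11, dt=0.05):
    """Euler integration; returns total tumor burden (P + D) over time."""
    P, D = P0, D0
    burden = []
    for step in range(int(days / dt)):
        drug = 1.0 if on_therapy(step * dt) else 0.0
        dP = (a - c - kP * drug) * P
        dDc = c * P - (dD + kD * drug) * D
        P, D = P + dP * dt, D + dDc * dt
        burden.append(P + D)
    return np.array(burden)

burden = simulate(120, on_therapy=lambda t: t < 90)   # 90 days on therapy, 30 off
print("burden relative to baseline at days 30, 90, 120:",
      [round(burden[int(d / 0.05) - 1] / burden[0], 3) for d in (30, 90, 120)])
```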
Influence of pore pressure change on coseismic volumetric strain
Wang, Chi-Yuen; Barbour, Andrew J.
2017-01-01
Coseismic strain is fundamentally important for understanding crustal response to changes of stress after earthquakes. The elastic dislocation model has been widely applied to interpreting observed shear deformation caused by earthquakes. The application of the same theory to interpreting volumetric strain, however, has met with difficulty, especially in the far field of earthquakes. Volumetric strain predicted with the dislocation model often differs substantially, and sometimes even in sign, from observed coseismic volumetric strain. The disagreement suggests that some processes unaccounted for by the dislocation model may occur during earthquakes. Several hypotheses have been suggested, but none have been tested quantitatively. In this paper we first examine published data to highlight the difference between the measured and calculated static coseismic volumetric strains; we then use these data to provide a quantitative test of the hypothesis that the disagreement may be explained by the change of pore pressure in the shallow crust. The test allows us to conclude that coseismic change of pore pressure may be an important mechanism for coseismic crustal strain and, in the far field, may even be the dominant mechanism. Thus, in the interpretation of observed coseismic crustal strain, one needs to account not only for the elastic strain due to fault rupture but also for the strain due to coseismic change of pore pressure.
Models of Jovian decametric radiation. [astronomical models of decametric waves
NASA Technical Reports Server (NTRS)
Smith, R. A.
1975-01-01
A critical review is presented of theoretical models of Jovian decametric radiation, with particular emphasis on the Io-modulated emission. The problem is divided into three broad aspects: (1) the mechanism coupling Io's orbital motion to the inner exosphere, (2) the consequent instability mechanism by which electromagnetic waves are amplified, and (3) the subsequent propagation of the waves in the source region and the Jovian plasmasphere. At present there exists no comprehensive theory that treats all of these aspects quantitatively within a single framework. Acceleration of particles by plasma sheaths near Io is proposed as an explanation for the coupling mechanism, while most of the properties of the emission may be explained in the context of cyclotron instability of a highly anisotropic distribution of streaming particles.
[The role of the Aedes aegypti vector in the epidemiology of dengue in Mexico].
Fernández-Salas, I; Flores-Leal, A
1995-01-01
The role of Aedes aegypti (Linnaeus) in the epidemiology of dengue fever in Mexico is discussed here on the basis of the vectorial capacity model. Comments on the advantages and disadvantages of each model component at the time of field determinations are also presented. Emphasis is placed on the impact of sampling and method bias on the results of vectorial capacity studies. The paper also addresses the need to increase vector biology knowledge as an input for epidemiological work to explain and predict dengue fever outbreaks. Comments on potential entomological variables not considered by the quantitative model are included. Finally, we elaborate on the introduction of Aedes albopictus (Skuse) in Mexico as a new risk factor and on its implications for the understanding of dengue fever transmission in Mexico.
HPLC study on the 'history' dependence of gramicidin A conformation in phospholipid model membranes.
Bañó, M C; Braco, L; Abad, C
1989-06-19
A novel HPLC methodology for the study of gramicidin A (GA) reconstituted in model membranes has been tested in comparison with circular dichroism data. It is shown that this chromatographic technique not only corroborates most of the recent spectroscopic results but also allows one to explain them in terms of mass fractions of different actual conformational species of GA in the phospholipid assemblies. In particular, the dependence of the inserted peptide configuration on the organic solvent and on other parameters involved in the 'history' of sample preparation and handling has been analyzed by HPLC in two phospholipid model systems: small unilamellar vesicles and micelles. Moreover, a slow conformational transition of GA towards a β6.3-helical configuration, accelerated by heat incubation, has also been visualized chromatographically and interpreted quantitatively.
Cheaib, Alissar; Badeau, Vincent; Boe, Julien; Chuine, Isabelle; Delire, Christine; Dufrêne, Eric; François, Christophe; Gritti, Emmanuel S; Legay, Myriam; Pagé, Christian; Thuiller, Wilfried; Viovy, Nicolas; Leadley, Paul
2012-06-01
Model-based projections of shifts in tree species range due to climate change are becoming an important decision support tool for forest management. However, poorly evaluated sources of uncertainty require more scrutiny before relying heavily on models for decision-making. We evaluated uncertainty arising from differences in model formulations of tree response to climate change based on a rigorous intercomparison of projections of tree distributions in France. We compared eight models ranging from niche-based to process-based models. On average, models project large range contractions of temperate tree species in lowlands due to climate change. There was substantial disagreement between models for temperate broadleaf deciduous tree species, but differences in the capacity of models to account for rising CO2 impacts explained much of the disagreement. There was good quantitative agreement among models concerning the range contractions for Scots pine. For the dominant Mediterranean tree species, Holm oak, all models foresee substantial range expansion. © 2012 Blackwell Publishing Ltd/CNRS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Anomalous contact angle hysteresis of a captive bubble: advancing contact line pinning.
Hong, Siang-Jie; Chang, Feng-Ming; Chou, Tung-He; Chan, Seong Heng; Sheng, Yu-Jane; Tsao, Heng-Kwong
2011-06-07
Contact angle hysteresis of a sessile drop on a substrate consists of continuous invasion of the liquid phase with the advancing angle (θ_a) and contact line pinning of liquid phase retreat until the receding angle (θ_r) is reached. Receding pinning is generally attributed to localized defects that are more wettable than the rest of the surface. However, the defect model cannot explain advancing pinning of liquid phase invasion driven by a deflating bubble and continuous retreat of the liquid phase driven by an inflating bubble. A simple thermodynamic model based on adhesion hysteresis is proposed to explain the anomalous contact angle hysteresis of a captive bubble quantitatively. The adhesion model involves two solid–liquid interfacial tensions (γ_sl > γ_sl′). Young's equation with γ_sl gives the advancing angle θ_a, while that with γ_sl′, due to surface rearrangement, yields the receding angle θ_r. Our analytical analysis indicates that contact line pinning represents frustration in surface free energy, and the equilibrium shape corresponds to a nondifferentiable minimum instead of a local minimum. On the basis of our thermodynamic model, Surface Evolver simulations are performed to reproduce both the advancing and receding behavior associated with a captive bubble on acrylic glass.
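The quantitative content of the adhesion-hysteresis picture reduces to evaluating Young's equation with two solid-liquid tensions. The tension values in the sketch below are made-up illustrative numbers, not measurements from the paper.

```python
import math

# Adhesion-hysteresis sketch: Young's equation with two solid-liquid tensions.
#   cos(theta) = (gamma_sv - gamma_sl) / gamma_lv
# Using gamma_sl (before surface rearrangement) gives the advancing angle, and the
# lower gamma_sl' (after rearrangement) gives the receding angle.
# All tension values below (mN/m) are illustrative assumptions.
gamma_lv = 72.0     # liquid-vapor (water)
gamma_sv = 40.0     # solid-vapor
gamma_sl = 8.0      # solid-liquid before rearrangement
gamma_sl2 = 2.0     # solid-liquid after rearrangement (gamma_sl' < gamma_sl)

def young_angle(g_sv, g_sl, g_lv):
    """Contact angle (degrees) from Young's equation."""
    return math.degrees(math.acos((g_sv - g_sl) / g_lv))

theta_a = young_angle(gamma_sv, gamma_sl, gamma_lv)
theta_r = young_angle(gamma_sv, gamma_sl2, gamma_lv)
print(f"advancing angle approx. {theta_a:.1f} deg, receding angle approx. {theta_r:.1f} deg")
```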
ERIC Educational Resources Information Center
Rodriguez-Falces, Javier
2013-01-01
In electrophysiology studies, it is becoming increasingly common to explain experimental observations using both descriptive methods and quantitative approaches. However, some electrophysiological phenomena, such as the generation of extracellular potentials that results from the propagation of the excitation source along the muscle fiber, are…
Understanding the heavy-tailed dynamics in human behavior
NASA Astrophysics Data System (ADS)
Ross, Gordon J.; Jones, Tim
2015-06-01
The recent availability of electronic data sets containing large volumes of communication data has made it possible to study human behavior on a larger scale than ever before. From this, it has been discovered that across a diverse range of data sets, the interevent times between consecutive communication events obey heavy-tailed power law dynamics. Explaining this has proved controversial, and two distinct hypotheses have emerged. The first holds that these power laws are fundamental, and arise from the mechanisms such as priority queuing that humans use to schedule tasks. The second holds that they are statistical artifacts which only occur in aggregated data when features such as circadian rhythms and burstiness are ignored. We use a large social media data set to test these hypotheses, and find that although models that incorporate circadian rhythms and burstiness do explain part of the observed heavy tails, there is residual unexplained heavy-tail behavior which suggests a more fundamental cause. Based on this, we develop a quantitative model of human behavior which improves on existing approaches and gives insight into the mechanisms underlying human interactions.
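One way to see the "statistical artifact" side of the debate is to simulate a circadian-modulated Poisson process and compare its pooled interevent-time tail with that of a constant-rate process. The rate profile below is an assumed toy; the point is only that rate modulation alone fattens the tail, not that it reproduces the residual heavy-tail behavior reported here.

```python
import numpy as np

rng = np.random.default_rng(3)

def interevent_times(rate_fn, t_max=10_000.0, rate_max=5.0):
    """Interevent times of a non-homogeneous Poisson process via thinning."""
    t, events = 0.0, []
    while t < t_max:
        t += rng.exponential(1.0 / rate_max)
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)
    return np.diff(events)

circadian = lambda t: 2.5 + 2.4 * np.sin(2 * np.pi * t / 24.0)   # assumed daily rhythm
constant  = lambda t: 2.5

for name, fn in [("circadian-modulated", circadian), ("constant-rate", constant)]:
    iet = interevent_times(fn)
    mean = iet.mean()
    # Tail-heaviness summary: how often do gaps exceed 5x and 20x the mean gap?
    print(f"{name:20s} P(gap > 5*mean) = {(iet > 5*mean).mean():.4f}, "
          f"P(gap > 20*mean) = {(iet > 20*mean).mean():.5f}")
```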
Trace element evaluation of a suite of rocks from Reunion Island, Indian Ocean
Zielinski, R.A.
1975-01-01
Reunion Island consists of an olivine-basalt shield capped by a series of flows and intrusives ranging from hawaiite through trachyte. Eleven rocks representing the total compositional sequence have been analyzed for U, Th and REE. Eight of the rocks (group 1) have positive-slope, parallel, chondrite-normalized REE fractionation patterns. Using a computer model, the major element compositions of group 1 whole rocks and observed phenocrysts were used to predict the crystallization histories of increasingly residual liquids, and allowed semi-quantitative verification of origin by fractional crystallization of the olivine-basalt parent magma. Results were combined with mineral-liquid distribution coefficient data to predict trace element abundances, and existing data on Cr, Ni, Sr and Ba were also successfully incorporated in the model. The remaining three rocks (group 2) have nonuniform positive-slope REE fractionation patterns not parallel to group 1 patterns. Rare earth fractionation in a syenite is explained by partial melting of a source rich in clinopyroxene and/or hornblende. The other two rocks of group 2 are explained as hybrids resulting from mixing of syenite and magmas of group 1. ?? 1975.
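The trace-element forward modeling of this kind rests on the standard Rayleigh fractional-crystallization relation C_L = C_0 F^(D-1). The bulk distribution coefficients and parent concentrations below are illustrative placeholders, not the values used for the Reunion suite.

```python
# Rayleigh fractional crystallization: C_L = C_0 * F**(D - 1), where
#   C_0 = concentration in the parent liquid, F = melt fraction remaining,
#   D   = bulk solid/liquid distribution coefficient for the element.
# The coefficients and parent concentrations below are illustrative assumptions.
elements = {          # (C_0 in ppm, bulk D)
    "La": (15.0, 0.10),   # incompatible: enriched in the residual liquid
    "Yb": ( 2.0, 0.30),
    "Sr": (500.0, 1.80),  # compatible in plagioclase-bearing assemblages: depleted
    "Ni": (150.0, 6.00),  # strongly compatible in olivine: rapidly depleted
}

for F in (1.0, 0.5, 0.2, 0.1):          # fraction of liquid remaining
    row = ", ".join(f"{el}={c0 * F ** (d - 1):8.1f}" for el, (c0, d) in elements.items())
    print(f"F = {F:>4}: {row} (ppm)")
```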
The memory remains: Understanding collective memory in the digital age
García-Gavilanes, Ruth; Mollgaard, Anders; Tsvetkova, Milena; Yasseri, Taha
2017-01-01
Recently developed information communication technologies, particularly the Internet, have affected how we, both as individuals and as a society, create, store, and recall information. The Internet also provides us with a great opportunity to study memory using transactional large-scale data in a quantitative framework similar to the practice in natural sciences. We make use of online data by analyzing viewership statistics of Wikipedia articles on aircraft crashes. We study the relation between recent events and past events and particularly focus on understanding memory-triggering patterns. We devise a quantitative model that explains the flow of viewership from a current event to past events based on similarity in time, geography, topic, and the hyperlink structure of Wikipedia articles. We show that, on average, the secondary flow of attention to past events generated by these remembering processes is larger than the primary attention flow to the current event. We report these previously unknown cascading effects. PMID:28435881
Mechanism on brain information processing: Energy coding
NASA Astrophysics Data System (ADS)
Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa
2006-09-01
According to experimental results showing that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research on cognitive function.
Vocal development in a Waddington landscape
Teramoto, Yayoi; Takahashi, Daniel Y; Holmes, Philip; Ghazanfar, Asif A
2017-01-01
Vocal development is the adaptive coordination of the vocal apparatus, muscles, the nervous system, and social interaction. Here, we use a quantitative framework based on optimal control theory and Waddington’s landscape metaphor to provide an integrated view of this process. With a biomechanical model of the marmoset monkey vocal apparatus and behavioral developmental data, we show that only the combination of the developing vocal tract, vocal apparatus muscles and nervous system can fully account for the patterns of vocal development. Together, these elements influence the shape of the monkeys’ vocal developmental landscape, tilting, rotating or shifting it in different ways. We can thus use this framework to make quantitative predictions regarding how interfering factors or experimental perturbations can change the landscape within a species, or to explain comparative differences in vocal development across species. DOI: http://dx.doi.org/10.7554/eLife.20782.001 PMID:28092262
The memory remains: Understanding collective memory in the digital age.
García-Gavilanes, Ruth; Mollgaard, Anders; Tsvetkova, Milena; Yasseri, Taha
2017-04-01
Recently developed information communication technologies, particularly the Internet, have affected how we, both as individuals and as a society, create, store, and recall information. The Internet also provides us with a great opportunity to study memory using transactional large-scale data in a quantitative framework similar to the practice in natural sciences. We make use of online data by analyzing viewership statistics of Wikipedia articles on aircraft crashes. We study the relation between recent events and past events and particularly focus on understanding memory-triggering patterns. We devise a quantitative model that explains the flow of viewership from a current event to past events based on similarity in time, geography, topic, and the hyperlink structure of Wikipedia articles. We show that, on average, the secondary flow of attention to past events generated by these remembering processes is larger than the primary attention flow to the current event. We report these previously unknown cascading effects.
Evidence for ice-ocean albedo feedback in the Arctic Ocean shifting to a seasonal ice zone.
Kashiwase, Haruhiko; Ohshima, Kay I; Nihashi, Sohey; Eicken, Hajo
2017-08-15
Ice-albedo feedback due to the albedo contrast between water and ice is a major factor in seasonal sea ice retreat, and has received increasing attention with the Arctic Ocean shifting to a seasonal ice cover. However, quantitative evaluation of such feedbacks is still insufficient. Here we provide quantitative evidence that heat input through the open water fraction is the primary driver of seasonal and interannual variations in Arctic sea ice retreat. Analyses of satellite data (1979-2014) and a simplified ice-upper ocean coupled model reveal that divergent ice motion in the early melt season triggers large-scale feedback which subsequently amplifies summer sea ice anomalies. The magnitude of divergence controlling the feedback has doubled since 2000 due to a more mobile ice cover, which can partly explain the recent drastic ice reduction in the Arctic Ocean.
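A minimal sketch of the feedback loop described above: open water created by early-season ice divergence absorbs more solar heat than ice does, and that heat melts more ice. All parameters are hypothetical and are not taken from Kashiwase et al.; the sketch only illustrates how a larger initial divergence is amplified over the melt season.

```python
# Illustrative ice-albedo feedback driven by heat input through the open-water
# fraction. Parameters are hypothetical; the point is the positive loop:
# less ice -> more open water -> more absorbed heat -> more melt.
import numpy as np

ALBEDO_WATER = 0.07            # typical open-water albedo
SOLAR = 200.0                  # W m^-2, assumed mean summer insolation
MELT_PER_JOULE = 3.0e-9        # ice-fraction lost per J m^-2 absorbed (hypothetical)
DT = 86400.0                   # one-day time step in seconds

def run_season(ice_frac0, divergence, n_days=90):
    """Evolve the ice fraction over a melt season.

    divergence: initial open-water fraction created by ice motion
    (the 'trigger' discussed in the abstract)."""
    ice = ice_frac0 * (1.0 - divergence)
    for _ in range(n_days):
        open_water = 1.0 - ice
        # only heat entering through open water is assumed to melt ice
        heat_through_water = SOLAR * open_water * (1.0 - ALBEDO_WATER) * DT
        ice = max(0.0, ice - MELT_PER_JOULE * heat_through_water)
    return ice

# doubling the initial divergence more than doubles the end-of-season ice loss
print(run_season(0.95, 0.02), run_season(0.95, 0.04))
```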
Variables affecting learning in a simulation experience: a mixed methods study.
Beischel, Kelly P
2013-02-01
The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.
Educational Access in India. Country Policy Brief
ERIC Educational Resources Information Center
Online Submission, 2009
2009-01-01
This Policy Brief describes and explains patterns of access to schools in India. It outlines policy and legislation on access to education and provides an analysis of access, vulnerability and exclusion. The quantitative data is supported by a review of research which explains the patterns of access and exclusion. It is based on findings from the…
The complex dynamics of products and its asymptotic properties
Cristelli, Matthieu; Zaccaria, Andrea; Pietronero, Luciano
2017-01-01
We analyse global export data within the Economic Complexity framework. We couple the new economic dimension Complexity, which captures how sophisticated products are, with an index called logPRODY, a measure of the income of the respective exporters. Products’ aggregate motion is treated as a 2-dimensional dynamical system in the Complexity-logPRODY plane. We find that this motion can be explained by a quantitative model involving the competition on the markets, that can be mapped as a scalar field on the Complexity-logPRODY plane and acts in a way akin to a potential. This explains the movement of products towards areas of the plane in which the competition is higher. We analyse market composition in more detail, finding that for most products it tends, over time, to a characteristic configuration, which depends on the Complexity of the products. This market configuration, which we called asymptotic, is characterized by higher levels of competition. PMID:28520794
'Individualism-collectivism' as an explanatory device for mental illness stigma.
Papadopoulos, Chris; Foster, John; Caldwell, Kay
2013-06-01
The aim of this study is to investigate whether the cross-cultural value paradigm 'individualism-collectivism' is a useful explanatory model for mental illness stigma at the cultural level. Using snowball sampling, a quantitative questionnaire survey of 305 individuals from four UK-based cultural groups (white-English, American, Greek/Greek Cypriot, and Chinese) was carried out. The questionnaire included the 'Community Attitudes to Mental Illness scale' and the 'vertical-horizontal individualism-collectivism scale'. The results revealed that the more stigmatizing a culture's mental illness attitudes are, the more likely collectivism effectively explains these attitudes. In contrast, the more positive a culture's mental illness attitudes, the more likely individualism effectively explains attitudes. We conclude that a consideration of the individualism-collectivism paradigm should be included in any future research aiming to provide a holistic understanding of the causes of mental illness stigma, particularly when a culture's stigmatization levels are particularly high or low.
A data-driven model for influenza transmission incorporating media effects.
Mitchell, Lewis; Ross, Joshua V
2016-10-01
Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
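A minimal sketch of the kind of model described above: a deterministic SIR system in which transmission is damped by a media term. The specific media function used here (saturating in current prevalence) and all parameter values are assumptions for illustration only; Mitchell and Ross select their functional form by fitting to shared-message and surveillance data.

```python
# Minimal SIR model with a media effect that damps transmission.
# The media function (saturating in prevalence) is an illustrative assumption.
import numpy as np

def sir_with_media(beta=0.4, gamma=0.2, kappa=50.0, n_days=200, n=1.0e6, i0=10.0):
    s, i, r = n - i0, i0, 0.0
    incidence = []
    for _ in range(n_days):                # daily Euler steps
        media = kappa * i / n              # assumed proxy: media volume tracks prevalence
        damping = 1.0 / (1.0 + media)      # transmission reduced as coverage grows
        new_inf = beta * damping * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        incidence.append(new_inf)
    return np.array(incidence)

peak_no_media = sir_with_media(kappa=0.0).max()
peak_media = sir_with_media(kappa=50.0).max()
print(f"peak daily incidence without media: {peak_no_media:.0f}, with media: {peak_media:.0f}")
```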
South, Susan C.; Hamdi, Nayla; Krueger, Robert F.
2015-01-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative gene × environment interaction (G×E) models not only have the potential to elucidate when genetic and environmental influences on a phenotype might differ, but why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology—diathesis-stress, bioecological, differential susceptibility, and social control. In the current manuscript, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically-informative plausible accounts of how phenotypes related to social inequality—physical health and cognition—might relate to these theoretical models. PMID:26426103
South, Susan C; Hamdi, Nayla R; Krueger, Robert F
2017-02-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative Gene × Environment interaction models have the potential to elucidate not only when genetic and environmental influences on a phenotype might differ, but also why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology: diathesis-stress, bioecological, differential susceptibility, and social control. In the current article, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically informative plausible accounts of how phenotypes related to social inequality (physical health and cognition) might relate to these theoretical models. © 2015 Wiley Periodicals, Inc.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
Spatiotemporal Evolution of Erythema Migrans, the Hallmark Rash of Lyme Disease
Vig, Dhruv K.; Wolgemuth, Charles W.
2014-01-01
To elucidate pathogen-host interactions during early Lyme disease, we developed a mathematical model that explains the spatiotemporal dynamics of the characteristic first sign of the disease, a large (≥5-cm diameter) rash known as an erythema migrans. The model predicts that the bacterial replication and dissemination rates are the primary factors controlling the speed at which the rash spreads, whereas the rate at which active macrophages are cleared from the dermis is the principal determinant of rash morphology. In addition, the model supports the clinical observations that antibiotic treatment quickly clears spirochetes from the dermis and that the rash appearance is not indicative of the efficacy of the treatment. The quantitative agreement between our results and clinical data suggests that this model could be used to develop more efficient drug treatments and may form a basis for modeling pathogen-host interactions in other emerging infectious diseases. PMID:24507617
Achour, Brahim; Dantonio, Alyssa; Niosi, Mark; Novak, Jonathan J; Fallon, John K; Barber, Jill; Smith, Philip C; Rostami-Hodjegan, Amin; Goosen, Theunis C
2017-10-01
Quantitative characterization of UDP-glucuronosyltransferase (UGT) enzymes is valuable in glucuronidation reaction phenotyping, predicting metabolic clearance and drug-drug interactions using extrapolation exercises based on pharmacokinetic modeling. Different quantitative proteomic workflows have been employed to quantify UGT enzymes in various systems, with reports indicating large variability in expression, which cannot be explained by interindividual variability alone. To evaluate the effect of methodological differences on end-point UGT abundance quantification, eight UGT enzymes were quantified in 24 matched liver microsomal samples by two laboratories using stable isotope-labeled (SIL) peptides or quantitative concatemer (QconCAT) standard, and measurements were assessed against catalytic activity in seven enzymes (n = 59). There was little agreement between individual abundance levels reported by the two methods; only UGT1A1 showed strong correlation [Spearman rank order correlation (Rs) = 0.73, P < 0.0001; R² = 0.30; n = 24]. SIL-based abundance measurements correlated well with enzyme activities, with correlations ranging from moderate for UGTs 1A6, 1A9, and 2B15 (Rs = 0.52-0.59, P < 0.0001; R² = 0.34-0.58; n = 59) to strong correlations for UGTs 1A1, 1A3, 1A4, and 2B7 (Rs = 0.79-0.90, P < 0.0001; R² = 0.69-0.79). QconCAT-based data revealed generally poor correlation with activity, whereas moderate correlations were shown for UGTs 1A1, 1A3, and 2B7. Spurious abundance-activity correlations were identified in the cases of UGT1A4/2B4 and UGT2B7/2B15, which could be explained by correlations of protein expression between these enzymes. Consistent correlation of UGT abundance with catalytic activity, demonstrated by the SIL-based dataset, suggests that quantitative proteomic data should be validated against catalytic activity whenever possible. In addition, metabolic reaction phenotyping exercises should consider spurious abundance-activity correlations to avoid misleading conclusions. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
Evaluation of an ensemble of genetic models for prediction of a quantitative trait.
Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola
2014-01-01
Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
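A brief sketch of the single-genetic-score strategy the abstract contrasts with fitting hundreds of markers jointly: risk-allele counts are collapsed into one weighted score that then serves as a covariate. The data, marker count and effect sizes below are simulated placeholders, not the authors' ensemble method.

```python
# Weighted genetic score used as a single covariate (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_markers = 2000, 200
geno = rng.binomial(2, 0.3, size=(n_subjects, n_markers))   # allele counts 0/1/2
true_beta = rng.normal(0.0, 0.05, size=n_markers)           # many small effects
phenotype = geno @ true_beta + rng.normal(0.0, 1.0, size=n_subjects)

# weighted genetic score (weights would normally come from a discovery study)
score = geno @ true_beta

# one-covariate prediction model: phenotype ~ intercept + score
X = np.column_stack([np.ones(n_subjects), score])
coef, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((phenotype - pred) ** 2) / np.sum((phenotype - phenotype.mean()) ** 2)
print(f"variance explained by the genetic score: {r2:.2f}")
```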
Discrete-Slots Models of Visual Working-Memory Response Times
Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.
2014-01-01
Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956
Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André
2011-01-01
Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
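A sketch of the two analysis designs compared above, applied to a synthetic bimodal-mixture trait: linear regression of the full quantitative trait on allele count, versus a case-control contrast of the two extremes. The mixture parameters are arbitrary assumptions; which design wins depends on them, which is the abstract's point about discrepant findings.

```python
# Two designs for detecting a marker acting on a bimodal-mixture trait.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, maf = 4000, 0.3
geno = rng.binomial(2, maf, size=n)
# trait: a standard bulk component plus a small shifted component whose
# membership probability increases with the number of risk alleles (assumed)
p_upper = 0.01 * (1 + 2 * geno)
upper = rng.random(n) < p_upper
y = rng.normal(0.0, 1.0, size=n) + 4.0 * upper

# design 1: linear regression on the full quantitative trait
p_linear = stats.linregress(geno, y).pvalue

# design 2: allele-count contrast between the bottom and top deciles
lo, hi = np.quantile(y, [0.1, 0.9])
low_g, high_g = geno[y <= lo], geno[y >= hi]
table = np.array([[low_g.sum(), 2 * low_g.size - low_g.sum()],
                  [high_g.sum(), 2 * high_g.size - high_g.sum()]])
chi2, p_extreme, dof, expected = stats.chi2_contingency(table)

print(f"full-sample linear regression p = {p_linear:.2g}")
print(f"extremes case-control p = {p_extreme:.2g} (n genotyped = {low_g.size + high_g.size})")
```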
Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data
McKenzie, D.; Peterson, D.L.; Agee, James K.
2000-01-01
Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.
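The two model families compared above can be illustrated with a few lines of code: an ordinary least-squares regression and a regression tree (CART), both predicting fire return interval from environmental covariates. The data below are synthetic and the covariate relationship is assumed; the published models used georeferenced fire history, vegetation and precipitation layers and were weighted by data quality.

```python
# OLS versus CART for predicting fire return interval (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 500
elevation = rng.uniform(200, 2500, n)      # m
precip = rng.uniform(300, 2000, n)         # mm/yr
latitude = rng.uniform(42, 49, n)          # degrees N
# assumed relationship: wetter, higher, more northerly sites burn less often
fri = 20 + 0.05 * elevation + 0.08 * precip + 5 * (latitude - 42)
fri = fri * rng.lognormal(0.0, 0.3, n)     # multiplicative noise

X = np.column_stack([elevation, precip, latitude])
ols = LinearRegression().fit(X, fri)
cart = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, fri)

print("OLS  R^2:", round(ols.score(X, fri), 2))
print("CART R^2:", round(cart.score(X, fri), 2))   # trees fit the training data more closely
```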
Modeling Dynamic Helium Release as a Tracer of Rock Deformation
Gardner, W. Payton; Bauer, Stephen J.; Kuhlman, Kristopher L.; ...
2017-11-03
Here, we use helium released during mechanical deformation of shales as a signal to explore the effects of deformation and failure on material transport properties. A dynamic dual-permeability model with evolving pore and fracture networks is used to simulate gases released from shale during deformation and failure. Changes in material properties required to reproduce experimentally observed gas signals are explored. We model two different experiments of ⁴He flow rate measured from shale undergoing mechanical deformation, a core parallel to bedding and a core perpendicular to bedding. We find that the helium signal is sensitive to fracture development and evolution as well as to changes in the matrix transport properties. We constrain the timing and effective fracture aperture, as well as the increase in matrix porosity and permeability. Increases in matrix permeability are required to explain gas flow prior to macroscopic failure and the short-term gas flow postfailure. Increased matrix porosity is required to match the long-term, postfailure gas flow. This model provides the first quantitative interpretation of helium release as a result of mechanical deformation. The sensitivity of this model to changes in the fracture network, as well as to matrix properties during deformation, indicates that helium release can be used as a quantitative tool to evaluate the state of stress and strain in earth materials.
Carbon nanotubes exhibit fibrillar pharmacology in primates
Alidori, Simone; Thorek, Daniel L. J.; Beattie, Bradley J.; ...
2017-08-28
Nanomedicine rests at the nexus of medicine, bioengineering, and biology with great potential for improving health through innovation and development of new drugs and devices. Carbon nanotubes are an example of a fibrillar nanomaterial poised to translate into medical practice. The leading candidate material in this class is ammonium-functionalized carbon nanotubes (fCNT), which exhibit unexpected pharmacological behavior in vivo with important biotechnology applications. Here, we provide a multi-organ evaluation of the distribution, uptake and processing of fCNT in nonhuman primates using quantitative whole body positron emission tomography (PET), compartmental modeling of pharmacokinetic data, serum biomarkers and ex vivo pathology investigation. Kidney and liver are the two major organ systems that accumulate and excrete [86Y]fCNT in nonhuman primates, and accumulation is cell specific, as described by compartmental modeling analyses of the quantitative PET data. A serial two-compartment model explains renal processing of tracer-labeled fCNT; hepatic data fit a parallel two-compartment model. These modeling data also reveal significant elimination of the injected activity (>99.8%) from the primate within 3 days (t1/2 = 11.9 hours). Thus, these favorable results in nonhuman primates provide important insight into the fate of fCNT in vivo and pave the way to further engineering design considerations for sophisticated nanomedicines to aid late-stage development and clinical use in man.
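A minimal sketch of the two compartmental structures contrasted above: a serial (catenary) two-compartment chain for the kidney and a parallel (mammillary) pair for the liver, both fed from plasma. The rate constants and initial condition are hypothetical; the published model was fit to the PET data.

```python
# Serial versus parallel two-compartment kinetics (hypothetical rate constants).
import numpy as np
from scipy.integrate import odeint

def serial_model(y, t, k_in, k12, k_out):
    plasma, c1, c2 = y
    return [-k_in * plasma,
            k_in * plasma - k12 * c1,
            k12 * c1 - k_out * c2]

def parallel_model(y, t, k_in1, k_in2, k_out1, k_out2):
    plasma, c1, c2 = y
    return [-(k_in1 + k_in2) * plasma,
            k_in1 * plasma - k_out1 * c1,
            k_in2 * plasma - k_out2 * c2]

t = np.linspace(0, 72, 200)                     # hours
serial = odeint(serial_model, [1.0, 0.0, 0.0], t, args=(0.5, 0.3, 0.1))
parallel = odeint(parallel_model, [1.0, 0.0, 0.0], t, args=(0.3, 0.2, 0.2, 0.05))

# organ activity = sum of the two tissue compartments
print("serial (kidney-like) activity peaks at t =", t[np.argmax(serial[:, 1] + serial[:, 2])], "h")
print("parallel (liver-like) activity peaks at t =", t[np.argmax(parallel[:, 1] + parallel[:, 2])], "h")
```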
Carbon nanotubes exhibit fibrillar pharmacology in primates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alidori, Simone; Thorek, Daniel L. J.; Beattie, Bradley J.
Nanomedicine rests at the nexus of medicine, bioengineering, and biology with great potential for improving health through innovation and development of new drugs and devices. Carbon nanotubes are an example of a fibrillar nanomaterial poised to translate into medical practice. The leading candidate material in this class is ammonium-functionalized carbon nanotubes (fCNT), which exhibit unexpected pharmacological behavior in vivo with important biotechnology applications. Here, we provide a multi-organ evaluation of the distribution, uptake and processing of fCNT in nonhuman primates using quantitative whole body positron emission tomography (PET), compartmental modeling of pharmacokinetic data, serum biomarkers and ex vivo pathology investigation. Kidney and liver are the two major organ systems that accumulate and excrete [86Y]fCNT in nonhuman primates, and accumulation is cell specific, as described by compartmental modeling analyses of the quantitative PET data. A serial two-compartment model explains renal processing of tracer-labeled fCNT; hepatic data fit a parallel two-compartment model. These modeling data also reveal significant elimination of the injected activity (>99.8%) from the primate within 3 days (t1/2 = 11.9 hours). Thus, these favorable results in nonhuman primates provide important insight into the fate of fCNT in vivo and pave the way to further engineering design considerations for sophisticated nanomedicines to aid late-stage development and clinical use in man.
Modeling Dynamic Helium Release as a Tracer of Rock Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Bauer, Stephen J.; Kuhlman, Kristopher L.
Here, we use helium released during mechanical deformation of shales as a signal to explore the effects of deformation and failure on material transport properties. A dynamic dual-permeability model with evolving pore and fracture networks is used to simulate gases released from shale during deformation and failure. Changes in material properties required to reproduce experimentally observed gas signals are explored. We model two different experiments of ⁴He flow rate measured from shale undergoing mechanical deformation, a core parallel to bedding and a core perpendicular to bedding. We find that the helium signal is sensitive to fracture development and evolution as well as to changes in the matrix transport properties. We constrain the timing and effective fracture aperture, as well as the increase in matrix porosity and permeability. Increases in matrix permeability are required to explain gas flow prior to macroscopic failure and the short-term gas flow postfailure. Increased matrix porosity is required to match the long-term, postfailure gas flow. This model provides the first quantitative interpretation of helium release as a result of mechanical deformation. The sensitivity of this model to changes in the fracture network, as well as to matrix properties during deformation, indicates that helium release can be used as a quantitative tool to evaluate the state of stress and strain in earth materials.
Quantitative experiments to explain the change of seasons
NASA Astrophysics Data System (ADS)
Testa, Italo; Busarello, Gianni; Puddu, Emanuella; Leccia, Silvio; Merluzzi, Paola; Colantonio, Arturo; Moretti, Maria Ida; Galano, Silvia; Zappia, Alessandro
2015-03-01
The science education literature shows that students have difficulty understanding what causes the seasons. Incorrect explanations are often due to a lack of knowledge about the physical mechanisms underlying this phenomenon. To address this, we present a module in which the students engage in quantitative measurements with a photovoltaic panel to explain changes in the flow of solar radiation at Earth’s surface over the year. The activities also provide examples of energy transfers between the incoming radiation and the environment to introduce basic features of Earth’s climate. The module was evaluated with 45 secondary school students (aged 17-18) using a pre-/post-test research design. Analysis of students’ learning outcomes supports the effectiveness of the proposed activities.
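A worked example of the quantity the students measure: the flux of solar radiation on a horizontal surface at solar noon through the year. It uses the standard declination approximation; the latitude is an arbitrary assumption and atmospheric absorption is ignored, so the numbers are upper bounds rather than panel readings.

```python
# Noon insolation on a horizontal surface versus day of year.
import numpy as np

S0 = 1361.0                    # W m^-2, solar constant
latitude = np.radians(40.0)    # assumed site latitude

def noon_flux(day_of_year):
    # solar declination (Cooper's approximation)
    decl = np.radians(23.44) * np.sin(2 * np.pi * (284 + day_of_year) / 365.0)
    # cosine of the solar zenith angle at solar noon (hour angle = 0)
    cos_zenith = np.sin(latitude) * np.sin(decl) + np.cos(latitude) * np.cos(decl)
    return S0 * np.maximum(cos_zenith, 0.0)

for day, label in [(172, "summer solstice"), (355, "winter solstice")]:
    print(f"{label}: {noon_flux(day):.0f} W/m^2")
```

At 40° latitude the noon flux roughly doubles between the winter and summer solstices, which is the change-of-seasons effect the panel measurements are meant to reveal.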
A sampling model of social judgment.
Galesic, Mirta; Olsson, Henrik; Rieskamp, Jörg
2018-04-01
Studies of social judgments have demonstrated a number of diverse phenomena that were so far difficult to explain within a single theoretical framework. Prominent examples are false consensus and false uniqueness, as well as self-enhancement and self-depreciation. Here we show that these seemingly complex phenomena can be a product of an interplay between basic cognitive processes and the structure of social and task environments. We propose and test a new process model of social judgment, the social sampling model (SSM), which provides a parsimonious quantitative account of different types of social judgments. In the SSM, judgments about characteristics of broader social environments are based on sampling of social instances from memory, where instances receive activation if they belong to a target reference class and have a particular characteristic. These sampling processes interact with the properties of social and task environments, including homophily, shapes of frequency distributions, and question formats. For example, in line with the model's predictions we found that whether false consensus or false uniqueness will occur depends on the level of homophily in people's social circles and on the way questions are asked. The model also explains some previously unaccounted-for patterns of self-enhancement and self-depreciation. People seem to be well informed about many characteristics of their immediate social circles, which in turn influence how they evaluate broader social environments and their position within them. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Mechanism of hologram formation in fixation-free rehalogenating bleaching processes.
Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto
2002-07-10
The mechanism of hologram formation in fixation-free rehalogenating bleaching processes has been treated by different authors. Experiments carried out on Agfa 8E75 HD plates led to the conclusion that material transfer from the exposed to the unexposed zones is the main mechanism underlying the process. We present a simple model that explains the mechanism of hologram formation inside the emulsion. Quantitative data obtained using both Agfa 8E75 HD and Slavich PFG-01 fine-grained red-sensitive emulsions are also given, and good agreement between theory and experiment is found.
The genetic architecture of maize height.
Peiffer, Jason A; Romay, Maria C; Gore, Michael A; Flint-Garcia, Sherry A; Zhang, Zhiwu; Millard, Mark J; Gardner, Candice A C; McMullen, Michael D; Holland, James B; Bradbury, Peter J; Buckler, Edward S
2014-04-01
Height is one of the most heritable and easily measured traits in maize (Zea mays L.). Given a pedigree or estimates of the genomic identity-by-state among related plants, height is also accurately predictable. But mapping alleles explaining natural variation in maize height remains a formidable challenge. To address this challenge, we measured the plant height, ear height, flowering time, and node counts of plants grown in >64,500 plots across 13 environments. These plots contained >7300 inbreds representing most publicly available maize inbreds in the United States and families of the maize Nested Association Mapping (NAM) panel. Joint-linkage mapping of quantitative trait loci (QTL), fine mapping in near isogenic lines (NILs), genome-wide association studies (GWAS), and genomic best linear unbiased prediction (GBLUP) were performed. The heritability of maize height was estimated to be >90%. Mapping NAM family-nested QTL revealed that the largest of these explained 2.1 ± 0.9% of height variation. The effects of two tropical alleles at this QTL were independently validated by fine mapping in NIL families. Several significant associations found by GWAS colocalized with established height loci, including brassinosteroid-deficient dwarf1, dwarf plant1, and semi-dwarf2. GBLUP explained >80% of height variation in the panels and outperformed bootstrap aggregation of family-nested QTL models in evaluations of prediction accuracy. These results revealed that maize height was under strong genetic control and had a highly polygenic genetic architecture. They also showed that multiple models of genetic architecture differing in polygenicity and effect sizes can plausibly explain a population's variation in maize height, but they may vary in predictive efficacy.
The Limitations of Quantitative Social Science for Informing Public Policy
ERIC Educational Resources Information Center
Jerrim, John; de Vries, Robert
2017-01-01
Quantitative social science (QSS) has the potential to make an important contribution to public policy. However it also has a number of limitations. The aim of this paper is to explain these limitations to a non-specialist audience and to identify a number of ways in which QSS research could be improved to better inform public policy.
Control of Mechanotransduction by Molecular Clutch Dynamics.
Elosegui-Artola, Alberto; Trepat, Xavier; Roca-Cusachs, Pere
2018-05-01
The linkage of cells to their microenvironment is mediated by a series of bonds that dynamically engage and disengage, in what has been conceptualized as the molecular clutch model. Whereas this model has long been employed to describe actin cytoskeleton and cell migration dynamics, it has recently been proposed to also explain mechanotransduction (i.e., the process by which cells convert mechanical signals from their environment into biochemical signals). Here we review the current understanding on how cell dynamics and mechanotransduction are driven by molecular clutch dynamics and its master regulator, the force loading rate. Throughout this Review, we place a specific emphasis on the quantitative prediction of cell response enabled by combined experimental and theoretical approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Expert Game experiment predicts emergence of trust in professional communication networks.
Bendtsen, Kristian Moss; Uekermann, Florian; Haerter, Jan O
2016-10-25
Strong social capital is increasingly recognized as an organizational advantage. Better knowledge sharing and reduced transaction costs increase work efficiency. To mimic the formation of the associated communication network, we propose the Expert Game, where each individual must find a specific expert and receive her help. Participants act in an impersonal environment and under time constraints that provide short-term incentives for noncooperative behavior. Despite these constraints, we observe cooperation between individuals and the self-organization of a sustained trust network, which facilitates efficient communication channels with increased information flow. We build a behavioral model that explains the experimental dynamics. Analysis of the model reveals an exploitation protection mechanism and measurable social capital, which quantitatively describe the economic utility of trust.
Spinal decompression sickness: mechanical studies and a model.
Hills, B A; James, P B
1982-09-01
Six experimental investigations of various mechanical aspects of the spinal cord are described relevant to its injury by gas deposited from solution by decompression. These show appreciable resistances to gas pockets dissipating by tracking along tissue boundaries or distending tissue, the back pressure often exceeding the probable blood perfusion pressure--particularly in the watershed zones. This leads to a simple mechanical model of spinal decompression sickness based on the vascular "waterfall" that is consistent with the pathology, the major quantitative aspects, and the symptomatology--especially the reversibility with recompression that is so difficult to explain by an embolic mechanism. The hypothesis is that autochthonous gas separating from solution in the spinal cord can reach sufficient local pressure to exceed the perfusion pressure and thus occlude blood flow.
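A minimal numerical illustration of the vascular "waterfall" (Starling-resistor) logic invoked above: when local gas pressure exceeds venous pressure, flow is driven by arterial minus gas pressure, and it stops once gas pressure exceeds arterial pressure. The pressures and resistance below are hypothetical, not values from the study.

```python
# Starling-resistor / vascular-waterfall flow rule with hypothetical pressures.
def waterfall_flow(p_arterial, p_venous, p_tissue, resistance=1.0):
    if p_tissue <= p_venous:                      # normal perfusion
        return (p_arterial - p_venous) / resistance
    if p_tissue < p_arterial:                     # waterfall regime
        return (p_arterial - p_tissue) / resistance
    return 0.0                                    # local gas pressure occludes flow

for p_gas in (0, 40, 70, 100):                    # mmHg exerted by autochthonous gas
    print(p_gas, "mmHg ->", waterfall_flow(p_arterial=90, p_venous=10, p_tissue=p_gas))
```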
NASA Astrophysics Data System (ADS)
Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Yusof, Muhammad Mat
2016-08-01
This study examines the user perception of usefulness, ease of use and enjoyment as drivers of users' complex interaction with maps on mobile devices. The TAM model was used to evaluate users' intention to use and their acceptance of an interactive mobile map, with the above three beliefs as antecedents. A quantitative research (survey) methodology was employed, and the analysis and findings showed that all three explanatory variables used in this study explain the variability in user acceptance of interactive mobile map technology. Perceived usefulness, perceived ease of use, and perceived enjoyment each have a significant positive influence on user acceptance of interactive mobile maps. This study further validates the TAM model.
Subduction-zone magnetic anomalies and implications for hydrated forearc mantle
Blakely, R.J.; Brocher, T.M.; Wells, R.E.
2005-01-01
Continental mantle in subduction zones is hydrated by release of water from the underlying oceanic plate. Magnetite is a significant byproduct of mantle hydration, and forearc mantle, cooled by subduction, should contribute to long-wavelength magnetic anomalies above subduction zones. We test this hypothesis with a quantitative model of the Cascadia convergent margin, based on gravity and aeromagnetic anomalies and constrained by seismic velocities, and find that hydrated mantle explains an important disparity in potential-field anomalies of Cascadia. A comparison with aeromagnetic data, thermal models, and earthquakes of Cascadia, Japan, and southern Alaska suggests that magnetic mantle may be common in forearc settings and thus magnetic anomalies may be useful in mapping hydrated mantle in convergent margins worldwide. © 2005 Geological Society of America.
ERIC Educational Resources Information Center
Devroye, Dan; Freeman, Richard
The question of whether inequality in skills explains inequality of earnings across advanced countries was examined through a review of data from the International Adult Literacy Survey (IALS), which examined the prose, document, and quantitative literacy skills of adults in 12 member countries of the Organisation for Economic Cooperation and…
Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania
2013-09-01
Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7 nmol T L(-1), <532.3 pmol E2 L(-1) and <43.8 nmol T4 L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explaining hormone levels. Differences in sex steroids between seasons and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted models' utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data. Copyright © 2013 Elsevier Inc. All rights reserved.
A Short-Term Population Model of the Suicide Risk: The Case of Spain.
De la Poza, Elena; Jódar, Lucas
2018-06-14
A relevant proportion of deaths by suicide are attributed to other causes, so part of the true number of suicides remains hidden. The existence of this hidden number of cases is explained by the nature of the problem: suicide involves violence and produces fear and social shame in victims' families. This violence, fear and social shame favours a considerable number of suicides being recorded as accidents or natural deaths. This paper proposes a short-term discrete compartmental mathematical model to measure the suicide risk for the case of Spain. The compartmental model classifies the Spanish population aged between 16 and 78 by degree of suicide risk and quantifies how these groups change over time. Intercompartmental transits are due to the combination of quantitative and qualitative factors. Results are computed and simulations are performed to analyze the sensitivity of the model under uncertain coefficients.
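An illustrative discrete-time compartmental update of the general kind described above: the population at risk is split into compartments by degree of suicide risk and moved between them at each time step. The compartment labels, transition probabilities, initial shares and time step below are hypothetical placeholders, not the fitted Spanish values.

```python
# Discrete-time compartmental update with hypothetical transition probabilities.
import numpy as np

# compartments: low risk, moderate risk, high risk, deaths (absorbing)
labels = ["low", "moderate", "high", "deaths"]
P = np.array([            # P[i, j] = probability of moving from compartment j to i per step
    [0.97, 0.05, 0.02, 0.0],
    [0.02, 0.90, 0.08, 0.0],
    [0.01, 0.05, 0.88, 0.0],
    [0.00, 0.00, 0.02, 1.0],
])                        # each column sums to 1
state = np.array([0.90, 0.08, 0.02, 0.0])   # initial population shares

for step in range(8):                        # e.g. eight quarters
    state = P @ state
print(dict(zip(labels, np.round(state, 4))))
```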
Possible Ceres bow shock surfaces based on fluid models
NASA Astrophysics Data System (ADS)
Jia, Y.-D.; Villarreal, M. N.; Russell, C. T.
2017-05-01
The hot electron beams that Dawn detected at Ceres can be explained by fast-Fermi acceleration at a temporary bow shock. A shock forms when the solar wind encounters a temporary atmosphere, similar to a cometary coma. We use a magnetohydrodynamic model to quantitatively reproduce the 3-D shock surface at Ceres and deduce the atmosphere characteristics that are required to create such a shock. Our simplest model requires a water vapor production rate of about 1.8 kg/s, or 6 × 10²⁵ molecules/s, to form such a shock. Such an estimate relies on characteristics of the solar wind-Ceres interaction. We present several case studies to show how these conditions affect our estimate. In addition, we contrast these cases with the smaller and narrower shock caused by a subsurface induction. Our multifluid model reveals the asymmetry introduced by the large gyroradius of the heavy pickup ions and further constrains the IMF direction during the events.
Understanding the optics to aid microscopy image segmentation.
Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei
2010-01-01
Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to study a microscope's optical properties to model its image formation process first using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to specimen's optical path length without phase contrast artifacts such as halo and shade-off. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.
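A toy 1-D version of the restore-then-threshold idea described above. The imaging operator and the purely quadratic (smoothness-only) penalty below are simplifications I have assumed for illustration; the paper's objective also includes a sparseness term and a calibrated phase-contrast point spread function.

```python
# Quadratic (Tikhonov-style) restoration followed by thresholding, in 1-D.
import numpy as np

n = 200
f_true = np.zeros(n)
f_true[60:80] = 1.0
f_true[120:150] = 0.7                      # two "cells" in optical path length

# assumed imaging operator: local blur with negative side lobes (halo-like)
kernel = np.array([-0.3, 0.2, 0.6, 0.2, -0.3])
H = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(kernel):
        j = i + k - 2
        if 0 <= j < n:
            H[i, j] = w
g = H @ f_true + 0.02 * np.random.default_rng(3).normal(size=n)   # observed signal

# restoration: minimize ||H f - g||^2 + lam * ||D f||^2  (smoothness only)
D = np.eye(n) - np.eye(n, k=1)             # first-difference operator
lam = 0.5
f_hat = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ g)

segmentation = f_hat > 0.3                 # simple threshold on the restored signal
print("recovered foreground samples:", int(segmentation.sum()),
      "true:", int((f_true > 0).sum()))
```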
Crossover Patterning by the Beam-Film Model: Analysis and Implications
Zhang, Liangran; Liang, Zhangyi; Hutchinson, John; Kleckner, Nancy
2014-01-01
Crossing-over is a central feature of meiosis. Meiotic crossover (CO) sites are spatially patterned along chromosomes. CO-designation at one position disfavors subsequent CO-designation(s) nearby, as described by the classical phenomenon of CO interference. If multiple designations occur, COs tend to be evenly spaced. We have previously proposed a mechanical model by which CO patterning could occur. The central feature of a mechanical mechanism is that communication along the chromosomes, as required for CO interference, can occur by redistribution of mechanical stress. Here we further explore the nature of the beam-film model, its ability to quantitatively explain CO patterns in detail in several organisms, and its implications for three important patterning-related phenomena: CO homeostasis, the fact that the level of zero-CO bivalents can be low (the “obligatory CO”), and the occurrence of non-interfering COs. Relationships to other models are discussed. PMID:24497834
NASA Technical Reports Server (NTRS)
Singh, Rajendra; Lim, Teik Chin
1989-01-01
A mathematical model is proposed to examine vibration transmission through rolling element bearings in geared rotor systems. Current bearing models, based on either ideal boundary conditions for the shaft or a purely translational stiffness element description, cannot explain how vibratory motion may be transmitted from the rotating shaft to the casing. This study clarifies this issue qualitatively and quantitatively by developing, from basic principles, a comprehensive six-dimensional bearing stiffness matrix model for precision rolling element bearings. The proposed bearing formulation is extended to analyze the overall geared rotor system dynamics including casing and mounts. The bearing stiffness matrix is included in discrete system models using lumped parameter and/or dynamic finite element techniques. Eigensolutions and forced harmonic response due to rotating mass unbalance or kinematic transmission error excitation are computed for a number of examples.
Ryan, Gillian L; Watanabe, Naoki; Vavylonis, Dimitrios
2012-04-01
A characteristic feature of motile cells as they undergo a change in motile behavior is the development of fluctuating exploratory motions of the leading edge, driven by actin polymerization. We review quantitative models of these protrusion and retraction phenomena. Theoretical studies have been motivated by advances in experimental and computational methods that allow controlled perturbations, single molecule imaging, and analysis of spatiotemporal correlations in microscopic images. To explain oscillations and waves of the leading edge, most theoretical models propose nonlinear interactions and feedback mechanisms among different components of the actin cytoskeleton system. These mechanisms include curvature-sensing membrane proteins, myosin contraction, and autocatalytic biochemical reaction kinetics. We discuss how the combination of experimental studies with modeling promises to quantify the relative importance of these biochemical and biophysical processes at the leading edge and to evaluate their generality across cell types and extracellular environments. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Elez, Javier; Silva, Pablo G.; Huerta, Pedro; Perucha, M. Ángeles; Civis, Jorge; Roquero, Elvira; Rodríguez-Pascua, Miguel A.; Bardají, Teresa; Giner-Robles, Jorge L.; Martínez-Graña, Antonio
2016-12-01
The Malaga basin contains an important geological record documenting the complex paleogeographic evolution of the Gibraltar Arc before, during and after the closure and desiccation of the Mediterranean Sea triggered by the "Messinian Salinity crisis" (MSC). Proxy paleo-elevation data, estimated from the stratigraphic and geomorphological records, allow the building of quantitative paleogeoid, paleotopographic and paleogeographic models for the three main paleogeographic stages: pre-MSC (Tortonian-early Messinian), syn-MSC (late Messinian) and post-MSC (early Pliocene). The methodological workflow combines classical contouring procedures used in geology and isobase map models from geomorphometric analyses and proxy data overprinted on present Digital Terrain Models. The resulting terrain quantitative models have been arranged, managed and computed in a GIS environment. The computed terrain models enable the exploration of past landscapes usually beyond the reach of classical geomorphological analyses and strongly improve the paleogeographic and paleotopographic knowledge of the study area. The resulting models suggest the occurrence of a set of uplifted littoral erosive and paleokarstic landforms that evolved during pre-MSC times. These uplifted landform assemblages can explain the origin of key elements of the present landscape, such as the Torcal de Antequera and the large amount of mogote-like relict hills present in the zone, in terms of ancient uplifted tropical islands. The most prominent landform is the extensive erosional platform dominating the Betic frontal zone that represents the relic Atlantic wave cut platform elaborated during late-Tortonian to early Messinian times. The amount of uplift derived from paleogeoid models suggests that the area rose by about 340 m during the MSC. This points to isostatic uplift triggered by differential erosional unloading (towards the Mediterranean) as the main factor controlling landscape evolution in the area during and after the MSC. Former littoral landscapes in the old emergent axis of the Gibraltar Arc were uplifted to form the main water-divide of the present Betic Cordillera in the zone.
He, Xin; Samee, Md. Abul Hassan; Blatti, Charles; Sinha, Saurabh
2010-01-01
Quantitative models of cis-regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled, or heuristic approximations of the underlying regulatory mechanisms. We have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence, as a function of transcription factor concentrations and their DNA-binding specificities. It uses statistical thermodynamics theory to model not only protein-DNA interaction, but also the effect of DNA-bound activators and repressors on gene expression. In addition, the model incorporates mechanistic features such as synergistic effect of multiple activators, short range repression, and cooperativity in transcription factor-DNA binding, allowing us to systematically evaluate the significance of these features in the context of available expression data. Using this model on segmentation-related enhancers in Drosophila, we find that transcriptional synergy due to simultaneous action of multiple activators helps explain the data beyond what can be explained by cooperative DNA-binding alone. We find clear support for the phenomenon of short-range repression, where repressors do not directly interact with the basal transcriptional machinery. We also find that the binding sites contributing to an enhancer's function may not be conserved during evolution, and a noticeable fraction of these undergo lineage-specific changes. Our implementation of the model, called GEMSTAT, is the first publicly available program for simultaneously modeling the regulatory activities of a given set of sequences. PMID:20862354
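A minimal statistical-thermodynamics sketch in the spirit of the model described above (not GEMSTAT itself): expression is taken proportional to the equilibrium probability that the basal machinery is bound, with a bound activator stabilising it by a cooperativity factor. The binding weights and the single-site promoter are assumptions for illustration.

```python
# Boltzmann-weighted promoter occupancy with one activator site.
def expression(tf_conc, K_tf=1.0, q_pol=0.01, omega=20.0):
    """Fractional occupancy of the promoter by the basal machinery.

    Configurations: {empty, TF only, polymerase only, TF + polymerase};
    Boltzmann weights 1, q_tf, q_pol and q_tf * q_pol * omega, respectively."""
    q_tf = K_tf * tf_conc
    z = 1.0 + q_tf + q_pol + q_tf * q_pol * omega
    return (q_pol + q_tf * q_pol * omega) / z

for c in (0.0, 0.1, 1.0, 10.0):
    print(f"[TF] = {c:>4}: predicted expression = {expression(c):.3f}")
```

Adding more sites, repressors with short-range quenching, or activator-activator synergy amounts to enumerating more configurations and multiplying their weights, which is the systematic evaluation the abstract describes.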
A dynamic dual process model of risky decision making.
Diederich, Adele; Trueblood, Jennifer S
2018-03-01
Many phenomena in judgment and decision making are often attributed to the interaction of 2 systems of reasoning. Although these so-called dual process theories can explain many types of behavior, they are rarely formalized as mathematical or computational models. Rather, dual process models are typically verbal theories, which are difficult to conclusively evaluate or test. In the cases in which formal (i.e., mathematical) dual process models have been proposed, they have not been quantitatively fit to experimental data and are often silent when it comes to the timing of the 2 systems. In the current article, we present a dynamic dual process model framework of risky decision making that provides an account of the timing and interaction of the 2 systems and can explain both choice and response-time data. We outline several predictions of the model, including how changes in the timing of the 2 systems as well as time pressure can influence behavior. The framework also allows us to explore different assumptions about how preferences are constructed by the 2 systems as well as the dynamic interaction of the 2 systems. In particular, we examine 3 different possible functional forms of the 2 systems and 2 possible ways the systems can interact (simultaneously or serially). We compare these dual process models with 2 single process models using risky decision making data from Guo, Trueblood, and Diederich (2017). Using this data, we find that 1 of the dual process models significantly outperforms the other models in accounting for both choices and response times. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Do, Elizabeth K.; Prom-Wormley, Elizabeth C.; Eaves, Lindon J.; Silberg, Judy L.; Miles, Donna R.; Maes, Hermine H.
2016-01-01
Little is known regarding the underlying relationship between smoking initiation and current quantity smoked during adolescence into young adulthood. It is possible that the influences of genetic and environmental factors on this relationship vary across sex and age. To investigate this further, the current study applied a common causal contingency model to data from a Virginia-based twin study to determine: (1) if the same genetic and environmental factors are contributing to smoking initiation and current quantity smoked; (2) whether the magnitude of genetic and environmental factor contributions are the same across adolescence and young adulthood; and (3) if qualitative and quantitative differences in the sources of variance between males and females exist. Study results found no qualitative or quantitative sex differences in the relationship between smoking initiation and current quantity smoked, though relative contributions of genetic and environmental factors changed across adolescence and young adulthood. More specifically, smoking initiation and current quantity smoked remain separate constructs until young adulthood, when liabilities are correlated. Smoking initiation is explained by genetic, shared, and unique environmental factors in early adolescence and by genetic and unique environmental factors in young adulthood; while current quantity smoked is explained by shared environmental and unique environmental factors until young adulthood, when genetic and unique environmental factors play a larger role. PMID:25662421
Do, Elizabeth K; Prom-Wormley, Elizabeth C; Eaves, Lindon J; Silberg, Judy L; Miles, Donna R; Maes, Hermine H
2015-02-01
Little is known regarding the underlying relationship between smoking initiation and current quantity smoked during adolescence into young adulthood. It is possible that the influences of genetic and environmental factors on this relationship vary across sex and age. To investigate this further, the current study applied a common causal contingency model to data from a Virginia-based twin study to determine: (1) if the same genetic and environmental factors are contributing to smoking initiation and current quantity smoked; (2) whether the magnitude of genetic and environmental factor contributions are the same across adolescence and young adulthood; and (3) if qualitative and quantitative differences in the sources of variance between males and females exist. Study results found no qualitative or quantitative sex differences in the relationship between smoking initiation and current quantity smoked, though relative contributions of genetic and environmental factors changed across adolescence and young adulthood. More specifically, smoking initiation and current quantity smoked remain separate constructs until young adulthood, when liabilities are correlated. Smoking initiation is explained by genetic, shared, and unique environmental factors in early adolescence and by genetic and unique environmental factors in young adulthood; while current quantity smoked is explained by shared environmental and unique environmental factors until young adulthood, when genetic and unique environmental factors play a larger role.
NASA Astrophysics Data System (ADS)
Zhang, Zaiqin; Ma, Hui; Liu, Zhiyuan; Geng, Yingsan; Wang, Jianhua
2018-04-01
The influence of an applied axial magnetic field on the current density distribution in the arc column and electrodes has been studied intensively. However, previous results provide only a qualitative explanation and cannot quantitatively explain recent experimental data on anode current density. The objective of this paper is to quantitatively determine the current constriction subjected to an axial magnetic field in high-current vacuum arcs according to the recent experimental data. A magnetohydrodynamic model is adopted to describe the high-current vacuum arcs. The vacuum arc is in a diffuse arc mode with an arc current ranging from 6 kA to 14 kA (rms) and an axial magnetic field ranging from 20 mT to 110 mT. By comparison with the recent experimental measurements of the current density distribution on the anode, the modelling results show that there are two types of current constriction. On one hand, the current on the cathode shows a constriction, termed the cathode constriction. On the other hand, the current constricts in the arc column region, termed the column constriction. The cathode boundary is of vital importance in a quantitative model. An improved cathode constriction boundary is proposed. Under the improved boundary, the simulation results are in good agreement with the recent experimental data on the anode current density distribution. It is demonstrated that the current density distribution at the anode is sensitive to that at the cathode, so that measurements of the anode current density can be used, in combination with the vacuum arc model, to infer the cathode current density distribution.
Lai, Yin-Hung; Chen, Bo-Gaun; Lee, Yuan Tseh; Wang, Yi-Sheng; Lin, Sheng Hsien
2014-08-15
Although several reaction models have been proposed in the literature to explain matrix-assisted laser desorption/ionization (MALDI), further study is still necessary to explore the important ionization pathways that occur under the high-temperature environment of MALDI. 2,4,6-Trihydroxyacetophenone (THAP) is an ideal compound for evaluating the contribution of thermal energy to an initial reaction with minimum side reactions. Desorbed neutral THAP and ions were measured using a crossed-molecular beam machine and commercial MALDI-TOF instrument, respectively. A quantitative model incorporating an Arrhenius-type desorption rate derived from transition state theory was proposed. Reaction enthalpy was calculated using GAUSSIAN 03 software with dielectric effect. Additional evidence of thermal-induced proton disproportionation was given by the indirect ionization of THAP embedded in excess fullerene molecules excited by a 450 nm laser. The quantitative model predicted that proton disproportionation of THAP would be achieved by thermal energy converted from a commonly used single UV laser photon. The dielectric effect reduced the reaction Gibbs free energy considerably even when the dielectric constant was reduced under high-temperature MALDI conditions. With minimum fitting parameters, observations of pure THAP and THAP mixed with fullerene both agreed with predictions. Proton disproportionation of solid THAP was energetically favorable with a single UV laser photon. The quantitative model revealed an important initial ionization pathway induced by the abrupt heating of matrix crystals. In the matrix crystals, the dielectric effect reduced reaction Gibbs free energy under typical MALDI conditions. The result suggested that thermal energy plays an important role in the initial ionization reaction of THAP. Copyright © 2014 John Wiley & Sons, Ltd.
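The Arrhenius-type rate mentioned above can be illustrated with the standard transition-state (Eyring) expression. The barrier height and temperatures below are placeholders chosen for illustration, not values taken from the THAP study.

import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
R_GAS = 8.314462618         # gas constant, J/(mol*K)

def eyring_rate(barrier_kj_per_mol, temperature_k):
    """Transition-state-theory rate constant, k = (kB*T/h) * exp(-dG/(R*T)).

    Generic Arrhenius-type form only; the barrier is a placeholder, not a
    parameter fitted in the MALDI study above.
    """
    dg = barrier_kj_per_mol * 1e3  # J/mol
    return (K_B * temperature_k / H_PLANCK) * np.exp(-dg / (R_GAS * temperature_k))

# the rate rises by many orders of magnitude as the matrix plume heats up
for temp in (300.0, 600.0, 900.0):
    print(f"{temp:5.0f} K -> k = {eyring_rate(100.0, temp):.2e} s^-1")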
A System Computational Model of Implicit Emotional Learning
Puviani, Luca; Rama, Sidita
2016-01-01
Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available in the literature, a computational model of implicit emotional learning based both on prediction-error computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies for emotion modulation. PMID:27378898
A System Computational Model of Implicit Emotional Learning.
Puviani, Luca; Rama, Sidita
2016-01-01
Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available in the literature, a computational model of implicit emotional learning based both on prediction-error computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies for emotion modulation.
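The core idea of learning driven by prediction errors can be illustrated with the classic delta-rule (Rescorla-Wagner) update. This generic sketch is not the authors' model, which additionally performs statistical inference over the unconditioned stimulus.

def delta_rule(us_sequence, v0=0.0, alpha=0.2):
    """Delta-rule (Rescorla-Wagner) update of an associative/emotional value V.

    V <- V + alpha * (US - V); the term (US - V) is the prediction error.
    Generic illustration only.
    """
    v, history = v0, []
    for us in us_sequence:
        v += alpha * (us - v)
        history.append(v)
    return history

acq = delta_rule([1.0] * 20)               # repeated CS-US pairings: value climbs toward 1
ext = delta_rule([0.0] * 20, v0=acq[-1])   # US omitted: value decays back toward 0
print(round(acq[-1], 3), round(ext[-1], 3))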
NASA Astrophysics Data System (ADS)
Shao, Xinxian
Bacterial infections are very common in human society. Thus, extensive research has been conducted to reveal the molecular mechanisms of pathogenesis and to evaluate the efficacy of antibiotics against bacteria. Little is known, however, about the population dynamics of bacterial populations and their interactions with the host's immune system. In this dissertation, a stochastic model featuring stochastic phenotypic switching of individual bacteria is developed to explain the single-variant bottleneck discovered in multi-strain bacterial infections. I explored early events in the establishment of a bacterial infection using the classical experiments of Moxon and Murphy on neonatal rats. I showed that the minimal model and its simple variants cannot account for the observations, and I proposed modifications to the model that could explain the data quantitatively. Bacterial infections are also commonly established in physical structures such as biofilms or 3-d colonies. In contrast, most research on antibiotic treatment of bacterial infections has been conducted in well-mixed liquid cultures. To explore the efficacy of antibiotics against such bacterial colonies, a broadly applicable method was designed and evaluated in which discrete bacterial colonies on 2-d surfaces were exposed to antibiotics. I discuss possible explanations and hypotheses for the experimental results. To verify these hypotheses, we investigated the dynamics of bacterial populations growing as 3-d colonies. We showed that a minimal mathematical model of bacterial colony growth in 3-d was able to account for the experimentally observed presence of a diffusion-limited regime. The model further revealed highly loose packing of the cells in 3-d colonies and smaller cell sizes in colonies than in planktonic cells from the corresponding liquid culture. Further experimental tests of the model predictions revealed that the ratio of the cell size in liquid culture to that in colony cultures was consistent with the model prediction, that dead cells emerged randomly in a colony, and that the cells packed heterogeneously in the outer part of a colony, possibly explaining the loose packing.
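A toy calculation, not the dissertation's model, shows why a single-variant bottleneck can arise when each inoculated cell independently has only a small chance of founding a systemic infection (for example after a rare stochastic phenotype switch). The strain counts and probability below are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

def single_variant_probability(n_strains=3, cells_per_strain=1000,
                               p_establish=3e-4, n_hosts=20000):
    """Toy bottleneck calculation (not the dissertation's full model).

    Each inoculated cell independently founds a systemic infection with a small
    probability. Returns the fraction of infected hosts in which only one
    strain is represented among the founders.
    """
    founders = rng.binomial(cells_per_strain, p_establish,
                            size=(n_hosts, n_strains))
    infected = founders.sum(axis=1) > 0
    single_strain = (founders > 0).sum(axis=1) == 1
    return (single_strain & infected).sum() / infected.sum()

print(single_variant_probability())  # typically around 0.7 with these toy numbers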
Aspiration of human neutrophils: effects of shear thinning and cortical dissipation.
Drury, J L; Dembo, M
2001-12-01
It is generally accepted that the human neutrophil can be mechanically represented as a droplet of polymeric fluid enclosed by some sort of thin, slippery, viscoelastic cortex. Many questions remain, however, about the detailed rheology and chemistry of the interior fluid and the cortex. To address these quantitative issues, we have used a finite element method to simulate the dynamics of neutrophils during micropipet aspiration using various plausible assumptions. The results were then systematically compared with aspiration experiments conducted at eight different combinations of pipet size and pressure. Models in which the cytoplasm was represented by a simple Newtonian fluid (i.e., models without shear thinning) were grossly incapable of accounting for the effects of pressure on the general time scale of neutrophil aspiration. Likewise, models in which the cortex was purely elastic (i.e., models without surface viscosity) were unable to explain the effects of pipet size on the general aspiration rate. Such models also failed to explain the rapid acceleration of the aspiration rate during the final phase of aspiration, nor could they account for the geometry of the neutrophil during various phases of aspiration. Thus, our results indicate that a minimal mechanical model of the neutrophil needs to incorporate both shear thinning and surface viscosity to remain valid over a reasonable range of conditions. At low shear rates, the surface dilatation viscosity of the neutrophil was found to be on the order of 100 poise-cm, whereas the viscosity of the interior cytoplasm was on the order of 1000 poise. Both the surface viscosity and the interior viscosity seem to decrease in a similar fashion when the shear rate exceeds approximately 0.05 s(-1). Unfortunately, even the models studied that include both surface viscosity and shear thinning are still not sufficient to fully explain all the features of neutrophil aspiration. In particular, the very high rate of aspiration during the initial moments after ramping of the pressure remains mysterious.
Aspiration of human neutrophils: effects of shear thinning and cortical dissipation.
Drury, J L; Dembo, M
2001-01-01
It is generally accepted that the human neutrophil can be mechanically represented as a droplet of polymeric fluid enclosed by some sort of thin, slippery, viscoelastic cortex. Many questions remain, however, about the detailed rheology and chemistry of the interior fluid and the cortex. To address these quantitative issues, we have used a finite element method to simulate the dynamics of neutrophils during micropipet aspiration using various plausible assumptions. The results were then systematically compared with aspiration experiments conducted at eight different combinations of pipet size and pressure. Models in which the cytoplasm was represented by a simple Newtonian fluid (i.e., models without shear thinning) were grossly incapable of accounting for the effects of pressure on the general time scale of neutrophil aspiration. Likewise, models in which the cortex was purely elastic (i.e., models without surface viscosity) were unable to explain the effects of pipet size on the general aspiration rate. Such models also failed to explain the rapid acceleration of the aspiration rate during the final phase of aspiration, nor could they account for the geometry of the neutrophil during various phases of aspiration. Thus, our results indicate that a minimal mechanical model of the neutrophil needs to incorporate both shear thinning and surface viscosity to remain valid over a reasonable range of conditions. At low shear rates, the surface dilatation viscosity of the neutrophil was found to be on the order of 100 poise-cm, whereas the viscosity of the interior cytoplasm was on the order of 1000 poise. Both the surface viscosity and the interior viscosity seem to decrease in a similar fashion when the shear rate exceeds approximately 0.05 s(-1). Unfortunately, even the models studied that include both surface viscosity and shear thinning are still not sufficient to fully explain all the features of neutrophil aspiration. In particular, the very high rate of aspiration during the initial moments after ramping of the pressure remains mysterious. PMID:11720983
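To make the reported shear-thinning behaviour concrete, the sketch below evaluates a generic Carreau-style viscosity law anchored to the order-of-magnitude values quoted above (about 1000 poise at low shear, thinning beyond roughly 0.05 s(-1)). The exponent and the functional form are assumptions, not the constitutive law fitted in the paper.

def shear_thinning_viscosity(shear_rate, mu0=1000.0, gamma_c=0.05, n=0.5):
    """Generic Carreau-style shear-thinning law (in poise), illustrative only.

    mu(g) = mu0 * (1 + (g/gamma_c)**2) ** ((n - 1) / 2): roughly constant below
    gamma_c and thinning above it. The exponent n and the exact form are
    placeholders, not fitted values.
    """
    return mu0 * (1.0 + (shear_rate / gamma_c) ** 2) ** ((n - 1.0) / 2.0)

for g in (0.005, 0.05, 0.5, 5.0):
    print(f"shear rate {g:5.3f} s^-1 -> viscosity ~ {shear_thinning_viscosity(g):7.0f} poise")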
On agent-based modeling and computational social science.
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed, focusing on the role of generative theories, which aim to explain phenomena by growing them. After a brief analysis of the major strengths of the field, some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and overshadowed the application of rich cognitive models. In the second part of the paper, the renewed interest in Computational Social Science (CSS) is examined, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM while reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of CSS.
On agent-based modeling and computational social science
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed, focusing on the role of generative theories, which aim to explain phenomena by growing them. After a brief analysis of the major strengths of the field, some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and overshadowed the application of rich cognitive models. In the second part of the paper, the renewed interest in Computational Social Science (CSS) is examined, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM while reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of CSS. PMID:25071642
A minimal model of epithelial tissue dynamics and its application to the corneal epithelium
NASA Astrophysics Data System (ADS)
Henkes, Silke; Matoz-Fernandez, Daniel; Kostanjevec, Kaja; Coburn, Luke; Sknepnek, Rastko; Collinson, J. Martin; Martens, Kirsten
Epithelial cell sheets are characterized by a complex interplay of active drivers, including cell motility, cell division and extrusion. Here we construct a particle-based minimal model tissue with only division/death dynamics and show that it always corresponds to a liquid state with a single dynamic time scale set by the division rate, and that no glassy phase is possible. Building on this, we construct an in-silico model of the mammalian corneal epithelium as such a tissue confined to a hemisphere bordered by the limbal stem cell zone. With added cell motility dynamics we are able to explain the steady-state spiral migration on the cornea, including the central vortex defect, and quantitatively compare it to eyes obtained from mice that are X-inactivation mosaic for LacZ.
Kalinowska, Barbara; Banach, Mateusz; Konieczny, Leszek; Marchewka, Damian; Roterman, Irena
2014-01-01
This work discusses the role of unstructured polypeptide chain fragments in shaping the protein's hydrophobic core. Based on the "fuzzy oil drop" model, which assumes an idealized distribution of hydrophobicity density described by the 3D Gaussian, we can determine which fragments make up the core and pinpoint residues whose location conflicts with theoretical predictions. We show that the structural influence of the water environment determines the positions of disordered fragments, leading to the formation of a hydrophobic core overlaid by a hydrophilic mantle. This phenomenon is further described by studying selected proteins which are known to be unstable and contain intrinsically disordered fragments. Their properties are established quantitatively, explaining the causative relation between the protein's structure and function and facilitating further comparative analyses of various structural models. © 2014 Elsevier Inc. All rights reserved.
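A minimal sketch of the "fuzzy oil drop" idea, under stated assumptions: expected hydrophobicity follows a 3D Gaussian centred on the molecule, and a divergence score measures how far an observed profile departs from it. The residue coordinates, the choice of sigma, and the stand-in observed profile below are hypothetical, and the published model's definition of observed hydrophobicity from inter-residue contacts is omitted.

import numpy as np

def theoretical_hydrophobicity(positions, sigmas):
    """'Fuzzy oil drop'-style expected hydrophobicity: a 3D Gaussian centred on
    the molecule, evaluated at residue positions and normalised to sum to 1."""
    t = np.exp(-0.5 * np.sum((positions / sigmas) ** 2, axis=1))
    return t / t.sum()

def divergence(observed, theoretical):
    """Kullback-Leibler-type score of how far the observed hydrophobicity
    profile departs from the idealised Gaussian core."""
    o = observed / observed.sum()
    return float(np.sum(o * np.log2(o / theoretical)))

rng = np.random.default_rng(0)
pos = rng.normal(scale=8.0, size=(120, 3))     # hypothetical residue coordinates (angstroms)
pos -= pos.mean(axis=0)
sig = np.abs(pos).max(axis=0) / 3.0            # assumed convention: molecule fits within ~3 sigma
T = theoretical_hydrophobicity(pos, sig)
O = T * rng.uniform(0.5, 1.5, size=T.shape)    # stand-in for an observed profile
print("O|T divergence:", round(divergence(O, T), 4))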
NASA Astrophysics Data System (ADS)
van der Marel, Roeland P.; van Dokkum, Pieter G.
2007-10-01
We present spatially resolved stellar rotation velocity and velocity dispersion profiles from Keck/LRIS absorption-line spectra for 25 galaxies, mostly visually classified ellipticals, in three clusters at z~0.5. We interpret the kinematical data and HST photometry using oblate axisymmetric two-integral f(E,Lz) dynamical models based on the Jeans equations. This yields good fits, provided that the seeing and observational characteristics are carefully modeled. The fits yield for each galaxy the dynamical mass-to-light ratio (M/L) and a measure of the galaxy rotation rate. Paper II addresses the implied M/L evolution. Here we study the rotation-rate evolution by comparison to a sample of local elliptical galaxies of similar present-day luminosity. The brightest galaxies in the sample all rotate too slowly to account for their flattening, as is also observed at z=0. But the average rotation rate is higher at z~0.5 than locally. This may be due to a higher fraction of misclassified S0 galaxies (although this effect is insufficient to explain the observed strong evolution of the cluster S0 fraction with redshift). Alternatively, dry mergers between early-type galaxies may have decreased the average rotation rate over time. It is unclear whether such mergers are numerous enough in clusters to explain the observed trend quantitatively. Disk-disk mergers may affect the comparison through the so-called "progenitor bias," but this cannot explain the direction of the observed rotation-rate evolution. Additional samples are needed to constrain possible environmental dependencies and cosmic variance in galaxy rotation rates. Either way, studies of the internal stellar dynamics of distant galaxies provide a valuable new approach for exploring galaxy evolution.
Fawcett, Tim W.; Higginson, Andrew D.; Metsä-Simola, Niina; Hagen, Edward H.; Houston, Alasdair I.; Martikainen, Pekka
2017-01-01
Divorce is associated with an increased probability of a depressive episode, but the causation of events remains unclear. Adaptive models of depression propose that depression is in part a social strategy, whereas non-adaptive models tend to propose a diathesis-stress mechanism. We compare an adaptive evolutionary model of depression to three alternative non-adaptive models with respect to their ability to explain the temporal pattern of depression around the time of divorce. Register-based data (304,112 individuals drawn from a random sample of 11% of Finnish people) on antidepressant purchases is used as a proxy for depression. This proxy affords an unprecedented temporal resolution (3-monthly prevalence estimates over 10 years) without any bias from non-compliance, and it can be linked with underlying episodes via a statistical model. The evolutionary-adaptation model (all time periods with risk of divorce are depressogenic) was the best quantitative description of the data. The non-adaptive stress-relief model (period before divorce is depressogenic and period afterwards is not) provided the second best quantitative description of the data. The peak-stress model (periods before and after divorce can be depressogenic) fit the data less well, and the stress-induction model (period following divorce is depressogenic and the preceding period is not) did not fit the data at all. The evolutionary model was the most detailed mechanistic description of the divorce-depression link among the models, and the best fit in terms of predicted curvature; thus, it offers the most rigorous hypotheses for further study. The stress-relief model also fit very well and was the best model in a sensitivity analysis, encouraging development of more mechanistic models for that hypothesis. PMID:28614385
Rosenström, Tom; Fawcett, Tim W; Higginson, Andrew D; Metsä-Simola, Niina; Hagen, Edward H; Houston, Alasdair I; Martikainen, Pekka
2017-01-01
Divorce is associated with an increased probability of a depressive episode, but the causation of events remains unclear. Adaptive models of depression propose that depression is in part a social strategy, whereas non-adaptive models tend to propose a diathesis-stress mechanism. We compare an adaptive evolutionary model of depression to three alternative non-adaptive models with respect to their ability to explain the temporal pattern of depression around the time of divorce. Register-based data (304,112 individuals drawn from a random sample of 11% of Finnish people) on antidepressant purchases is used as a proxy for depression. This proxy affords an unprecedented temporal resolution (3-monthly prevalence estimates over 10 years) without any bias from non-compliance, and it can be linked with underlying episodes via a statistical model. The evolutionary-adaptation model (all time periods with risk of divorce are depressogenic) was the best quantitative description of the data. The non-adaptive stress-relief model (period before divorce is depressogenic and period afterwards is not) provided the second best quantitative description of the data. The peak-stress model (periods before and after divorce can be depressogenic) fit the data less well, and the stress-induction model (period following divorce is depressogenic and the preceding period is not) did not fit the data at all. The evolutionary model was the most detailed mechanistic description of the divorce-depression link among the models, and the best fit in terms of predicted curvature; thus, it offers the most rigorous hypotheses for further study. The stress-relief model also fit very well and was the best model in a sensitivity analysis, encouraging development of more mechanistic models for that hypothesis.
The neural optimal control hierarchy for motor control
NASA Astrophysics Data System (ADS)
DeWolf, T.; Eliasmith, C.
2011-10-01
Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.
ERIC Educational Resources Information Center
Burdett, Kimberli R.
2013-01-01
The purpose of this study was to gain a better understanding of how current internet-based resources are affecting the college choice process. An explanatory mixed methods design was used, and the study involved collecting qualitative data after a quantitative phase to explain the quantitative data in greater depth. An additional study was…
ERIC Educational Resources Information Center
Parmelee, John H.; Perkins, Stephynie C.; Sayre, Judith J.
2007-01-01
This study uses a sequential transformative mixed methods research design to explain how political advertising fails to engage college students. Qualitative focus groups examined how college students interpret the value of political advertising to them, and a quantitative manifest content analysis concerning ad framing of more than 100 ads from…
[A comparison of convenience sampling and purposive sampling].
Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien
2014-06-01
Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.
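As a worked example of the statistical power analysis mentioned above (illustrative numbers, not taken from the article), the statsmodels power module can be used to relate effect size, sample size, and power for a two-group comparison.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium standardized effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05 in a two-group comparison.
print(round(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)))  # about 64

# Conversely, the power achieved by a fixed convenience sample of 40 per group.
print(round(analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=40), 2))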
Effective S = 2 antiferromagnetic spin chain in the salt (o-MePy-V)FeCl4
NASA Astrophysics Data System (ADS)
Iwasaki, Y.; Kida, T.; Hagiwara, M.; Kawakami, T.; Hosokoshi, Y.; Tamekuni, Y.; Yamaguchi, H.
2018-02-01
We present a model compound for the S = 2 antiferromagnetic (AF) spin chain composed of the salt (o-MePy-V)FeCl4. Ab initio molecular-orbital calculations indicate the formation of a partially stacked two-dimensional (2D) spin model comprising five types of exchange interactions between S = 1/2 and S = 5/2 spins, which are located on the verdazyl radical and the Fe ion, respectively. The magnetic properties of the synthesized crystals indicate that the dominant interaction between the S = 1/2 and S = 5/2 spins stabilizes an S = 2 spin in the low-temperature region, and an effective S = 2 AF chain is formed for T ≪ 10 K and H < 4 T. We explain the magnetization curve and electron-spin-resonance modes quantitatively based on the S = 2 AF chain. At higher fields above 4 T, the magnetization curve assumes two-thirds of the full saturation value for fields between 4 and 20 T, and approaches saturation at ~40 T. The spin model in the high-field region can be considered as a quasi-2D S = 1/2 honeycomb lattice under an effective internal field caused by the fully polarized S = 5/2 spins.
NASA Astrophysics Data System (ADS)
Fiedler, A.; Schewski, R.; Baldini, M.; Galazka, Z.; Wagner, G.; Albrecht, M.; Irmscher, K.
2017-10-01
We present a quantitative model that addresses the influence of incoherent twin boundaries on the electrical properties in β-Ga2O3. This model can explain the mobility collapse below a threshold electron concentration of 1 × 10^18 cm^-3 as well as partly the low doping efficiency in β-Ga2O3 layers grown homoepitaxially by metal-organic vapor phase epitaxy on (100) substrates of only slight off-orientation. A structural analysis by transmission electron microscopy (TEM) reveals a high density of twin lamellae in these layers. In contrast to the coherent twin boundaries parallel to the (100) plane, the lateral incoherent twin boundaries exhibit one dangling bond per unit cell that acts as an acceptor-like electron trap. Since the twin lamellae are thin, we consider the incoherent twin boundaries to be line defects with a density of 10^11-10^12 cm^-2 as determined by TEM. We estimate the influence of the incoherent twin boundaries on the electrical transport properties by adapting Read's model of charged dislocations. Our calculations quantitatively confirm that the mobility reduction and collapse as well as partly the compensation are due to the presence of twin lamellae.
Hu, Bo; Tu, Yuhai
2013-01-01
It is essential for bacteria to find optimal conditions for their growth and survival. The optimal levels of certain environmental factors (such as pH and temperature) often correspond to some intermediate points of the respective gradients. This requires the ability of bacteria to navigate from both directions toward the optimum location and is distinct from the conventional unidirectional chemotactic strategy. Remarkably, Escherichia coli cells can perform such a precision sensing task in pH taxis by using the same chemotaxis machinery, but with opposite pH responses from two different chemoreceptors (Tar and Tsr). To understand bacterial pH sensing, we developed an Ising-type model for a mixed cluster of opposing receptors based on the push-pull mechanism. Our model can quantitatively explain experimental observations in pH taxis for various mutants and wild-type cells. We show how the preferred pH level depends on the relative abundance of the competing sensors and how the sensory activity regulates the behavioral response. Our model allows us to make quantitative predictions on signal integration of pH and chemoattractant stimuli. Our study reveals two general conditions and a robust push-pull scheme for precision sensing, which should be applicable in other adaptive sensory systems with opposing gradient sensors. PMID:23823247
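A caricature of the push-pull idea, not the published Ising-type model: each receptor species contributes a saturating, opposite-signed pH-dependent free energy, and the pH at which the cluster's response to a small pH step changes sign shifts with the Tar:Tsr ratio. All constants and functional forms below are placeholders.

import numpy as np

def cluster_free_energy(ph, n_tar, n_tsr, k_tar=6.8, k_tsr=7.6, width=0.3, eps=2.0):
    """Illustrative free energy of a mixed Tar/Tsr cluster: saturating,
    opposite-signed pH terms implement the push-pull idea. Placeholder constants."""
    f_tar = eps * np.tanh((ph - k_tar) / width)     # one receptor pushes activity down as pH rises
    f_tsr = -eps * np.tanh((ph - k_tsr) / width)    # the other pulls it up
    return n_tar * f_tar + n_tsr * f_tsr

def response_sign(ph, n_tar, n_tsr, dph=0.01):
    """Sign of the activity change for a small upward pH step; the zero crossing
    marks the pH toward which the cell effectively navigates."""
    activity = lambda p: 1.0 / (1.0 + np.exp(cluster_free_energy(p, n_tar, n_tsr)))
    return np.sign(activity(ph + dph) - activity(ph))

grid = np.linspace(6.5, 8.0, 151)
for n_tar, n_tsr in [(2, 1), (1, 1), (1, 2)]:
    s = response_sign(grid, n_tar, n_tsr)
    crossing = grid[np.where(np.diff(s) != 0)[0]]
    print(f"Tar:Tsr = {n_tar}:{n_tsr} -> response changes sign near pH {crossing.round(2)}")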
Vallejo, Roger L.; Liu, Sixin; Gao, Guangtu; Fragomeni, Breno O.; Hernandez, Alvaro G.; Leeds, Timothy D.; Parsons, James E.; Martin, Kyle E.; Evenhuis, Jason P.; Welch, Timothy J.; Wiens, Gregory D.; Palti, Yniv
2017-01-01
Bacterial cold water disease (BCWD) causes significant mortality and economic losses in salmonid aquaculture. In previous studies, we identified moderate-large effect quantitative trait loci (QTL) for BCWD resistance in rainbow trout (Oncorhynchus mykiss). However, the recent availability of a 57 K SNP array and a reference genome assembly have enabled us to conduct genome-wide association studies (GWAS) that overcome several experimental limitations from our previous work. In the current study, we conducted GWAS for BCWD resistance in two rainbow trout breeding populations using two genotyping platforms, the 57 K Affymetrix SNP array and restriction-associated DNA (RAD) sequencing. Overall, we identified 14 moderate-large effect QTL that explained up to 60.8% of the genetic variance in one of the two populations and 27.7% in the other. Four of these QTL were found in both populations explaining a substantial proportion of the variance, although major differences were also detected between the two populations. Our results confirm that BCWD resistance is controlled by the oligogenic inheritance of few moderate-large effect loci and a large-unknown number of loci each having a small effect on BCWD resistance. We detected differences in QTL number and genome location between two GWAS models (weighted single-step GBLUP and Bayes B), which highlights the utility of using different models to uncover QTL. The RAD-SNPs detected a greater number of QTL than the 57 K SNP array in one population, suggesting that the RAD-SNPs may uncover polymorphisms that are more unique and informative for the specific population in which they were discovered. PMID:29109734
Vallejo, Roger L; Liu, Sixin; Gao, Guangtu; Fragomeni, Breno O; Hernandez, Alvaro G; Leeds, Timothy D; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Welch, Timothy J; Wiens, Gregory D; Palti, Yniv
2017-01-01
Bacterial cold water disease (BCWD) causes significant mortality and economic losses in salmonid aquaculture. In previous studies, we identified moderate-large effect quantitative trait loci (QTL) for BCWD resistance in rainbow trout ( Oncorhynchus mykiss ). However, the recent availability of a 57 K SNP array and a reference genome assembly have enabled us to conduct genome-wide association studies (GWAS) that overcome several experimental limitations from our previous work. In the current study, we conducted GWAS for BCWD resistance in two rainbow trout breeding populations using two genotyping platforms, the 57 K Affymetrix SNP array and restriction-associated DNA (RAD) sequencing. Overall, we identified 14 moderate-large effect QTL that explained up to 60.8% of the genetic variance in one of the two populations and 27.7% in the other. Four of these QTL were found in both populations explaining a substantial proportion of the variance, although major differences were also detected between the two populations. Our results confirm that BCWD resistance is controlled by the oligogenic inheritance of few moderate-large effect loci and a large-unknown number of loci each having a small effect on BCWD resistance. We detected differences in QTL number and genome location between two GWAS models (weighted single-step GBLUP and Bayes B), which highlights the utility of using different models to uncover QTL. The RAD-SNPs detected a greater number of QTL than the 57 K SNP array in one population, suggesting that the RAD-SNPs may uncover polymorphisms that are more unique and informative for the specific population in which they were discovered.
How do Turkish High School Graduates Use the Wave Theory of Light to Explain Optics Phenomena?
ERIC Educational Resources Information Center
Sengoren, S. K.
2010-01-01
This research was intended to investigate whether Turkish students who had graduated from high school used the wave theory of light properly in explaining optical phenomena. The survey method was used in this research. The data, which were collected from 175 first year university students in Turkey, were analysed quantitatively and qualitatively.…
Direct observation of surface-state thermal oscillations in SmB6 oscillators
NASA Astrophysics Data System (ADS)
Casas, Brian; Stern, Alex; Efimkin, Dmitry K.; Fisk, Zachary; Xia, Jing
2018-01-01
SmB6 is a mixed valence Kondo insulator that exhibits a sharp increase in resistance following an activated behavior that levels off and saturates below 4 K. This behavior can be explained by the proposal of SmB6 representing a new state of matter, a topological Kondo insulator, in which a Kondo gap is developed, and topologically protected surface conduction dominates low-temperature transport. Exploiting its nonlinear dynamics, a tunable SmB6 oscillator device was recently demonstrated, where a small dc current generates large oscillating voltages at frequencies from a few Hz to hundreds of MHz. This behavior was explained by a theoretical model describing the thermal and electronic dynamics of coupled surface and bulk states. However, a crucial aspect of this model, the predicted temperature oscillation in the surface state, has not been experimentally observed to date. This is largely due to the technical difficulty of detecting an oscillating temperature of the very thin surface state. Here we report direct measurements of the time-dependent surface-state temperature in SmB6 with a RuO2 microthermometer. Our results agree quantitatively with the theoretically simulated temperature waveform, and hence support the validity of the oscillator model, which will provide accurate theoretical guidance for developing future SmB6 oscillators at higher frequencies.
Zhu, Dan; Ciais, Philippe; Chang, Jinfeng; Krinner, Gerhard; Peng, Shushi; Viovy, Nicolas; Peñuelas, Josep; Zimov, Sergey
2018-04-01
Large herbivores are a major agent in ecosystems, influencing vegetation structure and carbon and nutrient flows. During the last glacial period, a mammoth steppe ecosystem prevailed in the unglaciated northern lands, supporting a high diversity and density of megafaunal herbivores. The apparent discrepancy between abundant megafauna and the expected low vegetation productivity under a generally harsher climate with a lower CO2 concentration, termed the productivity paradox, requires large-scale quantitative analysis using process-based ecosystem models. However, most current dynamic global vegetation models (DGVMs) lack explicit representation of large herbivores. Here we incorporated a grazing module in a DGVM based on physiological and demographic equations for wild large grazers, taking into account feedbacks of large grazers on vegetation. The model was applied globally for the present day and the Last Glacial Maximum (LGM). The present-day results of potential grazer biomass, combined with an empirical land-use map, infer a reduction in wild grazer biomass of 79-93% owing to anthropogenic land replacement of natural grasslands. For the LGM, we find that the larger mean body size of mammalian herbivores compared with today is the crucial clue to explaining the productivity paradox, owing to the more efficient exploitation of grass production by large-bodied grazers.
Structural and electron diffraction scaling of twisted graphene bilayers
NASA Astrophysics Data System (ADS)
Zhang, Kuan; Tadmor, Ellad B.
2018-03-01
Multiscale simulations are used to study the structural relaxation in twisted graphene bilayers and the associated electron diffraction patterns. The initial twist forms an incommensurate moiré pattern that relaxes to a commensurate microstructure comprised of a repeating pattern of alternating low-energy AB and BA domains surrounding a high-energy AA domain. The simulations show that the relaxation mechanism involves a localized rotation and shrinking of the AA domains that scales in two regimes with the imposed twist. For small twisting angles, the localized rotation tends to a constant; for large twist, the rotation scales linearly with it. This behavior is tied to the inverse scaling of the moiré pattern size with twist angle and is explained theoretically using a linear elasticity model. The results are validated experimentally through a simulated electron diffraction analysis of the relaxed structures. A complex electron diffraction pattern involving the appearance of weak satellite peaks is predicted for the small twist regime. This new diffraction pattern is explained using an analytical model in which the relaxation kinematics are described as an exponentially-decaying (Gaussian) rotation field centered on the AA domains. Both the angle-dependent scaling and diffraction patterns are in quantitative agreement with experimental observations. A Matlab program for extracting the Gaussian model parameters accompanies this paper.
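The inverse scaling of the moiré pattern size with twist angle invoked above follows from the standard rigid-lattice geometric relation, sketched below; it ignores the atomic relaxation that is the subject of the paper and assumes the textbook graphene lattice constant.

import numpy as np

A_GRAPHENE = 0.246  # nm, textbook graphene lattice constant

def moire_period(theta_deg, a=A_GRAPHENE):
    """Rigid-lattice moire period of a twisted bilayer, L = a / (2*sin(theta/2));
    atomic relaxation is not included."""
    theta = np.deg2rad(theta_deg)
    return a / (2.0 * np.sin(theta / 2.0))

for angle in (0.5, 1.1, 2.0, 5.0):
    print(f"twist {angle:3.1f} deg -> moire period ~ {moire_period(angle):5.1f} nm")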
Rix, Michael G; Edwards, Danielle L; Byrne, Margaret; Harvey, Mark S; Joseph, Leo; Roberts, J Dale
2015-08-01
The south-western land division of Western Australia (SWWA), bordering the temperate Southern and Indian Oceans, is the only global biodiversity hotspot recognised in Australia. Renowned for its extraordinary diversity of endemic plants, and for some of the largest and most botanically significant temperate heathlands and woodlands on Earth, SWWA has long fascinated biogeographers. Its flat, highly weathered topography and the apparent absence of major geographic factors usually implicated in biotic diversification have challenged attempts to explain patterns of biogeography and mechanisms of speciation in the region. Botanical studies have always been central to understanding the biodiversity values of SWWA, although surprisingly few quantitative botanical analyses have allowed for an understanding of historical biogeographic processes in both space and time. Faunistic studies, by contrast, have played little or no role in defining hotspot concepts, despite several decades of accumulating quantitative research on the phylogeny and phylogeography of multiple lineages. In this review we critically analyse datasets with explicit supporting phylogenetic data and estimates of the time since divergence for all available elements of the terrestrial fauna, and compare these datasets to those available for plants. In situ speciation has played more of a role in shaping the south-western Australian fauna than has long been supposed, and has occurred in numerous endemic lineages of freshwater fish, frogs, reptiles, snails and less-vagile arthropods. By contrast, relatively low levels of endemism are found in birds, mammals and highly dispersive insects, and in situ speciation has played a negligible role in generating local endemism in birds and mammals. Quantitative studies provide evidence for at least four mechanisms driving patterns of endemism in south-western Australian animals, including: (i) relictualism of ancient Gondwanan or Pangaean taxa in the High Rainfall Province; (ii) vicariant isolation of lineages west of the Nullarbor divide; (iii) in situ speciation; and (iv) recent population subdivision. From dated quantitative studies we derive four testable models of historical biogeography for animal taxa in SWWA, each explicit in providing a spatial, temporal and topological perspective on patterns of speciation or divergence. For each model we also propose candidate lineages that may be worthy of further study, given what we know of their taxonomy, distributions or relationships. These models formalise four of the strongest patterns seen in many animal taxa from SWWA, although other models are clearly required to explain particular, idiosyncratic patterns. Generating numerous new datasets for suites of co-occurring lineages in SWWA will help refine our understanding of the historical biogeography of the region, highlight gaps in our knowledge, and allow us to derive general postulates from quantitative (rather than qualitative) results. For animals, this process has now begun in earnest, as has the process of taxonomically documenting many of the more diverse invertebrate lineages. The latter remains central to any attempt to appreciate holistically biogeographic patterns and processes in SWWA, and molecular phylogenetic studies should - where possible - also lead to tangible taxonomic outcomes. © 2014 The Authors. Biological Reviews © 2014 Cambridge Philosophical Society.
Watson, Roger
2015-04-01
This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.
Contraction and stress-dependent growth shape the forebrain of the early chicken embryo.
Garcia, Kara E; Okamoto, Ruth J; Bayly, Philip V; Taber, Larry A
2017-01-01
During early vertebrate development, local constrictions, or sulci, form to divide the forebrain into the diencephalon, telencephalon, and optic vesicles. These partitions are maintained and exaggerated as the brain tube inflates, grows, and bends. Combining quantitative experiments on chick embryos with computational modeling, we investigated the biophysical mechanisms that drive these changes in brain shape. Chemical perturbations of contractility indicated that actomyosin contraction plays a major role in the creation of initial constrictions (Hamburger-Hamilton stages HH11-12), and fluorescent staining revealed that F-actin is circumferentially aligned at all constrictions. A finite element model based on these findings shows that the observed shape changes are consistent with circumferential contraction in these regions. To explain why sulci continue to deepen as the forebrain expands (HH12-20), we speculate that growth depends on wall stress. This idea was examined by including stress-dependent growth in a model with cerebrospinal fluid pressure and bending (cephalic flexure). The results given by the model agree with observed morphological changes that occur in the brain tube under normal and reduced eCSF pressure, quantitative measurements of relative sulcal depth versus time, and previously published patterns of cell proliferation. Taken together, our results support a biphasic mechanism for forebrain morphogenesis consisting of differential contractility (early) and stress-dependent growth (late). Copyright © 2016 Elsevier Ltd. All rights reserved.
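A toy version of the stress-dependent growth idea (all parameters are placeholders, and the Laplace-law stress estimate is an assumption, not the finite element model used in the study): two wall regions under the same pressure grow at a rate proportional to the excess of wall stress over a target value, so the wider "dome" outgrows the constricted "sulcus" and the relative constriction deepens over time.

import numpy as np

def grow(radii, pressure=1.0, thickness=0.1, sigma_target=1.0,
         tau=5.0, dt=0.01, t_end=1.5):
    """Toy stress-dependent growth law: radial growth rate proportional to
    (wall stress - target stress), with stress from a Laplace-law estimate
    sigma = p*r/(2*h). All values are placeholders."""
    r = np.array(radii, dtype=float)
    for _ in range(int(t_end / dt)):
        sigma = pressure * r / (2.0 * thickness)
        r += dt * r * (sigma - sigma_target) / tau   # grows faster where stress is higher
    return r

start = [0.3, 0.6]                 # 'sulcus' vs 'dome' radius, arbitrary units
end = grow(start)
print("relative constriction before:", round(1 - start[0] / start[1], 2))
print("relative constriction after :", round(1 - end[0] / end[1], 2))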
Quantifying reactive transport processes governing arsenic mobility in a Bengal Delta aquifer
NASA Astrophysics Data System (ADS)
Rawson, Joey; Neidhardt, Harald; Siade, Adam; Berg, Michael; Prommer, Henning
2017-04-01
Over the last few decades significant progress has been made to characterize the extent and severity of groundwater arsenic pollution in S/SE Asia, and to understand the underlying geochemical processes. However, comparably little effort has been made to merge the findings from this research into quantitative frameworks that allow for a process-based quantitative analysis of observed arsenic behavior and predictions of its future fate. Therefore, this study developed and tested field-scale numerical modelling approaches to represent the primary and secondary geochemical processes associated with the reductive dissolution of Fe-oxy(hydr)oxides and the concomitant release of sorbed arsenic. We employed data from an in situ field experiment in the Bengal Delta Plain, which investigated the influence of labile organic matter (sucrose) on the mobility of Fe, Mn, and As. The data collected during the field experiment were used to guide our model development and to constrain the model parameterisation. Our results show that sucrose oxidation coupled to the reductive dissolution of Fe-oxy(hydr)oxides was accompanied by multiple secondary geochemical reactions that are not easily and uniquely identifiable and quantifiable. Those secondary reactions can explain the disparity between the observed Fe and As behavior. Our modelling results suggest that a significant fraction of the released As is scavenged through (co-)precipitation with newly formed Fe-minerals, specifically magnetite, rather than through sorption to pre-existing and freshly precipitated iron minerals.
DeLong, John P; Hanley, Torrance C
2013-01-01
The identification of trade-offs is necessary for understanding the evolution and maintenance of diversity. Here we employ the supply-demand (SD) body size optimization model to predict a trade-off between asymptotic body size and growth rate. We use the SD model to quantitatively predict the slope of the relationship between asymptotic body size and growth rate under high and low food regimes and then test the predictions against observations for Daphnia ambigua. Close quantitative agreement between observed and predicted slopes at both food levels lends support to the model and confirms that a 'rate-size' trade-off structures life history variation in this population. In contrast to classic life history expectations, growth and reproduction were positively correlated after controlling for the rate-size trade-off. We included 12 Daphnia clones in our study, but clone identity explained only some of the variation in life history traits. We also tested the hypothesis that growth rate would be positively related to intergenic spacer length (i.e. the growth rate hypothesis) across clones, but we found that clones with intermediate intergenic spacer lengths had larger asymptotic sizes and slower growth rates. Our results strongly support a resource-based optimization of body size following the SD model. Furthermore, because some resource allocation decisions necessarily precede others, understanding interdependent life history traits may require a more nested approach.
A semantic web framework to integrate cancer omics data with biological knowledge.
Holford, Matthew E; McCusker, James P; Cheung, Kei-Hoi; Krauthammer, Michael
2012-01-25
The RDF triple provides a simple linguistic means of describing limitless types of information. Triples can be flexibly combined into a unified data source we call a semantic model. Semantic models open new possibilities for the integration of variegated biological data. We use Semantic Web technology to explicate high throughput clinical data in the context of fundamental biological knowledge. We have extended Corvus, a data warehouse which provides a uniform interface to various forms of Omics data, by providing a SPARQL endpoint. With the querying and reasoning tools made possible by the Semantic Web, we were able to explore quantitative semantic models retrieved from Corvus in the light of systematic biological knowledge. For this paper, we merged semantic models containing genomic, transcriptomic and epigenomic data from melanoma samples with two semantic models of functional data - one containing Gene Ontology (GO) data, the other, regulatory networks constructed from transcription factor binding information. These two semantic models were created in an ad hoc manner but support a common interface for integration with the quantitative semantic models. Such combined semantic models allow us to pose significant translational medicine questions. Here, we study the interplay between a cell's molecular state and its response to anti-cancer therapy by exploring the resistance of cancer cells to Decitabine, a demethylating agent. We were able to generate a testable hypothesis to explain how Decitabine fights cancer - namely, that it targets apoptosis-related gene promoters predominantly in Decitabine-sensitive cell lines, thus conveying its cytotoxic effect by activating the apoptosis pathway. Our research provides a framework whereby similar hypotheses can be developed easily.
Strains on the nano- and microscale in nickel-titanium: An advanced TEM study
NASA Astrophysics Data System (ADS)
Tirry, Wim
2007-12-01
A general introduction to shape memory behavior and the martensitic transformation is given in chapter 1, with specific information concerning the NiTi material. The technique used to study the material is transmission electron microscopy (TEM), the basics of which are explained in chapter 2. The main goal was to apply more advanced TEM techniques in order to measure some aspects in a quantitative rather than qualitative way, the latter being mostly the case in conventional TEM. (1) Quantitative electron diffraction was used to refine the structure of Ni4Ti3 precipitates; this was done by using the MSLS method in combination with density functional theory (DFT) calculations. (2) These Ni4Ti3 precipitates are (semi-)coherent, which results in a strain field in the matrix close to the precipitate. High resolution TEM (HRTEM) in combination with image processing techniques was used to measure these strain fields. The obtained results are compared to the Eshelby model for elliptical inclusions, and the major difference is an underestimation of the strain magnitude by the model. One of the algorithms used to extract strain information from HRTEM images is the geometric phase method. (3) The Ni4Ti3-Ni4Ti3 and Ni4Ti3-precipitate interfaces were investigated with HRTEM, showing that the Ni4Ti3-precipitate interface might be diffuse over a range of 3 nm. (4) In-situ straining experiments were performed on single crystalline and superelastic polycrystalline NiTi samples. It seems that the strain-induced martensite planes in the polycrystalline sample show no sign of twinning. This contradicts what is expected and is discussed in view of the crystallographic theory of martensite; in addition, a first model explaining this behavior is proposed. In this dissertation, the main attention is divided between the material aspects of NiTi and the application of these more advanced TEM techniques.
Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.
Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S
2016-01-01
Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
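The flavour of the preferred "stochastic instability" explanation can be conveyed with a toy multiplicative-growth experiment (parameters arbitrary, not calibrated to any bedform data): sizes that grow exponentially on average while being randomly perturbed up and down develop a strongly right-skewed, approximately log-normal distribution.

import numpy as np

rng = np.random.default_rng(42)

def grow_bedforms(n=5000, steps=400, growth=0.01, noise=0.05, h0=1.0):
    """Toy stochastic-instability run: sizes grow exponentially on average while
    being randomly perturbed up and down at each step (multiplicative noise)."""
    h = np.full(n, h0)
    for _ in range(steps):
        h *= np.exp(growth + noise * rng.standard_normal(n))
    return h

def skew(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

sizes = grow_bedforms()
print("skewness of sizes     :", round(skew(sizes), 2))         # strongly right-skewed
print("skewness of log(sizes):", round(skew(np.log(sizes)), 2)) # near zero: roughly log-normal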
Yalcin, Semra; Leroux, Shawn James
2018-04-14
Land-cover and climate change are two main drivers of changes in species ranges. Yet, the majority of studies investigating the impacts of global change on biodiversity focus on one global change driver and usually use simulations to project biodiversity responses to future conditions. We conduct an empirical test of the relative and combined effects of land-cover and climate change on species occurrence changes. Specifically, we examine whether observed local colonization and extinctions of North American birds between 1981-1985 and 2001-2005 are correlated with land-cover and climate change and whether bird life history and ecological traits explain interspecific variation in observed occurrence changes. We fit logistic regression models to test the impact of physical land-cover change, changes in net primary productivity, winter precipitation, mean summer temperature, and mean winter temperature on the probability of Ontario breeding bird local colonization and extinction. Models with climate change, land-cover change, and the combination of these two drivers were the top ranked models of local colonization for 30%, 27%, and 29% of species, respectively. Conversely, models with climate change, land-cover change, and the combination of these two drivers were the top ranked models of local extinction for 61%, 7%, and 9% of species, respectively. The quantitative impacts of land-cover and climate change variables also vary among bird species. We then fit linear regression models to test whether the variation in regional colonization and extinction rate could be explained by mean body mass, migratory strategy, and habitat preference of birds. Overall, species traits were weakly correlated with heterogeneity in species occurrence changes. We provide empirical evidence showing that land-cover change, climate change, and the combination of multiple global change drivers can differentially explain observed species local colonization and extinction. © 2018 John Wiley & Sons Ltd.
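A minimal sketch of the kind of analysis described above, using simulated data and hypothetical column names: logistic regressions for local colonization are fitted against climate-change covariates, land-cover-change covariates, and their combination, and the candidate models are ranked by AIC.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "d_summer_temp": rng.normal(size=n),    # change in mean summer temperature
    "d_winter_precip": rng.normal(size=n),  # change in winter precipitation
    "d_landcover": rng.normal(size=n),      # physical land-cover change
})
logit_p = 0.8 * df["d_summer_temp"] - 0.5 * df["d_landcover"]
df["colonized"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

candidates = {
    "climate":   ["d_summer_temp", "d_winter_precip"],
    "landcover": ["d_landcover"],
    "combined":  ["d_summer_temp", "d_winter_precip", "d_landcover"],
}
for name, cols in candidates.items():
    fit = sm.Logit(df["colonized"], sm.add_constant(df[cols])).fit(disp=0)
    print(f"{name:9s} AIC = {fit.aic:7.1f}")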
Lamouroux, N.; Poff, N.L.; Angermeier, P.L.
2002-01-01
Community convergence across biogeographically distinct regions suggests the existence of key, repeated, evolutionary mechanisms relating community characteristics to the environment. However, convergence studies at the community level often involve only qualitative comparisons of the environment and may fail to identify which environmental variables drive community structure. We tested the hypothesis that the biological traits of fish communities on two continents (Europe and North America) are similarly related to environmental conditions. Specifically, from observations of individual fish made at the microhabitat scale (a few square meters) within French streams, we generated habitat preference models linking traits of fish species to local scale hydraulic conditions (Froude number). Using this information, we then predicted how hydraulics and geomorphology at the larger scale of stream reaches (several pool-riffle sequences) should quantitatively influence the trait composition of fish communities. Trait composition for fishes in stream reaches with low Froude number at low flow or high proportion of pools was predicted as nonbenthic, large, fecund, long-lived, nonstreamlined, and weak swimmers. We tested our predictions in contrasting stream reaches in France (n = 11) and Virginia, USA (n = 76), using analyses of covariance to quantify the relative influence of continent vs. physical habitat variables on fish traits. The reach-scale convergence analysis indicated that trait proportions in the communities differed between continents (up to 55% of the variance in each trait was explained by "continent"), partly due to distinct evolutionary histories. However, within continents, trait proportions were comparably related to the hydraulic and geomorphic variables (up to 54% of the variance within continents explained). In particular, a synthetic measure of fish traits in reaches was well explained (50% of its variance) by the Froude number independently of the continent. The effect of physical variables did not differ across continents for most traits, confirming our predictions qualitatively and quantitatively. Therefore, despite phylogenetic and historical differences between continents, fish communities of France and Virginia exhibit convergence in biological traits related to hydraulics and geomorphology. This convergence reflects morphological and behavioral adaptations to physical stress in streams. This study supports the existence of a habitat template for ecological strategies. Some key quantitative variables that define this habitat template can be identified by characterizing how individual organisms use their physical environment, and by using dimensionless physical variables that reveal common energetic properties in different systems. Overall, quantitative tests of community convergence are efficient tools to demonstrate that some community traits are predictable from environmental features.
Porosity and the ecology of icy satellites
NASA Technical Reports Server (NTRS)
Croft, Steven K.
1993-01-01
The case for a significant role for porosity in the structure and evolution of icy bodies in the Solar System has been difficult to establish. We present a relevant new data set and a series of structure models including a mechanical compression, not thermal creep, model for porosity that accounts satisfactorily for observed densities, moments of inertia, geologic activity, and sizes of tectonic features on icy satellites. Several types of observational data sets have been used to infer significant porosity, but until recently, alternative explanations have been preferred. Our first area of concern is the occurrence of cryovolcanism as a function of satellite radius; simple radiogenic heating models of icy satellites suggest minimum radii for melting and surface cryovolcanism to be 400 to 500 km, yet inferred melt deposits are seen on satellites half that size. One possible explanation is a deep, low conductivity regolith which lowers conductivity and raises internal temperatures, but other possibilities include tidal heating or crustal compositions of low conductivity. Our second area of concern is the occurrence and magnitude of tectonic strain; tectonic structures have been seen on icy satellites as small as Mimas and Proteus. The structures are almost exclusively extensional, with only a few possible compressional features, and inferred global strains are on the order of 1 percent expansion. Expansions of this order in small bodies like Mimas and prevention of late compressional tectonics due to formation of ice mantles in larger bodies like Rhea are attained only in structure models including low-conductivity, and thus possibly high porosity, crusts. Thirdly, inferred moments of inertia less than 0.4 in Mimas and Tethys can be explained by high-porosity crusts, but also by differentiation of a high density core. Finally, the relatively low densities of smaller satellites like Mimas and Miranda relative to larger neighbors can be explained by deep porosity, but also by bulk compositional differences. Recent work has strengthened the case for significant porosity. Halley's nucleus was found to have a density near 0.6 g/cu cm, and Janus and Epimetheus were proposed to have densities near 0.7 g/cu cm, densities almost certainly due to high porosity. The irregular-spherical shape transition of icy satellites was quantitatively explained by low conductivity regoliths. A creative structure/thermal history model for Mimas simultaneously accounts quantitatively for Mimas' low density and moment of inertia by invoking initial high porosity and subsequent compaction in the deep interior by thermal creep. The main problem with this promising model is that it predicts a reduction in Mimas' radius of approximately 7 percent, implying significant compressional failure and prevention of extensional tectonics, in contradiction to the observed extensional features and inferred 1 percent expansion in radius.
Pottecher, Pierre; Engelke, Klaus; Duchemin, Laure; Museyko, Oleg; Moser, Thomas; Mitton, David; Vicaut, Eric; Adams, Judith; Skalli, Wafa; Laredo, Jean Denis; Bousson, Valérie
2016-09-01
Purpose To evaluate the performance of three imaging methods (radiography, dual-energy x-ray absorptiometry [DXA], and quantitative computed tomography [CT]) and that of a numerical analysis with finite element modeling (FEM) in the prediction of failure load of the proximal femur and to identify the best densitometric or geometric predictors of hip failure load. Materials and Methods Institutional review board approval was obtained. A total of 40 pairs of excised cadaver femurs (mean patient age at time of death, 82 years ± 12 [standard deviation]) were examined with (a) radiography to measure geometric parameters (lengths, angles, and cortical thicknesses), (b) DXA (reference standard) to determine areal bone mineral densities (BMDs), (c) quantitative CT with dedicated three-dimensional analysis software to determine volumetric BMDs and geometric parameters (neck axis length, cortical thicknesses, volumes, and moments of inertia), and (d) quantitative CT-based FEM to calculate a numerical value of failure load. The 80 femurs were fractured via mechanical testing, with random assignment of one femur from each pair to the single-limb stance configuration (hereafter, stance configuration) and assignment of the paired femur to the sideways fall configuration (hereafter, side configuration). Descriptive statistics, univariate correlations, and stepwise regression models were obtained for each imaging method and for FEM to enable us to predict failure load in both configurations. Results Statistics reported are for stance and side configurations, respectively. For radiography, the strongest correlation with mechanical failure load was obtained by using a geometric parameter combined with a cortical thickness (r² = 0.66, P < .001; r² = 0.65, P < .001). For DXA, the strongest correlation with mechanical failure load was obtained by using total BMD (r² = 0.73, P < .001) and trochanteric BMD (r² = 0.80, P < .001). For quantitative CT, in both configurations, the best model combined volumetric BMD and a moment of inertia (r² = 0.78, P < .001; r² = 0.85, P < .001). FEM explained 87% (P < .001) and 83% (P < .001) of bone strength, respectively. By combining (a) radiography and DXA and (b) quantitative CT and DXA, correlations with mechanical failure load increased to 0.82 (P < .001) and 0.84 (P < .001), respectively, for radiography and DXA and to 0.80 (P < .001) and 0.86 (P < .001), respectively, for quantitative CT and DXA. Conclusion Quantitative CT-based FEM was the best method with which to predict the experimental failure load; however, combining quantitative CT and DXA yielded a performance as good as that attained with FEM. The quantitative CT-DXA combination may be easier to use in fracture prediction, provided standardized software is developed. These findings also highlight the major influence on femoral failure load, particularly in the trochanteric region, of a densitometric parameter combined with a geometric parameter. © RSNA, 2016. Online supplemental material is available for this article.
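The regression logic behind combining a densitometric with a geometric predictor can be sketched as follows; the variables, units, and synthetic data are assumptions and do not reproduce the study's measurements.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40  # femurs tested in one loading configuration (synthetic)

# Synthetic stand-ins: a densitometric predictor (e.g. trochanteric BMD) and a
# geometric predictor (e.g. a cross-sectional moment of inertia).
bmd = rng.normal(0.8, 0.15, n)            # g/cm^2, assumed scale
moi = rng.normal(12.0, 3.0, n)            # cm^4, assumed scale
failure_load = 2000 * bmd + 150 * moi + rng.normal(0, 400, n)  # N, synthetic

def r2(predictors):
    """Explained variance of an ordinary least-squares fit to failure load."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(failure_load, X).fit().rsquared

print(f"BMD alone     r^2 = {r2([bmd]):.2f}")
print(f"moment alone  r^2 = {r2([moi]):.2f}")
print(f"BMD + moment  r^2 = {r2([bmd, moi]):.2f}")  # combined model, as in the study
```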
Oxidative dissolution of silver nanoparticles: A new theoretical approach.
Adamczyk, Zbigniew; Oćwieja, Magdalena; Mrowiec, Halina; Walas, Stanisław; Lupa, Dawid
2016-05-01
A general model of an oxidative dissolution of silver particle suspensions was developed that rigorously considers the bulk and surface solute transport. A two-step surface reaction scheme was proposed that comprises the formation of the silver oxide phase by direct oxidation and the acidic dissolution of this phase leading to silver ion release. By considering this, a complete set of equations is formulated describing oxygen and silver ion transport to and from particles' surfaces. These equations are solved in some limiting cases of nanoparticle dissolution in dilute suspensions. The obtained kinetic equations were used for the interpretation of experimental data pertinent to the dissolution kinetics of citrate-stabilized silver nanoparticles. In these kinetic measurements, the role of pH and bulk suspension concentration was quantitatively evaluated using atomic absorption spectrometry (AAS). It was shown that the theoretical model adequately reflects the main features of the experimental results, especially the significant increase in the dissolution rate for lower pH. The presence of two kinetic regimes was also quantitatively explained in terms of the decrease in the coverage of the fast dissolving oxide layer. The overall silver dissolution rate constants characterizing these two regimes were determined. Copyright © 2015 Elsevier Inc. All rights reserved.
Island Rule, quantitative genetics and brain-body size evolution in Homo floresiensis.
Diniz-Filho, José Alexandre Felizola; Raia, Pasquale
2017-06-28
Colonization of islands often activates a complex chain of adaptive events that, over a relatively short evolutionary time, may drive strong shifts in body size, a pattern known as the Island Rule. It is arguably difficult to perform a direct analysis of the natural selection forces behind such a change in body size. Here, we used quantitative evolutionary genetic models, coupled with simulations and pattern-oriented modelling, to analyse the evolution of brain and body size in Homo floresiensis, a diminutive hominin species that appeared around 700 kya and survived up to relatively recent times (60-90 kya) on Flores Island, Indonesia. The hypothesis of neutral evolution was rejected in 97% of the simulations, and estimated selection gradients are within the range found in living natural populations. We showed that insularity may have triggered slightly different evolutionary trajectories for body and brain size, which means explaining the exceedingly small cranial volume of H. floresiensis requires additional selective forces acting on brain size alone. Our analyses also support previous conclusions that H. floresiensis may be most likely derived from an early Indonesian H. erectus, which is coherent with the currently accepted biogeographical scenario for Homo expansion out of Africa. © 2017 The Author(s).
Intelmann, Daniel; Demmer, Oliver; Desmer, Nina; Hofmann, Thomas
2009-11-25
The typical bitterness of fresh beer is well-known to decrease in intensity and to change in quality with increasing age. This phenomenon was recently shown to be caused by the conversion of bitter tasting trans-iso-alpha-acids into lingering and harsh bitter tasting tri- and tetracyclic degradation products such as tricyclocohumol, tricyclocohumene, isotricyclocohumene, tetracyclocohumol, and epitetracyclocohumol. Interestingly, the formation of these compounds was shown to be trans-specific and the corresponding cis-iso-alpha-acids were found to be comparatively stable. Application of 18O stable isotope labeling as well as quantitative model studies combined with LC-MS/MS experiments, followed by computer-based molecular dynamics simulations revealed for the first time a conclusive mechanism explaining the stereospecific transformation of trans-iso-alpha-acids into the tri- and tetracyclic degradation products. This transformation was proposed to be induced by a proton-catalyzed carbon/carbon bond formation between the carbonyl atom C(1') of the isohexenoyl moiety and the alkene carbon C(2'') of the isoprenyl moiety of the trans-iso-alpha-acids.
Zhang, Hao; Qin, Fang; Ye, Wei; Li, Zeng; Ma, Songyao; Xia, Yan; Jiang, Yi; Zhu, Jiayi; Li, Yixue; Zhang, Jian; Chen, Hai-Feng
2011-09-01
Diaryltriazine (DATA) and diarylpyrimidine (DAPY) are two categories of inhibitors with highly potent activity against wild type (wt) and four principal mutant types (L100I, K103N, Y181C and Y188L) of HIV-1 reverse transcriptase (RT). We previously revealed the drug-resistant mechanism of DATA analogue inhibitors with molecular dynamics simulation and three-dimensional quantitative structure-activity relationship (3D-QSAR) methods. In this work, we investigated the drug-resistant mechanism of DAPY analogue inhibitors. It was found that DAPY analogue inhibitors form more hydrogen bonds and hydrophobic contacts with the wild type and mutants of HIV-1 RT than DATA inhibitors. This could explain why DAPY analogue inhibitors are more potent than DATA inhibitors against the wild type and mutants of HIV-1 RT. Then, 3D-QSAR models were constructed for these inhibitors of the wild type and four principal mutant types of HIV-1 RT and evaluated with test set compounds. These combined models can be used to design new chemical entities and make quantitative predictions of the bioactivities of HIV-1 RT inhibitors before resorting to in vitro and in vivo experiments. © 2011 John Wiley & Sons A/S.
A microscopic solution to the magnetic detwinning mystery in EuFe2As2
NASA Astrophysics Data System (ADS)
Maiwald, J.; Mazin, I. I.; Nandi, S.; Xiao, Y.; Gegenwart, P.
One of the greatest recent advances in studying nematic phenomena in Fe-based superconductors was the mechanical detwinning of the 122-family compounds. Unfortunately, these techniques generate considerable stress in the investigated samples, which contaminates the results. Recently, we observed that a minuscule magnetic field of the order of 0.1 T irreversibly and persistently detwins EuFe2As2, opening an entirely new avenue for addressing nematicity. However, further development was hindered by the absence of a microscopic theory explaining this magnetic detwinning. In fact, Eu2+ has zero orbital moment and does not couple to the lattice, and its exchange coupling with the Fe sublattice cancels by symmetry. Moreover, further increase of the field to 1 T leads to a reorientation of Fe domains, while even larger fields, of the order of 10 T, reorient the domains once again. We will present a new microscopic model, based on a sizable biquadratic coupling between the Fe 3d and Eu 4f moments. This model quantitatively explains our old and new magnetization and neutron diffraction data, thus removing the veil of mystery and finally opening the door to full-scale research into magnetic detwinning and nematicity in Fe-based superconductors.
Formation of fold-and-thrust belts on Venus by thick-skinned deformation
NASA Astrophysics Data System (ADS)
Zuber, M. T.; Parmentier, E. M.
1995-10-01
On Venus, fold-and-thrust belts, which accommodate large-scale horizontal crustal convergence, are often located at the margins of kilometre-high plateaux. Such mountain belts, typically hundreds of kilometres long and tens to hundreds of kilometres wide, surround the Lakshmi Planum plateau in the Ishtar Terra highland. In explaining the origin of fold-and-thrust belts, it is important to understand the relative importance of thick-skinned deformation of the whole lithosphere and thin-skinned, large-scale overthrusting of near-surface layers. Previous quantitative analyses of mountain belts on Venus have been restricted to thin-skinned models, but this style of deformation does not account for the pronounced topographic highs at the plateau edge. We propose that the long-wavelength topography of these venusian fold-and-thrust belts is more readily explained by horizontal shortening of a laterally heterogeneous lithosphere. In this thick-skinned model, deformation within the mechanically strong outer layer of Venus controls mountain building. Our results suggest that lateral variations in either the thermal or mechanical structure of the interior provide a mechanism for focusing deformation due to convergent, global-scale forces on Venus.
The physical state of finely dispersed soil-like systems with drilling sludge as an example
NASA Astrophysics Data System (ADS)
Smagin, A. V.; Kol'Tsov, I. N.; Pepelov, I. L.; Kirichenko, A. V.; Sadovnikova, N. B.; Kinzhaev, R. R.
2011-02-01
The physical state and its dynamics were studied at the quantitative level for drilling sludge (finely dispersed waste of the oil industry). Using original methodological approaches, the main hydrophysical and technological properties of sludge samples were assessed for the first time, including the water retention curve, the specific surface, the water conductivity, the electrical conductivity, the porosity dynamics during shrinkage, the water yield, etc., which are used in the current models of water transfer and the behavior of these soil-like objects under real thermodynamic conditions. The technologically unfavorable phenomenon of the spontaneous swelling of sludge during the storage of drilling waste was theoretically explained. The water regime of the homogeneous 0.5-m thick drilling sludge layer under the free gravity outflow and permanent evaporation of water from the surface was analyzed using the HYDRUS-1D model. The high water retention capacity and the low water conductivity and water yield of sludge do not allow their drying to the three-phase state (with the entry of air) acceptable for terrestrial plants under humid climatic conditions, which explains the spontaneous transformation of sludge pits to only hydromorphic ecosystems.
The Early Spectra of Eta Carinae 1892 to 1941 and the Onset of its High Excitation Emission Spectrum
NASA Astrophysics Data System (ADS)
Humphreys, Roberta M.; Davidson, Kris; Koppelman, Michael
2008-04-01
The observed behavior of η Car from 1860 to 1940 has not been considered in most recent accounts, nor has it been explained in any quantitative model. We have used modern digital processing techniques to examine Harvard objective-prism spectra made from 1892 to 1941. Relatively high excitation He I λ4471 and [Fe III] λ4658 emission, conspicuous today, were weak and perhaps absent throughout those years. Feast et al. noted this qualitative fact for other pre-1920 spectra, but we quantify it and extend it to a time only three years before Gaviola's first observations of the high-excitation features. Evidently the supply of helium-ionizing photons (λ < 504 Å) grew rapidly between 1941 and 1944. The apparent scarcity of such far-UV radiation before 1944 is difficult to explain in models that employ a hot massive secondary star, because no feasible dense wind or obscuration by dust would have hidden the photoionization caused by the proposed companion during most of its orbital period. We also discuss the qualitative near-constancy of the spectrum from 1900 to 1940, and η Car's photometric and spectroscopic transition between 1940 and 1953.
Amplitude quantification in contact-resonance-based voltage-modulated force spectroscopy
NASA Astrophysics Data System (ADS)
Bradler, Stephan; Schirmeisen, André; Roling, Bernhard
2017-08-01
Voltage-modulated force spectroscopy techniques, such as electrochemical strain microscopy and piezoresponse force microscopy, are powerful tools for characterizing electromechanical properties on the nanoscale. In order to correctly interpret the results, it is important to quantify the sample motion and to distinguish it from the electrostatic excitation of the cantilever resonance. Here, we use a detailed model to describe the cantilever dynamics in contact resonance measurements, and we compare the results with experimental values. We show how to estimate model parameters from experimental values and explain how they influence the sensitivity of the cantilever with respect to the excitation. We explain the origin of different crosstalk effects and how to identify them. We further show that different contributions to the measured signal can be distinguished by analyzing the correlation between the resonance frequency and the measured amplitude. We demonstrate this technique on two representative test samples: (i) ferroelectric periodically poled lithium niobate, and (ii) the Na+-ion conducting soda-lime float glass. We extend our analysis to higher cantilever bending modes and show that non-local electrostatic excitation is strongly reduced in higher bending modes due to the nodes in the lever shape. Based on our analyses, we present practical guidelines for quantitative imaging.
Ti diffusion in ion prebombarded MgO(100). I. A model for quantitative analysis
NASA Astrophysics Data System (ADS)
Lu, M.; Lupu, C.; Styve, V. J.; Lee, S. M.; Rabalais, J. W.
2002-01-01
Enhancement of Ti diffusion in MgO(100) prebombarded with 7 keV Ar+ has been observed. Diffusion was induced by annealing to 1000 °C following the prebombardment and Ti evaporation. Such a sample geometry and experimental procedure avoids a continuous supply of the freely mobile defects that ion irradiation would otherwise introduce during annealing, so that diffusion proceeds under a non-steady-state condition. Diffusion penetration profiles were obtained by using secondary ion mass spectrometry depth profiling techniques. A model that includes a depth-dependent diffusion coefficient was proposed, which successfully explains the observed non-steady-state radiation enhanced diffusion. The diffusion coefficients are of the order of 10⁻²⁰ m²/s and are enhanced due to the defect structure induced by the Ar+ prebombardment.
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.; Kurien, James; Clancy, Daniel (Technical Monitor)
2001-01-01
We present some diagnosis and control problems that are difficult to solve with discrete or purely qualitative techniques. We analyze the nature of the problems, classify them and explain why they are frequently encountered in systems with closed loop control. This paper illustrates the problem with several examples drawn from industrial and aerospace applications and presents detailed information on one important application: In-Situ Resource Utilization (ISRU) on Mars. The model for an ISRU plant is analyzed showing where qualitative techniques are inadequate to identify certain failure modes and to maintain control of the system in degraded environments. We show why the solution to the problem will result in significantly more robust and reliable control systems. Finally, we illustrate requirements for a solution to the problem by means of examples.
Toward a Better Quantitative Understanding of Polar Stratospheric Ozone Loss
NASA Technical Reports Server (NTRS)
Frieler, K.; Rex, M.; Salawitch, R. J.; Canty, T.; Streibel, M.; Stimpfle, R. M.; Pfeilsticker, K.; Dorf, M.; Weisenstein, D. K.; Godin-Beekmann, S.
2006-01-01
Previous studies have shown that observed large O3 loss rates in cold Arctic Januaries cannot be explained with current understanding of the loss processes, recommended reaction kinetics, and standard assumptions about total stratospheric chlorine and bromine. Studies based on data collected during recent field campaigns suggest faster rates of photolysis and thermal decomposition of ClOOCl and higher stratospheric bromine concentrations than previously assumed. We show that a model accounting for these kinetic changes and higher levels of BrO can largely resolve the January Arctic O3 loss problem and closely reproduces observed Arctic O3 loss while being consistent with observed levels of ClO and ClOOCl. The model also suggests that bromine catalyzed O3 loss is more important relative to chlorine catalyzed loss than previously thought.
The morpho-mechanical basis of ammonite form.
Moulton, D E; Goriely, A; Chirat, R
2015-01-07
Ammonites are a group of extinct cephalopods that garner tremendous interest over a range of scientific fields and have been a paradigm for biochronology, palaeobiology, and evolutionary theories. Their defining feature is the spiral geometry and ribbing pattern through which palaeontologists infer phylogenetic relationships and evolutionary trends. Here, we develop a morpho-mechanical model for ammonite morphogenesis. While a wealth of observations have been compiled on ammonite form, and several functional interpretations may be found, this study presents the first quantitative model to explain rib formation. Our approach, based on fundamental principles of growth and mechanics, gives a natural explanation for the morphogenesis and diversity of ribs, uncovers intrinsic laws linking ribbing and shell geometry, and provides new opportunities to interpret ammonites' and other mollusks' evolution. Copyright © 2014 Elsevier Ltd. All rights reserved.
Resistant Behaviors by People with Alzheimer Dementia and Traumatic Brain Injury
2017-09-01
participants has completed the information for the research team to have collected quantitative data on caregiver burden and family quality of life for...those adverse behaviors. The combined qualitative, quantitative, and economic analyses will also provide pertinent information regarding the general...other achievements. Include a discussion of stated goals not met. Description shall include pertinent data and graphs in sufficient detail to explain
Modelling pollination services across agricultural landscapes
Lonsdorf, Eric; Kremen, Claire; Ricketts, Taylor; Winfree, Rachael; Williams, Neal; Greenleaf, Sarah
2009-01-01
Background and Aims Crop pollination by bees and other animals is an essential ecosystem service. Ensuring the maintenance of the service requires a full understanding of the contributions of landscape elements to pollinator populations and crop pollination. Here, the first quantitative model that predicts pollinator abundance on a landscape is described and tested. Methods Using information on pollinator nesting resources, floral resources and foraging distances, the model predicts the relative abundance of pollinators within nesting habitats. From these nesting areas, it then predicts relative abundances of pollinators on the farms requiring pollination services. Model outputs are compared with data from coffee in Costa Rica, watermelon and sunflower in California and watermelon in New Jersey–Pennsylvania (NJPA). Key Results Results from Costa Rica and California, comparing field estimates of pollinator abundance, richness or services with model estimates, are encouraging, explaining up to 80% of variance among farms. However, the model did not predict observed pollinator abundances on NJPA, so continued model improvement and testing are necessary. The inability of the model to predict pollinator abundances in the NJPA landscape may be due to not accounting for fine-scale floral and nesting resources within the landscapes surrounding farms, rather than the logic of our model. Conclusions The importance of fine-scale resources for pollinator service delivery was supported by sensitivity analyses indicating that the model's predictions depend largely on estimates of nesting and floral resources within crops. Despite the need for more research at the finer scale, the approach fills an important gap by providing a quantitative and mechanistic model from which to evaluate policy decisions and develop land-use plans that promote pollination conservation and service delivery. PMID:19324897
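The model's two-step logic (distance-weighted floral resources set the pollinator score of nesting cells, which is then propagated back to farm cells with the same foraging-distance kernel) can be sketched as below; the toy raster, the exponential kernel, and all parameter values are assumptions, not the published parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy 50 x 50 raster landscape; cell values in [0, 1] are habitat quality.
size, cell = 50, 100.0                      # cells and cell width in metres (assumed)
floral = rng.random((size, size))           # floral resource score per cell
nesting = rng.random((size, size))          # nesting suitability per cell
alpha = 500.0                               # typical foraging distance in metres (assumed)

yy, xx = np.mgrid[0:size, 0:size]

def distance_weights(i, j):
    """Exponential distance-decay kernel centred on cell (i, j), normalised to sum to 1."""
    d = np.hypot(yy - i, xx - j) * cell
    w = np.exp(-d / alpha)
    return w / w.sum()

# Step 1: pollinator score of each nest cell = nesting suitability times
# distance-weighted floral resources within foraging range.
nest_score = np.empty_like(nesting)
for i in range(size):
    for j in range(size):
        nest_score[i, j] = nesting[i, j] * (distance_weights(i, j) * floral).sum()

# Step 2: relative pollinator abundance delivered to a farm cell = distance-weighted
# sum of the surrounding nest scores.
farm = (10, 25)                             # an arbitrary farm location
visitation = (distance_weights(*farm) * nest_score).sum()
print(f"relative pollinator abundance at farm cell {farm}: {visitation:.4f}")
```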
Fukunaga, Tsukasa; Iwasaki, Wataru
2017-01-19
With rapid advances in genome sequencing and editing technologies, systematic and quantitative analysis of animal behavior is expected to be another key to facilitating data-driven behavioral genetics. The nematode Caenorhabditis elegans is a model organism in this field. Several video-tracking systems are available for automatically recording behavioral data for the nematode, but computational methods for analyzing these data are still under development. In this study, we applied the Gaussian mixture model-based binning method to time-series postural data for 322 C. elegans strains. We found that, as expected, the occurrence patterns of the postural states and the transition patterns among these states are related, and this relationship must be taken into account to identify strains with atypical behaviors that differ from those of the wild type. Based on this observation, we identified several strains that exhibit atypical transition patterns that cannot be fully explained by their occurrence patterns of postural states. Surprisingly, we found that two simple factors, overall acceleration of postural movement and elimination of inactivity periods, explained the behavioral characteristics of strains with very atypical transition patterns; therefore, computational analysis of animal behavior must be accompanied by evaluation of the effects of these simple factors. Finally, we found that the npr-1 and npr-3 mutants have similar behavioral patterns that were not predictable by sequence homology, proving that our data-driven approach can reveal the functions of genes that have not yet been characterized. We propose that elimination of inactivity periods and overall acceleration of postural change speed can explain behavioral phenotypes of strains with very atypical postural transition patterns. Our methods and results constitute guidelines for effectively finding strains that show "truly" interesting behaviors and systematically uncovering novel gene functions by bioimage-informatic approaches.
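A hedged sketch of the binning-and-transition analysis on synthetic data (the feature vectors, state count, and comparison statistic below are assumptions, not the authors' pipeline): frames are assigned to postural states with a Gaussian mixture model, and the empirical transition matrix is compared with what the occurrence frequencies alone would predict.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic stand-in for time-series postural data: each row is one video frame
# described by a few shape features (e.g. eigenworm amplitudes).
frames = rng.normal(size=(5000, 4))

# Bin frames into discrete postural states with a Gaussian mixture model.
n_states = 8
gmm = GaussianMixture(n_components=n_states, random_state=0).fit(frames)
states = gmm.predict(frames)

# Occurrence pattern: how often each postural state is visited.
occurrence = np.bincount(states, minlength=n_states) / len(states)

# Transition pattern: empirical matrix of P(next state | current state).
transitions = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    transitions[a, b] += 1
transitions /= transitions.sum(axis=1, keepdims=True)

# If transitions carried no extra information, each row would simply equal the
# occurrence vector; the deviation below is the kind of signal used to flag
# strains with atypical transition patterns.
expected = np.tile(occurrence, (n_states, 1))
print(f"mean |observed - expected| transition probability: "
      f"{np.abs(transitions - expected).mean():.4f}")
```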
Four Decades of β-Lactam Antibiotic Pharmacokinetics in Cystic Fibrosis.
Bulitta, Jürgen B; Jiao, Yuanyuan; Drescher, Stefanie K; Oliver, Antonio; Louie, Arnold; Moya, Bartolome; Tao, Xun; Wittau, Mathias; Tsuji, Brian T; Zavascki, Alexandre P; Shin, Beom Soo; Drusano, George L; Sörgel, Fritz; Landersdorfer, Cornelia B
2018-06-23
The pharmacokinetics (PK) of β-lactam antibiotics in cystic fibrosis (CF) patients has been compared with that in healthy volunteers for over four decades; however, no quantitative models exist that explain the PK differences between CF patients and healthy volunteers in older and newer studies. Our aims were to critically evaluate these studies and explain the PK differences between CF patients and healthy volunteers. We reviewed all 16 studies that compared the PK of β-lactams between CF patients and healthy volunteers within the same study. Analysis of covariance (ANCOVA) models were developed. In four early studies that compared adolescent, lean CF patients with adult healthy volunteers, clearance (CL) in CF divided by that in healthy volunteers was 1.72 ± 0.90 (average ± standard deviation); in four additional studies comparing age-matched (primarily adult) CF patients with healthy volunteers, this ratio was 1.46 ± 0.16. The CL ratio was 1.15 ± 0.11 in all eight studies that compared CF patients and healthy volunteers who were matched in age, body size and body composition, or that employed allometric scaling by lean body mass (LBM). Volume of distribution was similar between subject groups after scaling by body size. For highly protein-bound β-lactams, the unbound fraction was up to 2.07-fold higher in older studies that compared presumably sicker CF patients with healthy volunteers. These protein-binding differences explained over half of the variance for the CL ratio (p < 0.0001, ANCOVA). Body size, body composition and lower protein binding in presumably sicker CF patients explained the PK alterations in this population. Dosing CF patients according to LBM seems suitable to achieve antibiotic target exposures.
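The ANCOVA idea, i.e. regressing the CF-to-healthy clearance ratio on a matching factor plus a protein-binding covariate, can be sketched on synthetic study-level data as below; all numbers, names, and coefficients are assumptions, not the review's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n_studies = 16  # one row per CF-vs-healthy comparison study (synthetic)

matched = rng.integers(0, 2, n_studies)                              # 1 = matched for age/size/composition
log_fu_ratio = rng.normal(0, 0.2, n_studies) + 0.3 * (1 - matched)   # log unbound-fraction ratio (covariate)
log_cl_ratio = 0.1 + 0.9 * log_fu_ratio + 0.15 * (1 - matched) + rng.normal(0, 0.05, n_studies)

df = pd.DataFrame({
    "log_cl_ratio": log_cl_ratio,   # log(CL_CF / CL_healthy)
    "log_fu_ratio": log_fu_ratio,   # log(fu_CF / fu_healthy)
    "matched": matched,             # categorical matching factor
})

# ANCOVA: categorical factor (matching) plus continuous covariate (protein binding).
fit = smf.ols("log_cl_ratio ~ C(matched) + log_fu_ratio", data=df).fit()
print(fit.summary().tables[1])
print(f"\nvariance explained (R^2): {fit.rsquared:.2f}")
```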
Comment on "Infants' perseverative search errors are induced by pragmatic misinterpretation".
Spencer, John P; Dineva, Evelina; Smith, Linda B
2009-09-25
Topál et al. (Reports, 26 September 2008, p. 1831) proposed that infants' perseverative search errors can be explained by ostensive cues from the experimenter. We use the dynamic field theory to test the proposal that infants encode locations more weakly when social cues are present. Quantitative simulations show that this account explains infants' performance without recourse to the theory of natural pedagogy.
The influence of landscape features on road development in a loess region, China.
Bi, Xiaoli; Wang, Hui; Zhou, Rui
2011-10-01
Many ecologists focus on the effects of roads on landscapes, yet few consider how landscapes affect road systems. In this study, therefore, we quantitatively evaluated how land cover, topography, and building density affected the length density, node density, spatial pattern, and location of roads in Dongzhi Yuan, a typical loess region in China. Landscape factors and roads were mapped using images from the SPOT satellite (Système Probatoire d'Observation de la Terre, initiated by the French space agency) and a digital elevation model (DEM). Detrended canonical correspondence analysis (DCCA), a useful ordination technique to explain species-environment relations in community ecology, was applied to evaluate the ways in which landscapes may influence roads. The results showed that both farmland area and building density were positively correlated with road variables, whereas gully density and the coefficient of variation (CV of DEM) showed negative correlations. The CV of DEM, farmland area, grassland area, and building density explained variation in node density, length density, and the spatial pattern of roads, whereas gully density and building density explained variation in variables representing road location. In addition, node density, rather than length density, was the primary road variable affected by landscape variables. The results showed that the DCCA was effective in explaining road-landscape relations. Understanding these relations can provide information for landscape managers and transportation planners.
Fundamentals and techniques of nonimaging optics for solar energy concentration
NASA Astrophysics Data System (ADS)
Winston, R.; Gallagher, J. J.
1980-05-01
The properties of a variety of new and previously known nonimaging optical configurations were investigated. A thermodynamic model was developed that quantitatively explains the enhancement of the effective absorptance of gray-body receivers through cavity effects. The classic method of Liu and Jordan, which allows one to predict diffuse sunlight levels through correlation with the total and direct fractions, was revised, updated, and applied to predict the performance of nonimaging solar collectors. The conceptual design for an optimized solar collector, which integrates the techniques of nonimaging concentration with evacuated tube collector technology, was carried out and is presently the basis for a separately funded hardware development project.
Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.
2003-01-01
Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.
A Model of Human Cooperation in Social Dilemmas
Capraro, Valerio
2013-01-01
Social dilemmas are situations in which collective interests are at odds with private interests: pollution, depletion of natural resources, and intergroup conflicts are at their core social dilemmas. Because of their multidisciplinarity and their importance, social dilemmas have been studied by economists, biologists, psychologists, sociologists, and political scientists. These studies typically explain the tendency to cooperate by dividing people into proself and prosocial types, or by appealing to forms of external control or, in iterated social dilemmas, to long-term strategies. But recent experiments have shown that cooperation is possible even in one-shot social dilemmas without forms of external control, and the rate of cooperation typically depends on the payoffs. This makes a predictive division between proself and prosocial people impossible and proves that people have an attitude to cooperation by nature. The key innovation of this article is in fact to postulate that humans have an attitude to cooperation by nature and consequently do not act a priori as single agents, as assumed by standard economic models, but forecast how a social dilemma would evolve if they formed coalitions and then act according to their most optimistic forecast. Formalizing this idea, we propose the first predictive model of human cooperation able to organize a number of different experimental findings that are not explained by the standard model. We also show that the model makes satisfactorily accurate quantitative predictions of population average behavior in one-shot social dilemmas. PMID:24009679
Probabilistic prediction of barrier-island response to hurricanes
Plant, Nathaniel G.; Stockdon, Hilary F.
2012-01-01
Prediction of barrier-island response to hurricane attack is important for assessing the vulnerability of communities, infrastructure, habitat, and recreational assets to the impacts of storm surge, waves, and erosion. We have demonstrated that a conceptual model intended to make qualitative predictions of the type of beach response to storms (e.g., beach erosion, dune erosion, dune overwash, inundation) can be reformulated in a Bayesian network to make quantitative predictions of the morphologic response. In an application of this approach at Santa Rosa Island, FL, predicted dune-crest elevation changes in response to Hurricane Ivan explained about 20% to 30% of the observed variance. An extended Bayesian network based on the original conceptual model, which included dune elevations, storm surge, and swash, but with the addition of beach and dune widths as input variables, showed improved skill compared to the original model, explaining 70% of dune elevation change variance and about 60% of dune and shoreline position change variance. This probabilistic approach accurately represented prediction uncertainty (measured with the log likelihood ratio), and it outperformed the baseline prediction (i.e., the prior distribution based on the observations). Finally, sensitivity studies demonstrated that degrading the resolution of the Bayesian network or removing data from the calibration process reduced the skill of the predictions by 30% to 40%. The reduction in skill did not change conclusions regarding the relative importance of the input variables, and the extended model's skill always outperformed the original model.
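A minimal stand-in for the Bayesian-network formulation (a plain conditional-probability-table estimate on synthetic, discretized data rather than the published network) illustrates how probabilistic predictions of dune-crest change can be scored against the prior with a log-likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000  # synthetic observation points along the island

# Synthetic inputs and response (all standardized; real units would differ).
surge_minus_dune = rng.normal(size=n)          # storm surge relative to dune crest
beach_width = rng.normal(size=n)               # pre-storm beach width
dune_change = -0.8 * surge_minus_dune + 0.3 * beach_width + rng.normal(0, 0.5, n)

def to_bins(x, n_bins=4):
    """Discretize a variable into equal-frequency bins (a common BN preprocessing step)."""
    return np.searchsorted(np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1]), x)

s, b, y = to_bins(surge_minus_dune), to_bins(beach_width), to_bins(dune_change)
n_bins = 4

# Conditional probability table P(dune change bin | surge bin, width bin), with
# Laplace smoothing so unobserved combinations keep non-zero probability.
cpt = np.ones((n_bins, n_bins, n_bins))
for si, bi, yi in zip(s, b, y):
    cpt[si, bi, yi] += 1
cpt /= cpt.sum(axis=2, keepdims=True)

prior = np.bincount(y, minlength=n_bins) / len(y)   # baseline (prior) prediction

# Log-likelihood ratio skill: how much more probable the observations are under
# the conditional model than under the prior alone.
llr = np.mean(np.log(cpt[s, b, y]) - np.log(prior[y]))
print(f"mean log-likelihood ratio vs prior: {llr:.3f} (positive = skill)")
```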
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
Particle-scale CO2 adsorption kinetics modeling considering three reaction mechanisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suh, Dong-Myung; Sun, Xin
2013-09-01
In the presence of water (H2O), dry and wet adsorptions of carbon dioxide (CO2) and physical adsorption of H2O happen concurrently in a sorbent particle. The three reactions depend on each other and have a complicated, but important, effect on CO2 capturing via a solid sorbent. In this study, transport phenomena in the sorbent were modeled, including the three reactions, and a numerical solving procedure for the model was also explained. The reaction variable distribution in the sorbent and their average values were calculated, and simulation results were compared with experimental data to validate the proposed model. Some differences, caused by thermodynamic parameters, were observed between them. However, the developed model reasonably simulated the adsorption behaviors of a sorbent. The weight gained by each adsorbed species, CO2 and H2O, is difficult to determine experimentally. It is known that more CO2 can be captured in the presence of water. Still, it is not yet known quantitatively how much more CO2 the sorbent can capture, nor is it known how much dry and wet adsorptions separately account for CO2 capture. This study addresses those questions by modeling CO2 adsorption in a particle and simulating the adsorption process using the model. As the adsorption temperature was varied over several values, the adsorbed amount of each species was calculated. The captured CO2 in the sorbent particle was compared quantitatively between dry and wet conditions. As the adsorption temperature decreased, wet adsorption increased. However, dry adsorption was reduced.
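A loose, heavily simplified sketch of three mutually dependent adsorption rates (this is not the particle-scale transport model of the study; the rate constants, capacities, and the gating of the wet pathway by adsorbed water are assumptions) shows how the dry, wet, and H2O contributions can be tracked separately.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed first-order rate constants (1/s) and site capacities (mol/kg sorbent).
k_dry, k_wet, k_h2o = 2e-3, 5e-3, 4e-3
q_max_co2, q_max_h2o = 1.5, 2.0

def rates(t, q):
    """q = [dry-adsorbed CO2, wet-adsorbed CO2, physisorbed H2O]."""
    q_dry, q_wet, q_h2o = q
    free_co2_sites = max(q_max_co2 - q_dry - q_wet, 0.0)
    dq_dry = k_dry * free_co2_sites                        # dry CO2 adsorption
    dq_wet = k_wet * free_co2_sites * (q_h2o / q_max_h2o)  # wet pathway needs adsorbed water
    dq_h2o = k_h2o * (q_max_h2o - q_h2o)                   # H2O physisorption
    return [dq_dry, dq_wet, dq_h2o]

sol = solve_ivp(rates, t_span=(0, 1800), y0=[0.0, 0.0, 0.0])
q_dry, q_wet, q_h2o = sol.y[:, -1]
print(f"after 30 min: dry CO2 = {q_dry:.2f}, wet CO2 = {q_wet:.2f}, "
      f"H2O = {q_h2o:.2f} mol/kg (CO2 total {q_dry + q_wet:.2f})")
```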
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei
In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e. the ratio of total energy use over total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (i.e. product type, feedstock type, weather, etc.). Furthermore, the energy intensity method assumes that the facilities’ base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulties understanding why regression models are statistically better than utilizing the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper will explain why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper will present quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper will also document scenarios where regression models do not have significant relevance over the energy intensity method.
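The central point, that a regression baseline with a non-zero intercept and a weather term tracks performance where the simple intensity ratio misleads, can be sketched with synthetic plant data (all coefficients and units below are assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
months = 36

# Synthetic plant data: energy use has a fixed base load plus terms that scale
# with production and with heating demand (weather), violating the assumption
# behind the intensity ratio that energy is proportional to production.
production = rng.uniform(800, 1200, months)            # units/month
hdd = rng.uniform(100, 700, months)                    # heating degree days
energy = 5000 + 3.0 * production + 2.0 * hdd + rng.normal(0, 150, months)  # MMBtu

# Classic energy-intensity metric: ratio of energy to production.
intensity = energy / production
print(f"energy intensity varies {intensity.min():.1f}-{intensity.max():.1f} MMBtu/unit "
      "even though the plant's efficiency never changed")

# Regression baseline model: the intercept captures the base load, the slopes
# capture production- and weather-driven energy, so residuals isolate real changes.
X = sm.add_constant(np.column_stack([production, hdd]))
fit = sm.OLS(energy, X).fit()
print(f"estimated base load (intercept): {fit.params[0]:.0f} MMBtu/month")
print(f"R^2 of regression baseline:      {fit.rsquared:.3f}")
```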
Empirical evolution of a framework that supports the development of nursing competence.
Lima, Sally; Jordan, Helen L; Kinney, Sharon; Hamilton, Bridget; Newall, Fiona
2016-04-01
The aim of this study was to refine a framework for developing competence, for graduate nurses new to paediatric nursing in a transition programme. A competent healthcare workforce is essential to ensuring quality care. There are strong professional and societal expectations that nurses will be competent. Despite the importance of the topic, the most effective means through which competence develops remains elusive. A qualitative explanatory method was applied as part of a mixed methods design. Twenty-one graduate nurses taking part in a 12-month transition programme participated in semi-structured interviews between October and November 2013. Interviews were informed by data analysed during a preceding quantitative phase. Participants were provided with their quantitative results and a preliminary model for development of competence and asked to explain why their competence had developed as it had. The findings from the interviews, considered in combination with the preliminary model and quantitative results, enabled conceptualization of a Framework for Developing Competence. Key elements include: the individual in the team, identification and interpretation of standards, asking questions, guidance and engaging in endeavours, all taking place in a particular context. Much time and resources are directed at supporting the development of nursing competence, with little evidence as to the most effective means. This study led to conceptualization of a theory thought to underpin the development of nursing competence, particularly in a paediatric setting for graduate nurses. Future research should be directed at investigating the framework in other settings. © 2015 John Wiley & Sons Ltd.
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
Spatiotemporal evolution of erythema migrans, the hallmark rash of Lyme disease.
Vig, Dhruv K; Wolgemuth, Charles W
2014-02-04
To elucidate pathogen-host interactions during early Lyme disease, we developed a mathematical model that explains the spatiotemporal dynamics of the characteristic first sign of the disease, a large (≥5-cm diameter) rash, known as an erythema migrans. The model predicts that the bacterial replication and dissemination rates are the primary factors controlling the speed that the rash spreads, whereas the rate that active macrophages are cleared from the dermis is the principle determinant of rash morphology. In addition, the model supports the clinical observations that antibiotic treatment quickly clears spirochetes from the dermis and that the rash appearance is not indicative of the efficacy of the treatment. The quantitative agreement between our results and clinical data suggest that this model could be used to develop more efficient drug treatments and may form a basis for modeling pathogen-host interactions in other emerging infectious diseases. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Minimalist model of ice microphysics in mixed-phase stratiform clouds
NASA Astrophysics Data System (ADS)
Yang, Fan; Ovchinnikov, Mikhail; Shaw, Raymond A.
2013-07-01
The question of whether persistent ice crystal precipitation from supercooled layer clouds can be explained by time-dependent, stochastic ice nucleation is explored using an approximate, analytical model and a large-eddy simulation (LES) cloud model. The updraft velocity in the cloud defines an accumulation zone, where small ice particles cannot fall out until they are large enough, which will increase the residence time of ice particles in the cloud. Ice particles reach a quasi-steady state between growth by vapor deposition and fall speed at cloud base. The analytical model predicts that ice water content (wi) has a 2.5 power-law relationship with ice number concentration (ni). wi and ni from a LES cloud model with stochastic ice nucleation confirm the 2.5 power-law relationship, and initial indications of the scaling law are observed in data from the Indirect and Semi-Direct Aerosol Campaign. The prefactor of the power law is proportional to the ice nucleation rate and therefore provides a quantitative link to observations of ice microphysical properties.
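The predicted scaling wi ∝ ni^2.5 can be checked directly on model output by fitting a slope in log-log space; the sketch below uses synthetic data in place of LES output, with an arbitrary prefactor.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for LES output: ice number concentration and an ice water
# content generated to follow the predicted 2.5 power law with scatter.
ni = 10 ** rng.uniform(-1, 1, 200)                       # per litre
wi = 0.01 * ni ** 2.5 * np.exp(rng.normal(0, 0.1, 200))  # g m^-3, arbitrary prefactor

# Fit log10(wi) = slope * log10(ni) + intercept; the slope estimates the exponent
# and the intercept tracks the prefactor, which the model ties to nucleation rate.
slope, intercept = np.polyfit(np.log10(ni), np.log10(wi), 1)
print(f"fitted exponent : {slope:.2f}  (model prediction: 2.5)")
print(f"fitted prefactor: {10**intercept:.3f}")
```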
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as candidate reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed; the objective function is the sum of distances from the optimal point x to the lines connecting corresponding points between In. and Ex. Using this nonlinear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
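The optimization step described above can be sketched as follows (synthetic landmark coordinates and a Nelder-Mead solver are assumptions; the study's actual implementation is not specified): find the point minimizing the summed distances to the lines through corresponding inspiration/expiration landmarks.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Synthetic landmark pairs: 20 subpleural points at expiration and the same
# points displaced radially away from a "focus" at inspiration (balloon-like).
focus_true = np.array([150.0, 140.0, 120.0])          # mm, assumed
p_ex = focus_true + rng.normal(0, 60, size=(20, 3))   # expiration positions
p_in = focus_true + 1.3 * (p_ex - focus_true)         # inspiration positions

def sum_of_line_distances(x):
    """Sum of distances from point x to the lines through corresponding landmarks."""
    u = p_in - p_ex
    u = u / np.linalg.norm(u, axis=1, keepdims=True)   # unit direction of each line
    v = x - p_ex
    perp = v - (v * u).sum(axis=1, keepdims=True) * u  # component of v orthogonal to each line
    return np.linalg.norm(perp, axis=1).sum()

res = minimize(sum_of_line_distances, x0=p_ex.mean(axis=0), method="Nelder-Mead")
print(f"estimated expansion focus: {np.round(res.x, 1)} mm")
print(f"true focus (synthetic)   : {focus_true} mm")
```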
Spatial distribution of block falls using volumetric GIS-decision-tree models
NASA Astrophysics Data System (ADS)
Abdallah, C.
2010-10-01
Block falls are considered a significant aspect of surficial instability contributing to land and socio-economic losses through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block fall measurements. Once pruned, it explained 50% of the variability in block fall volumes using just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block-fall maps with classes ranging from nil (no block falls) to more than 4000 m³. The two maps match fairly well, with a coincidence value of 45%; however, both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple, and may also be applied to other areas (i.e. the choice of un-pruned or pruned model is related to the availability of terrain parameters in a given area).
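A hedged sketch of the tree-modeling step on synthetic data (the parameter codings, the pruning strength, and the volume model below are assumptions, not the study's data or software): an un-pruned regression tree is compared with a pruned one that relies on fewer splits.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(9)
n = 600  # mapped terrain units (synthetic)

# Synthetic terrain parameters standing in for those listed in the abstract.
X = np.column_stack([
    rng.integers(0, 5, n),        # lithology class (coded)
    rng.uniform(0, 45, n),        # slope gradient (degrees)
    rng.integers(0, 4, n),        # soil type (coded)
    rng.integers(0, 6, n),        # land cover/use class (coded)
])
volume = 500 * X[:, 1] + 2000 * (X[:, 0] == 3) * X[:, 1] + rng.normal(0, 3000, n)
volume = np.clip(volume, 0, None)            # block-fall volume, m^3 (synthetic)

# Un-pruned tree: fits the training data closely (analogous to the 86% model).
full_tree = DecisionTreeRegressor(random_state=0).fit(X, volume)

# Pruned tree: cost-complexity pruning trades some fit for a simpler, more
# transferable model with fewer splits (analogous to the 50% model).
pruned_tree = DecisionTreeRegressor(random_state=0, ccp_alpha=5e5).fit(X, volume)

print(f"un-pruned tree R^2 on training data: {full_tree.score(X, volume):.2f}")
print(f"pruned tree    R^2 on training data: {pruned_tree.score(X, volume):.2f}")
print(f"leaves: {full_tree.get_n_leaves()} (un-pruned) vs {pruned_tree.get_n_leaves()} (pruned)")
```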
War, space, and the evolution of Old World complex societies.
Turchin, Peter; Currie, Thomas E; Turner, Edward A L; Gavrilets, Sergey
2013-10-08
How did human societies evolve from small groups, integrated by face-to-face cooperation, to huge anonymous societies of today, typically organized as states? Why is there so much variation in the ability of different human populations to construct viable states? Existing theories are usually formulated as verbal models and, as a result, do not yield sharply defined, quantitative predictions that could be unambiguously tested with data. Here we develop a cultural evolutionary model that predicts where and when the largest-scale complex societies arose in human history. The central premise of the model, which we test, is that costly institutions that enabled large human groups to function without splitting up evolved as a result of intense competition between societies, primarily warfare. Warfare intensity, in turn, depended on the spread of historically attested military technologies (e.g., chariots and cavalry) and on geographic factors (e.g., rugged landscape). The model was simulated within a realistic landscape of the Afroeurasian landmass and its predictions were tested against a large dataset documenting the spatiotemporal distribution of historical large-scale societies in Afroeurasia between 1,500 BCE and 1,500 CE. The model-predicted pattern of spread of large-scale societies was very similar to the observed one. Overall, the model explained 65% of variance in the data. An alternative model, omitting the effect of diffusing military technologies, explained only 16% of variance. Our results support theories that emphasize the role of institutions in state-building and suggest a possible explanation why a long history of statehood is positively correlated with political stability, institutional quality, and income per capita.
Positive Impacts of Modeling Instruction on Self-Efficacy
NASA Astrophysics Data System (ADS)
Sawtelle, Vashti; Brewe, Eric; Kramer, Laird H.
2010-10-01
Analysis of the impact of Modeling Instruction (MI) on the sources of self-efficacy for students in Introductory Physics 1 will be presented. We measured self-efficacy through a quantitative diagnostic (SOSESC) developed by Fencl and Scheel [1] to investigate the impact of instruction on the sources of self-efficacy in all introductory physics classes. We collected both pre-semester and post-semester data and evaluated the effect of the classroom by analyzing the shift (Post-Pre). At Florida International University, a Hispanic-serving institution, we find that traditional lecture classrooms negatively impact the self-efficacy of all students, while the MI courses had no overall impact on students. Further, when disaggregating the data by gender and sources of self-efficacy, we find that Modeling Instruction positively impacted the Verbal Persuasion source of self-efficacy for women. This positive impact helps to explain high rates of retention for women in the MI classes.
Computational modeling of three-dimensional ECM-rigidity sensing to guide directed cell migration.
Kim, Min-Cheol; Silberberg, Yaron R; Abeyaratne, Rohan; Kamm, Roger D; Asada, H Harry
2018-01-16
Filopodia have a key role in sensing both chemical and mechanical cues in the surrounding extracellular matrix (ECM). However, a quantitative understanding of filopodial mechanosensing of local ECM stiffness, which results from dynamic interactions between filopodia and the surrounding 3D ECM fibers, is still missing. Here we present a method for characterizing the stiffness of ECM that is sensed by filopodia, based on the theory of elasticity and discrete ECM fibers. We have applied this method to a filopodial mechanosensing model for predicting directed cell migration toward stiffer ECM. This model provides us with a distribution of force and displacement, as well as their rates of change, near the tip of a filopodium when it is bound to the surrounding ECM fibers. Aggregating these effects in each local region of 3D ECM, we express the local ECM stiffness sensed by the cell and explain polarity in the cellular durotaxis mechanism.
Synchrony and motor mimicking in chimpanzee observational learning
Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew
2014-01-01
Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function. PMID:24923651
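For readers unfamiliar with the statistic, a cross-correlation peak can be compared against a permutation null roughly as in the sketch below; the movement traces are synthetic and the test is a generic illustration, not the authors' frame-by-frame analysis.

```python
import numpy as np

def peak_crosscorr(x, y):
    """Peak of the normalized cross-correlation between two movement traces."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full")) / len(x)

rng = np.random.default_rng(2)
model_trace = np.sin(np.linspace(0.0, 20.0, 200)) + rng.normal(0.0, 0.3, 200)
observer_trace = np.roll(model_trace, 5) + rng.normal(0.0, 0.3, 200)  # observer lags slightly

observed = peak_crosscorr(model_trace, observer_trace)

# Permutation null: shuffling one trace estimates the chance-level correlation.
null = [peak_crosscorr(model_trace, rng.permutation(observer_trace)) for _ in range(1000)]
p_value = float(np.mean(np.array(null) >= observed))
print(f"peak cross-correlation {observed:.2f}, permutation p = {p_value:.3f}")
```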
Refractive-index-matched hydrogel materials for measuring flow-structure interactions
NASA Astrophysics Data System (ADS)
Byron, Margaret L.; Variano, Evan A.
2013-02-01
In imaging-based studies of flow around solid objects, it is useful to have materials that are refractive-index-matched to the surrounding fluid. However, materials currently in use are usually rigid and matched to liquids that are either expensive or highly viscous. This does not allow for measurements at high Reynolds number, nor accurate modeling of flexible structures. This work explores the use of two hydrogels (agarose and polyacrylamide) as refractive-index-matched models in water. These hydrogels are inexpensive, can be cast into desired shapes, and have flexibility that can be tuned to match biological materials. The use of water as the fluid phase allows this method to be implemented immediately in many experimental facilities and permits investigation of high-Reynolds-number phenomena. We explain fabrication methods and present a summary of the physical and optical properties of both gels, and then show measurements demonstrating the use of hydrogel models in quantitative imaging.
A Migration Well Model for the Binding of Ligands to Heme Proteins.
NASA Astrophysics Data System (ADS)
Beece, Daniel Kenneth
The binding of carbon monoxide and dioxygen to heme proteins can be viewed as occurring in distinct stages: diffusion in the solvent, migration through the matrix, and occupation of the pocket before the final binding step. A model is presented which can explain the dominant kinetic behavior of several different heme protein-ligand systems. The model assumes that a ligand molecule in the solvent sequentially encounters discrete energy barriers on the way to the binding site. The rate to surmount each barrier is distributed, except for the pseudo-first-order rate corresponding to the step into the protein from the solvent. The migration through the matrix is equivalent to a small number of distinct jumps. Quantitative analysis of the data permits estimates of the barrier heights, preexponential factors and solvent coupling factors for each rate. A migration coefficient and a matrix occupation factor are defined.
Advanced diffusion MRI and biomarkers in the central nervous system: a new approach.
Martín Noguerol, T; Martínez Barbero, J P
The introduction of diffusion-weighted sequences has revolutionized the detection and characterization of central nervous system (CNS) disease. Nevertheless, the assessment of diffusion studies of the CNS is often limited to qualitative estimation. Moreover, the pathophysiological complexity of the different entities that affect the CNS cannot always be correctly explained through classical models. The development of new models for the analysis of diffusion sequences provides numerous parameters that enable a quantitative approach to diagnosis and prognosis as well as to monitoring the response to treatment; these parameters can be considered potential biomarkers of health and disease. In this update, we review the physical bases underlying diffusion studies and diffusion tensor imaging, advanced models for their analysis (intravoxel incoherent motion and kurtosis), and the biological significance of the derived parameters.
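For reference, the intravoxel incoherent motion (IVIM) signal model mentioned above is commonly written as a bi-exponential decay; the sketch below evaluates it for hypothetical tissue parameters (the b-values and parameter values are illustrative, not drawn from the article).

```python
import numpy as np

def ivim_signal(b, f, D_star, D):
    """Bi-exponential IVIM model: perfusion fraction f with pseudo-diffusion D*."""
    return f * np.exp(-b * D_star) + (1.0 - f) * np.exp(-b * D)

# Hypothetical tissue parameters (b in s/mm^2, diffusivities in mm^2/s).
b_values = np.array([0, 50, 200, 500, 800, 1000])
signal = ivim_signal(b_values, f=0.10, D_star=10e-3, D=1.0e-3)
print(np.round(signal, 3))   # normalized signal decay across b-values
```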
Martinez-Fiestas, Myriam; Rodríguez-Garzón, Ignacio; Delgado-Padial, Antonio; Lucas-Ruiz, Valeriano
2017-09-01
This article presents a cross-cultural study on perceived risk in the construction industry. Worker samples from three different countries were studied: Spain, Peru and Nicaragua. The main goal was to explain how construction workers perceive their occupational hazards and to analyze how this is related to their national culture. The model used to measure perceived risk was the psychometric paradigm. The results show three very similar profiles, indicating that risk perception is independent of nationality. A cultural analysis was conducted using the Hofstede model. The results of this analysis and the relation to perceived risk showed that risk perception in construction is independent of national culture. Finally, a multiple linear regression analysis was conducted to determine which qualitative attributes could predict the global quantitative size of risk perception. All of the findings have important implications for the management of safety in the workplace.
Modelling the elements of country vulnerability to earthquake disasters.
Asef, M R
2008-09-01
Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of a given characteristic to have different impacts on the affected region. This research focuses on appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.
A study of stiffness, residual strength and fatigue life relationships for composite laminates
NASA Technical Reports Server (NTRS)
Ryder, J. T.; Crossman, F. W.
1983-01-01
A qualitative and quantitative exploration of the relationship between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading is presented. Clarification of the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses, to develop a simple procedure to anticipate strength, fatigue life, and stiffness changes, and to provide a basis for studying the more complex cases of compression, notches, and spectrum fatigue loading. Mathematical models, based on laminate analysis, free-body modeling, or strain energy release rates, are developed from analysis of the damage states. Enough understanding of the tension-loaded case is developed to support a proposed simple procedure for calculating strain to failure, stiffness, strength, data scatter, and the shape of the stress-life curve for unnotched laminates subjected to tension load.
Measuring the topology of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1988-01-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
Stochastic Simulation of Actin Dynamics Reveals the Role of Annealing and Fragmentation
Fass, Joseph; Pak, Chi; Bamburg, James; Mogilner, Alex
2008-01-01
Recent observations of F-actin dynamics call for theoretical models to interpret and understand the quantitative data. A number of existing models rely on simplifications and do not take into account F-actin fragmentation and annealing. We use Gillespie’s algorithm for stochastic simulations of the F-actin dynamics including fragmentation and annealing. The simulations vividly illustrate that fragmentation and annealing have little influence on the shape of the polymerization curve and on nucleotide profiles within filaments but drastically affect the F-actin length distribution, making it exponential. We find that recent surprising measurements of high length diffusivity at the critical concentration cannot be explained by fragmentation and annealing events unless both fragmentation rates and frequency of undetected fragmentation and annealing events are greater than previously thought. The simulations compare well with experimentally measured actin polymerization data and lend additional support to a number of existing theoretical models. PMID:18279896
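A minimal Gillespie-style sketch of fragmentation and annealing alone (no polymerization or nucleotide states), with hypothetical rate constants, is shown below; it illustrates how these two reactions drive the filament length distribution toward an exponential form, but it is not the authors' full simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
k_frag = 1.0e-3   # fragmentation rate per internal bond (hypothetical)
k_ann = 1.0e-4    # annealing rate per filament pair (hypothetical)

filaments = [50] * 100          # start with 100 filaments of 50 subunits each
t, t_end = 0.0, 2000.0

while t < t_end:
    bonds = sum(L - 1 for L in filaments)
    pairs = len(filaments) * (len(filaments) - 1) / 2.0
    a_frag, a_ann = k_frag * bonds, k_ann * pairs
    a_total = a_frag + a_ann
    t += rng.exponential(1.0 / a_total)            # Gillespie waiting time

    if rng.random() < a_frag / a_total:
        # Fragmentation: pick a filament weighted by its bond count and cut it.
        weights = np.array([L - 1 for L in filaments], dtype=float)
        i = rng.choice(len(filaments), p=weights / weights.sum())
        L = filaments.pop(i)
        cut = int(rng.integers(1, L))              # cut position 1 .. L-1
        filaments += [cut, L - cut]
    else:
        # Annealing: join two randomly chosen filaments end to end.
        i, j = rng.choice(len(filaments), size=2, replace=False)
        joined = filaments[i] + filaments[j]
        filaments = [L for k, L in enumerate(filaments) if k not in (i, j)] + [joined]

lengths = np.array(filaments)
print(f"{len(lengths)} filaments, mean length {lengths.mean():.1f} (distribution ~ exponential)")
```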
NASA Astrophysics Data System (ADS)
Toner, John; Tu, Yu-Hai
2002-05-01
We have developed a new continuum dynamical model for the collective motion of large "flocks" of biological organisms (e.g., flocks of birds, schools of fish, herds of wildebeest, hordes of bacteria, slime molds, etc.). This model does for flocks what the Navier-Stokes equation does for fluids. The model predicts that, unlike simple fluids, flocks show huge fluctuation effects in spatial dimensions d < 4 that radically change their behavior. In d=2, it is only these effects that make it possible for the flock to move coherently at all. This explains why a million wildebeest can march together across the Serengeti plain, despite the fact that a million physicists gathered on the same plane could NOT all POINT in the same direction. Detailed quantitative predictions of this theory agree beautifully with computer simulations of flock motion.
NASA Astrophysics Data System (ADS)
Li, Peizhen; Tian, Yueli; Zhai, Honglin; Deng, Fangfang; Xie, Meihong; Zhang, Xiaoyun
2013-11-01
Non-purine derivatives have been shown to be promising novel drug candidates as xanthine oxidase inhibitors. Based on three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, including comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), two 3D-QSAR models for a series of non-purine xanthine oxidase (XO) inhibitors were established, and their reliability was supported by statistical parameters. Combining 3D-QSAR modeling with the results of molecular docking between the non-purine inhibitors and XO, the main factors that influence inhibitor activity were investigated, and the results obtained could explain known experimental facts. Furthermore, several new potential inhibitors with higher predicted activity were designed based on these analyses and supported by molecular docking simulations. This study provides useful information for the development of non-purine xanthine oxidase inhibitors with novel structures.
NASA Astrophysics Data System (ADS)
Liu, L.; Hu, J.; Zhou, Q.
2016-12-01
The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer sufficient. Recent efforts are focusing on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power from different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, the tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we found are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding. Furthermore, important insights into the mantle density and viscosity structures also emerge from these models.
Bones, D L; Gerding, M; Höffner, J; Martín, Juan Carlos Gómez; Plane, J M C
2016-12-28
The dissociative recombination of CaO+ ions with electrons has been studied in a flowing afterglow reactor. CaO+ was generated by the pulsed laser ablation of a Ca target, followed by entrainment in an Ar+ ion/electron plasma. A kinetic model describing the gas-phase chemistry and diffusion to the reactor walls was fitted to the experimental data, yielding a rate coefficient of (3.0 ± 1.0) × 10⁻⁷ cm³ molecule⁻¹ s⁻¹ at 295 K. This result has two atmospheric implications. First, the surprising observation that the Ca+/Fe+ ratio is ~8 times larger than Ca/Fe between 90 and 100 km in the atmosphere can now be explained quantitatively by the known ion-molecule chemistry of these two metals. Second, the rate of neutralization of Ca+ ions in a descending sporadic E layer is fast enough to explain the often explosive growth of sporadic neutral Ca layers.
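The reported rate coefficient can be placed in a toy pseudo-first-order loss calculation as below; the electron density, wall-loss rate, initial CaO+ density, and flow time are assumptions chosen only to show the expected exponential decay along the flow tube.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_dr = 3.0e-7     # dissociative recombination rate coefficient (cm^3 s^-1), from the fit
n_e = 1.0e9       # electron number density (cm^-3), assumed constant and in excess
k_wall = 50.0     # first-order diffusive loss to the reactor walls (s^-1), assumed

def rhs(t, y):
    # d[CaO+]/dt = -(k_dr * n_e + k_wall) * [CaO+]
    return [-(k_dr * n_e + k_wall) * y[0]]

sol = solve_ivp(rhs, (0.0, 0.02), [1.0e7], dense_output=True)
t = np.linspace(0.0, 0.02, 5)
print(np.round(sol.sol(t)[0]))    # CaO+ density (cm^-3) decaying along the flow tube
```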
Secular perihelion advances of the inner planets and asteroid Icarus
NASA Astrophysics Data System (ADS)
Wilhelm, Klaus; Dwivedi, Bhola N.
2014-08-01
A small effect expected from a recently proposed gravitational impact model (Wilhelm et al., 2013) is used to explain the remaining secular perihelion advance rates of the planets Mercury, Venus, Earth, Mars, and the asteroid (1566) Icarus, after taking into account the disturbances related to Newton's Theory of Gravity. Such a rate was discovered by Le Verrier (1859) for Mercury and calculated by Einstein (1915, 1916) in the framework of his General Theory of Relativity (GTR). Accurate observations are now available for inner Solar System objects with different orbital parameters. This is important because it allowed us to demonstrate that the quantitative amount of the deviation from a 1/r potential is, under certain conditions, only dependent on the specific mass distribution of the Sun and not on the characteristics of the orbiting objects and their orbits. A displacement of the effective gravitational centre from the geometric centre of the Sun by about 4400 m towards each object is consistent with the observations and explains the secular perihelion advance rates.
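For comparison, the standard GTR rate referred to above follows from the well-known per-orbit advance Δφ = 6πGM/(c²a(1−e²)); the short calculation below reproduces the familiar ~43 arcsec per century for Mercury (constants rounded, no impact-model correction included).

```python
import math

# Standard general-relativistic perihelion advance per orbit:
#   delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # solar mass, kg
c = 2.998e8          # speed of light, m/s

a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's eccentricity
period_days = 87.969

dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"GR perihelion advance of Mercury: {arcsec:.1f} arcsec/century")  # ~43
```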
Spin-wave resonances and surface spin pinning in Ga1-xMnxAs thin films
NASA Astrophysics Data System (ADS)
Bihler, C.; Schoch, W.; Limmer, W.; Goennenwein, S. T. B.; Brandt, M. S.
2009-01-01
We investigate the dependence of the spin-wave resonance (SWR) spectra of Ga0.95Mn0.05As thin films on the sample treatment. We find that for the external magnetic field perpendicular to the film plane, the SWR spectrum of the as-grown thin films and the changes upon etching and short-term hydrogenation can be quantitatively explained via a linear gradient in the uniaxial magnetic anisotropy field in growth direction. The model also qualitatively explains the SWR spectra observed for the in-plane easy-axis orientation of the external magnetic field. Furthermore, we observe a change in the effective surface spin pinning of the partially hydrogenated sample, which results from the tail in the hydrogen-diffusion profile. The latter leads to a rapidly changing hole concentration/magnetic anisotropy profile acting as a barrier for the spin-wave excitations. Therefore, short-term hydrogenation constitutes a simple method to efficiently manipulate the surface spin pinning.
The role of spatial dynamics in modulating metabolic interactions in biofilm development
NASA Astrophysics Data System (ADS)
Bocci, Federico; Lu, Mingyang; Suzuki, Yoko; Onuchic, Jose
Cell phenotypic expression is substantially affected by the presence of environmental stresses and cell-cell communication mechanisms. We study the metabolic interactions of the glutamate synthesis pathway to explain the oscillation of growth rate observed in a B. subtilis colony. Previous modelling schemes failed to fully reproduce quantitative experimental observations because they explicitly addressed neither the diffusion of small metabolites nor the spatial distribution of phenotypically distinct bacteria inside the colony. We introduce a continuous spatio-temporal framework to explain how biofilm development dynamics is influenced by the metabolic interplay between two bacterial phenotypes composing the interior and the peripheral layer of the biofilm. Growth oscillations support the preservation of a high level of nutrients in the interior, through diffusion, while still allowing colony expansion at the periphery. Our findings point out that perturbations of environmental conditions can result in the interruption of the interplay between cell populations and suggest alternative approaches to biofilm control strategies.
Chemical weathering as a mechanism for the climatic control of bedrock river incision
NASA Astrophysics Data System (ADS)
Murphy, Brendan P.; Johnson, Joel P. L.; Gasparini, Nicole M.; Sklar, Leonard S.
2016-04-01
Feedbacks between climate, erosion and tectonics influence the rates of chemical weathering reactions, which can consume atmospheric CO2 and modulate global climate. However, quantitative predictions for the coupling of these feedbacks are limited because the specific mechanisms by which climate controls erosion are poorly understood. Here we show that climate-dependent chemical weathering controls the erodibility of bedrock-floored rivers across a rainfall gradient on the Big Island of Hawai‘i. Field data demonstrate that the physical strength of bedrock in streambeds varies with the degree of chemical weathering, which increases systematically with local rainfall rate. We find that incorporating the quantified relationships between local rainfall and erodibility into a commonly used river incision model is necessary to predict the rates and patterns of downcutting of these rivers. In contrast to using only precipitation-dependent river discharge to explain the climatic control of bedrock river incision, the mechanism of chemical weathering can explain strong coupling between local climate and river incision.
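A common way to encode the idea of climate-dependent erodibility is through the stream-power incision law E = K·A^m·S^n with K varying with rainfall; the sketch below uses an assumed linear K(rainfall), which is only illustrative and not the calibrated relationship from the Hawai‘i field data.

```python
import numpy as np

def incision_rate(K, A, S, m=0.5, n=1.0):
    """Stream-power incision law E = K * A**m * S**n (standard form)."""
    return K * A**m * S**n

# Hypothetical climate-dependent erodibility: heavier rainfall -> more weathered,
# weaker bedrock, modeled here as a simple linear increase of K with rainfall.
rainfall = np.array([1.0, 3.0, 6.0])           # mean annual rainfall (m/yr)
K = 1.0e-6 * (1.0 + 0.5 * rainfall)            # erodibility (assumed functional form)

A = 5.0e6                                      # drainage area (m^2)
S = 0.05                                       # channel slope (dimensionless)
print(incision_rate(K, A, S))                  # incision rate rises with rainfall
```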
Redshift and blueshift of GaNAs/GaAs multiple quantum wells induced by rapid thermal annealing
NASA Astrophysics Data System (ADS)
Sun, Yijun; Cheng, Zhiyuan; Zhou, Qiang; Sun, Ying; Sun, Jiabao; Liu, Yanhua; Wang, Meifang; Cao, Zhen; Ye, Zhi; Xu, Mingsheng; Ding, Yong; Chen, Peng; Heuken, Michael; Egawa, Takashi
2018-02-01
The effects of rapid thermal annealing (RTA) on the optical properties of GaNAs/GaAs multiple quantum wells (MQWs) grown by chemical beam epitaxy (CBE) are studied by photoluminescence (PL) at 77 K. The results show that the optical quality of the MQWs improves significantly after RTA. With increasing RTA temperature, PL peak energy of the MQWs redshifts below 1023 K, while it blueshifts above 1023 K. Two competitive processes which occur simultaneously during RTA result in redshift at low temperature and blueshift at high temperature. It is also found that PL peak energy shift can be explained neither by nitrogen diffusion out of quantum wells nor by nitrogen reorganization inside quantum wells. PL peak energy shift can be quantitatively explained by a modified recombination coupling model in which redshift nonradiative recombination and blueshift nonradiative recombination coexist. The results obtained have significant implications for the growth and RTA of GaNAs material for high performance optoelectronic device applications.
Guo, Hailin; Ding, Wanwen; Chen, Jingbo; Chen, Xuan; Zheng, Yiqi; Wang, Zhiyong; Liu, Jianxiu
2014-01-01
Zoysiagrass (Zoysia Willd.) is an important warm season turfgrass that is grown in many parts of the world. Salt tolerance is an important trait in zoysiagrass breeding programs. In this study, a genetic linkage map was constructed using sequence-related amplified polymorphism markers and random amplified polymorphic DNA markers based on an F1 population comprising 120 progeny derived from a cross between Zoysia japonica Z105 (salt-tolerant accession) and Z061 (salt-sensitive accession). The linkage map covered 1211 cM with an average marker distance of 5.0 cM and contained 24 linkage groups with 242 marker loci (217 sequence-related amplified polymorphism markers and 25 random amplified polymorphic DNA markers). Quantitative trait loci affecting the salt tolerance of zoysiagrass were identified using the constructed genetic linkage map. Two significant quantitative trait loci (qLF-1 and qLF-2) for leaf firing percentage were detected: qLF-1 at 36.3 cM on linkage group LG4 with a logarithm of odds value of 3.27, which explained 13.1% of the total variation of leaf firing, and qLF-2 at 42.3 cM on LG5 with a logarithm of odds value of 2.88, which explained 29.7% of the total variation of leaf firing. A significant quantitative trait locus (qSCW-1) for reduced percentage of dry shoot clipping weight was detected at 44.1 cM on LG5 with a logarithm of odds value of 4.0, which explained 65.6% of the total variation. This study provides important information for further functional analysis of salt-tolerance genes in zoysiagrass. Molecular markers linked with quantitative trait loci for salt tolerance will be useful in zoysiagrass breeding programs using marker-assisted selection.
Effective model with strong Kitaev interactions for α-RuCl3
NASA Astrophysics Data System (ADS)
Suzuki, Takafumi; Suga, Sei-ichiro
2018-04-01
We use an exact numerical diagonalization method to calculate the dynamical spin structure factors of three ab initio models and one ab initio guided model for a honeycomb-lattice magnet α-RuCl3. We also use thermal pure quantum states to calculate the temperature dependence of the heat capacity, the nearest-neighbor spin-spin correlation function, and the static spin structure factor. From the results obtained from these four effective models, we find that, even when the magnetic order is stabilized at low temperature, the intensity at the Γ point in the dynamical spin structure factors increases with increasing nearest-neighbor spin correlation. In addition, we find that the four models fail to explain heat-capacity measurements whereas two of the four models succeed in explaining inelastic-neutron-scattering experiments. In the four models, when temperature decreases, the heat capacity shows a prominent peak at a high temperature where the nearest-neighbor spin-spin correlation function increases. However, the peak temperature in heat capacity is too low in comparison with that observed experimentally. To address these discrepancies, we propose an effective model that includes strong ferromagnetic Kitaev coupling, and we show that this model quantitatively reproduces both inelastic-neutron-scattering experiments and heat-capacity measurements. To further examine the adequacy of the proposed model, we calculate the field dependence of the polarized terahertz spectra, which reproduces the experimental results: the spin-gapped excitation survives up to an onset field where the magnetic order disappears and the response in the high-field region is almost linear. Based on these numerical results, we argue that the low-energy magnetic excitation in α-RuCl3 is mainly characterized by interactions such as off-diagonal interactions and weak Heisenberg interactions between nearest-neighbor pairs, rather than by the strong Kitaev interactions.
A permeation theory for single-file ion channels: one- and two-step models.
Nelson, Peter Hugo
2011-04-28
How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.
Development of a dynamic computational model of social cognitive theory.
Riley, William T; Martin, Cesar A; Rivera, Daniel E; Hekler, Eric B; Adams, Marc A; Buman, Matthew P; Pavel, Misha; King, Abby C
2016-12-01
Social cognitive theory (SCT) is among the most influential theories of behavior change and has been used as the conceptual basis of health behavior interventions for smoking cessation, weight management, and other health behaviors. SCT and other behavior theories were developed primarily to explain differences between individuals, but explanatory theories of within-person behavioral variability are increasingly needed as new technologies allow for intensive longitudinal measures and interventions adapted from these inputs. These within-person explanatory theoretical applications can be modeled as dynamical systems. SCT constructs, such as reciprocal determinism, are inherently dynamical in nature, but SCT has not been modeled as a dynamical system. This paper describes the development of a dynamical system model of SCT using fluid analogies and control systems principles drawn from engineering. Simulations of this model were performed to assess if the model performed as predicted based on theory and empirical studies of SCT. This initial model generates precise and testable quantitative predictions for future intensive longitudinal research. Dynamic modeling approaches provide a rigorous method for advancing health behavior theory development and refinement and for guiding the development of more potent and efficient interventions.
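As a minimal sketch of the fluid-analogy idea described above (a single first-order "inventory" driven by an input through a gain and a time constant), the toy model below is not the published SCT system; all parameter values and the pulse-shaped input are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

gain, tau = 0.8, 5.0            # hypothetical gain and time constant

def intervention(t):
    # A pulse of intervention dose between day 10 and day 30.
    return 1.0 if 10.0 <= t <= 30.0 else 0.0

def rhs(t, y):
    # Fluid analogy: the inventory fills toward gain*input and drains with time constant tau.
    inventory = y[0]
    return [(gain * intervention(t) - inventory) / tau]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0], max_step=0.5)
print(round(float(sol.y[0].max()), 2))   # peak inventory level during the pulse
```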
Molecular Modeling of Lipid Membrane Curvature Induction by a Peptide: More than Simply Shape
Sodt, Alexander J.; Pastor, Richard W.
2014-01-01
Molecular dynamics simulations of an amphipathic helix embedded in a lipid bilayer indicate that it will induce substantial positive curvature (e.g., a tube of diameter 20 nm at 16% surface coverage). The induction is twice that of a continuum model prediction that only considers the shape of the inclusion. The discrepancy is explained in terms of the additional presence of specific interactions described only by the molecular model. The conclusion that molecular shape alone is insufficient to quantitatively model curvature is supported by contrasting molecular and continuum models of lipids with large and small headgroups (choline and ethanolamine, respectively), and of the removal of a lipid tail (modeling a lyso-lipid). For the molecular model, curvature propensity is analyzed by computing the derivative of the free energy with respect to bending. The continuum model predicts that the inclusion will soften the bilayer near the headgroup region, an effect that may weaken curvature induction. The all-atom predictions are consistent with experimental observations of the degree of tubulation by amphipathic helices and variation of the free energy of binding to liposomes. PMID:24806928
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
Du, Lihong; White, Robert L
2009-02-01
A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components.
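The competition-for-charge idea can be sketched with a simple share-of-charge expression as below; the functional form, coefficients, and background composition are illustrative assumptions, not the modified model's actual equations.

```python
def esi_response(c_analyte, k_analyte, background, Q=1.0e6):
    """
    Competition-for-charge sketch: the analyte signal is proportional to its share
    of the available excess charge Q relative to background electrolytes.
    """
    total = k_analyte * c_analyte + sum(k * c for k, c in background)
    return Q * k_analyte * c_analyte / total

# Hypothetical partition coefficients and concentrations for two background species.
background = [(0.5, 1.0e-3), (3.0, 1.0e-5)]
for c in (1.0e-7, 1.0e-6, 1.0e-5):
    print(f"analyte {c:.0e} M -> response {esi_response(c, 2.0, background):.1f}")
```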
A network model of successive partitioning-limited solute diffusion through the stratum corneum.
Schumm, Phillip; Scoglio, Caterina M; van der Merwe, Deon
2010-02-07
As the most exposed point of contact with the external environment, the skin is an important barrier to many chemical exposures, including medications, potentially toxic chemicals and cosmetics. Traditional dermal absorption models treat the stratum corneum lipids as a homogenous medium through which solutes diffuse according to Fick's first law of diffusion. This approach does not explain non-linear absorption and irregular distribution patterns within the stratum corneum lipids as observed in experimental data. A network model, based on successive partitioning-limited solute diffusion through the stratum corneum, where the lipid structure is represented by a large, sparse, and regular network where nodes have variable characteristics, offers an alternative, efficient, and flexible approach to dermal absorption modeling that simulates non-linear absorption data patterns. Four model versions are presented: two linear models, which have unlimited node capacities, and two non-linear models, which have limited node capacities. The non-linear model outputs produce absorption to dose relationships that can be best characterized quantitatively by using power equations, similar to the equations used to describe non-linear experimental data.
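The power-equation summary mentioned above can be reproduced with an ordinary non-linear fit; the dose/absorption pairs below are hypothetical placeholders for the network-model output, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(dose, a, b):
    """Absorption ~ a * dose**b, the form used to summarize the non-linear output."""
    return a * dose**b

# Hypothetical absorption-vs-dose pairs standing in for the network-model output.
dose = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
absorbed = np.array([0.8, 1.3, 2.4, 3.6, 5.4, 9.1])

(a, b), _ = curve_fit(power_law, dose, absorbed, p0=(1.0, 0.7))
print(f"fitted power law: absorbed = {a:.2f} * dose^{b:.2f}")
```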
NASA Astrophysics Data System (ADS)
Katsura, Tomoo; Baba, Kiyoshi; Yoshino, Takashi; Kogiso, Tetsu
2017-10-01
We review the currently available results of laboratory experiments, geochemistry and MT observations and attempt to explain the conductivity structures in the oceanic asthenosphere by constructing mineral-physics models for the depleted mid-oceanic ridge basalt (MORB) mantle (DMM) and volatile-enriched plume mantle (EM) along the normal and plume geotherms. The hopping and ionic conductivities of olivine have a large temperature dependence, whereas the proton conductivity has a smaller dependence. The contribution of proton conduction is small in DMM. Melt conductivity is enhanced by the H2O and CO2 components. The effects of incipient melts with high volatile components on bulk conductivity are significant. The low solidus temperatures of the hydrous carbonated peridotite produce incipient melts in the asthenosphere, which strongly increase conductivity around 100 km depth under older plates. DMM has a conductivity of 10^(-1.2) to 10^(-1.5) S/m at 100-300 km depth, regardless of the plate age. Plume mantle should have much higher conductivity than normal mantle, due to its high volatile content and high temperatures. The MT observations of the oceanic asthenosphere show a relatively uniform conductivity at 200-300 km depth, consistent with the mineral-physics model. On the other hand, the MT observations show large lateral variations in shallow parts of the asthenosphere despite similar tectonic settings and close locations. Such variations are difficult to explain with the mineral-physics model. High conductivity layers (HCL), which are associated with anisotropy in the direction of the plate motion, have only been observed in the asthenosphere under infant or young plates, but they are not ubiquitous in the oceanic asthenosphere. Although the general features of HCL imply their high-temperature melting origin, the mineral-physics model cannot explain them quantitatively. Much lower conductivity under hotspots, compared with the model plume-mantle conductivity, suggests the extraction of volatiles from the plume mantle by the ocean island basalt (OIB) magmatism.
Zunhammer, Matthias; Eberle, Hanna; Eichhammer, Peter; Busch, Volker
2013-01-01
The etiology of somatization is incompletely understood, but could be elucidated by models of psychosocial stress. Academic exam stress has effectively been applied as a naturalistic stress model, however its effect on somatization symptoms according to ICD-10 and DSM-IV criteria has not been reported so far. Baseline associations between somatization and personality traits, such as alexithymia, have been studied exhaustively. Nevertheless, it is largely unknown if personality traits have an explanatory value for stress induced somatization. This longitudinal, quasi-experimental study assessed the effects of university exams on somatization - and the reversal of effects after an exam-free period. Repeated-observations were obtained within 150 students, measuring symptom intensity before, during and after an exam period, according to the Screening for Somatoform Symptoms 7-day (SOMS-7d). Additionally, self-reports on health status were used to differentiate between medically explained and medically unexplained symptoms. Alexithymia, neuroticism, trait-anxiety and baseline depression were surveyed using the Toronto-Alexithymia Scale (TAS-20), the Big-Five Personality Interview (NEO-FFI), the State Trait Anxiety Inventory (STAI) and Beck's Depression Inventory (BDI-II). These traits were competitively tested for their ability to explain somatization increases under exam stress. Somatization significantly increased across a wide range of symptoms under exam stress, while health reports pointed towards a reduction in acute infections and injuries. Neuroticism, alexithymia, trait anxiety and depression explained variance in somatization at baseline, but only neuroticism was associated with symptom increases under exam stress. Exam stress is an effective psychosocial stress model inducing somatization. A comprehensive quantitative description of bodily symptoms under exam stress is supplied. The results do not support the stress-alexithymia hypothesis, but favor neuroticism as a personality trait of importance for somatization.
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
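A generic Beta-Bernoulli change-point sketch of "intermittent updating" is given below; it is not the authors' two-parameter model, and the evidence threshold and outcome sequence are assumptions chosen only to show how an estimate can step at irregular intervals.

```python
import math

def log_beta_binom(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of k successes in n Bernoulli trials, Beta(a, b) prior."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + k) + math.lgamma(b + n - k) - math.lgamma(a + b + n))

def stepwise_estimates(outcomes, log_evidence_threshold=3.0):
    """Step to a new estimate only when a change-point model is clearly favoured."""
    estimates, segment = [], []
    for x in outcomes:
        segment.append(x)
        n, k = len(segment), sum(segment)
        no_change = log_beta_binom(k, n)
        splits = [(log_beta_binom(sum(segment[:i]), i)
                   + log_beta_binom(sum(segment[i:]), n - i), i) for i in range(1, n)]
        if splits:
            best, i_best = max(splits)
            if best - no_change > log_evidence_threshold:
                segment = segment[i_best:]          # re-encode: keep data after the change
                n, k = len(segment), sum(segment)
        estimates.append((k + 1.0) / (n + 2.0))     # posterior-mean estimate of p
    return estimates

outcomes = [1] * 12 + [0] * 28                      # hidden p drops from high to low
print([round(p, 2) for p in stepwise_estimates(outcomes)[::8]])
```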
NASA Astrophysics Data System (ADS)
Ergazaki, Marida; Alexaki, Aspa; Papadopoulou, Chrysa; Kalpakiori, Marieleni
2014-02-01
This paper aims at exploring (a) whether preschoolers recognize that offspring share physical traits with their parents due to birth and behavioural ones due to nurture, and (b) whether they seem ready to explain shared physical traits with a 'pre-biological' causal model that includes the contribution of both parents and a rudimentary notion of genes. This exploration is supposed to provide evidence for our next step, which is the development of an early years' learning environment about inheritance. Conducting individual, semi-structured interviews with 90 preschoolers (age 4.5-5.5) of four public kindergartens in Patras, we attempted to trace their reasoning about (a) whether and why offspring share physical and behavioural traits with parents and (b) which mechanism could better explain the shared physical traits. The probes were a modified six-case version of Solomon et al.'s (Child Dev 67:151-171, 1996) 'adoption task', as well as a three-case task based on Springer's (Child Dev 66:547-558, 1995) 'mechanism task' and on Solomon and Johnson's (Br J Dev Psychol 18(1):81-96, 2000) idea of genes as a 'conceptual placeholder'. The qualitative and quantitative analysis of the interviews showed overlapping reasoning about the origin of physical and behavioural family resemblance. Nevertheless, we did trace the 'birth-driven' argument for the attribution of the offspring's physical traits to the biological parents, as well as a preference for the 'pre-biological' model that introduces a rudimentary idea of genes in order to explain shared physical traits between parents and offspring. The findings of the study and the educational implications are thoroughly discussed.
Effects of land-use and climate on Holocene vegetation composition in northern Europe
NASA Astrophysics Data System (ADS)
Marquer, Laurent; Gaillard, Marie-José; Sugita, Shinya; Poska, Anneli; Trondman, Anna-Kari; Mazier, Florence; Nielsen, Anne Birgitte; Fyfe, Ralph; Jönsson, Anna Maria
2016-04-01
Prior to the advent of agriculture, broad-scale vegetation patterns in Europe were controlled primarily by climate. Early agriculture can be detected in palaeovegetation records, but the relative extent to which past regional vegetation was climatically or anthropogenically-forced is of current scientific interest. Using comparisons of transformed pollen data, climate-model data, dynamic vegetation model simulations and anthropogenic land-cover change data, this study aims to estimate the relative impacts of human activities and climate on the Holocene vegetation composition of northern Europe at a subcontinental scale. The REVEALS model was used for pollen-based quantitative reconstruction of vegetation (RV). Climate variables from ECHAM and the extent of human deforestation from KK10 were used as explanatory variables to evaluate their respective impacts on RV. Indices of vegetation-composition changes based on RV and climate-induced vegetation simulated by the LPJ-GUESS model (LPJG) were used to assess the relative importance of climate and anthropogenic impacts. The results show that climate is the major predictor of Holocene vegetation changes until 5000 years ago. The similarity in rate of change and turnover between RV and LPJG decreases after this time. Changes in RV explained by climate and KK10 vary for the last 2000 years; the similarity in rate of change, turnover, and evenness between RV and LPJG decreases to the present. The main conclusions provide important insights on Neolithic forest clearances that affected regional vegetation from 6700 years ago, although climate (temperature and precipitation) still was a major driver of vegetation change (explains 37% of the variation) at the subcontinental scale. Land use became more important around 5000-4000 years ago, while the influence of climate decreased (explains 28% of the variation). Land-use affects all indices of vegetation compositional change during the last 2000 years; the influence of climate on vegetation, although reduced, remains at 16% until modern time while land-use explains 7%, which underlines that North-European vegetation is still climatically sensitive and, therefore, responds strongly to ongoing climate change.
Towards in vivo focal cortical dysplasia phenotyping using quantitative MRI.
Adler, Sophie; Lorio, Sara; Jacques, Thomas S; Benova, Barbora; Gunny, Roxana; Cross, J Helen; Baldeweg, Torsten; Carmichael, David W
2017-01-01
Focal cortical dysplasias (FCDs) are a range of malformations of cortical development each with specific histopathological features. Conventional radiological assessment of standard structural MRI is useful for the localization of lesions but is unable to accurately predict the histopathological features. Quantitative MRI offers the possibility to probe tissue biophysical properties in vivo and may bridge the gap between radiological assessment and ex-vivo histology. This review will cover histological, genetic and radiological features of FCD following the ILAE classification and will explain how quantitative voxel- and surface-based techniques can characterise these features. We will provide an overview of the quantitative MRI measures available, their link with biophysical properties and finally the potential application of quantitative MRI to the problem of FCD subtyping. Future research linking quantitative MRI to FCD histological properties should improve clinical protocols, allow better characterisation of lesions in vivo and tailored surgical planning to the individual.
Searching for Chips of Kuiper Belt Objects in Meteorites
NASA Technical Reports Server (NTRS)
Zolensky, M. E.; Ohsumi, K.; Briani, G.; Gounelle, M.; Mikouchi, T.; Satake, W.; Kurihara, T.; Weisberg, M. K.; Le, L.
2009-01-01
The Nice model [1,2] describes a scenario whereby the Jovian planets experienced a violent reshuffling event at approximately 3.9 Ga: the giant planets moved, existing small-body reservoirs were depleted or eliminated, and new reservoirs were created in particular locations. The Nice model quantitatively explains the orbits of the Jovian planets and Neptune [1], the orbits of bodies in several different small-body reservoirs in the outer solar system (e.g., the Trojans of Jupiter [2], the Kuiper belt and scattered disk [3], and the irregular satellites of the giant planets [4]), and the late heavy bombardment of the terrestrial planets approximately 3.9 Ga [5]. This model is unique in plausibly explaining all of these phenomena. One issue with the Nice model is that it predicts that transported Kuiper Belt Objects (KBOs) (things looking like D-class asteroids) should predominate in the outer asteroid belt, but only about 10% of the objects in the outer main asteroid belt appear to be D-class objects [6]. However, based on collisional modeling, Bottke et al. [6] argue that more than 90% of the objects captured in the outer main belt could have been eliminated by impacts if they had been weakly indurated objects. These disrupted objects should have left behind pieces in the ancient regoliths of other, presumably stronger asteroids. Thus, a derived prediction of the Nice model is that ancient regolith samples (regolith-bearing meteorites) should contain fragments of collisionally destroyed Kuiper Belt Objects. In fact, KBO pieces might be expected to be present in most ancient regolith-bearing meteorites [7,8].
Mechanistic Understanding of Microbial Plugging for Improved Sweep Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven Bryant; Larry Britton
2008-09-30
Microbial plugging has been proposed as an effective low-cost method of permeability reduction. Yet there is a dearth of information on the fundamental processes of microbial growth in porous media, and there are no suitable data to model the process of microbial plugging as it relates to sweep efficiency. To optimize the field implementation, better mechanistic and volumetric understanding of biofilm growth within a porous medium is needed. In particular, the engineering design hinges upon a quantitative relationship between the amount of nutrient consumption, the amount of growth, and the degree of permeability reduction. In this project, experiments were conducted to obtain new data to elucidate this relationship. Experiments in heterogeneous (layered) beadpacks showed that microbes could grow preferentially in the high-permeability layer. Ultimately this caused flow to be equally divided between the high- and low-permeability layers, precisely the behavior needed for MEOR. Remarkably, classical models of microbial nutrient uptake in batch experiments do not explain the nutrient consumption by the same microbes in flow experiments. We propose a simple extension of classical kinetics to account for the self-limiting consumption of nutrient observed in our experiments, and we outline a modeling approach based on the architecture and behavior of biofilms. Such a model would account for the changing trend of nutrient consumption by bacteria with increasing biomass and the onset of biofilm formation. However, no existing model can explain the microbial preference for growth in high-permeability regions, nor is there any obvious extension of the model for this observation. An attractive conjecture is that quorum sensing is involved in the heterogeneous bead packs.
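For context, the "classical models of microbial nutrient uptake in batch experiments" mentioned above are usually Monod-type kinetics. The sketch below is only a generic batch Monod simulation with illustrative placeholder parameters (mu_max, Ks, Y), not the authors' extended self-limiting model.

```python
import numpy as np

def monod_batch(S0=1.0, X0=0.01, mu_max=0.5, Ks=0.2, Y=0.4, dt=0.01, t_end=48.0):
    """Classical Monod kinetics for batch growth: biomass X grows on substrate S.

    All parameter values are illustrative placeholders (arbitrary units)."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    S = np.empty(n); X = np.empty(n)
    S[0], X[0] = S0, X0
    for i in range(1, n):
        mu = mu_max * S[i - 1] / (Ks + S[i - 1])   # specific growth rate
        dX = mu * X[i - 1] * dt                    # biomass produced this step
        S[i] = max(S[i - 1] - dX / Y, 0.0)         # nutrient consumed per unit growth
        X[i] = X[i - 1] + dX
    return t, S, X

t, S, X = monod_batch()
print(round(float(S[-1]), 3), round(float(X[-1]), 3))  # nutrient exhausted, biomass saturates
```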
A semantic web framework to integrate cancer omics data with biological knowledge
2012-01-01
Background The RDF triple provides a simple linguistic means of describing limitless types of information. Triples can be flexibly combined into a unified data source we call a semantic model. Semantic models open new possibilities for the integration of variegated biological data. We use Semantic Web technology to explicate high throughput clinical data in the context of fundamental biological knowledge. We have extended Corvus, a data warehouse which provides a uniform interface to various forms of Omics data, by providing a SPARQL endpoint. With the querying and reasoning tools made possible by the Semantic Web, we were able to explore quantitative semantic models retrieved from Corvus in the light of systematic biological knowledge. Results For this paper, we merged semantic models containing genomic, transcriptomic and epigenomic data from melanoma samples with two semantic models of functional data - one containing Gene Ontology (GO) data, the other, regulatory networks constructed from transcription factor binding information. These two semantic models were created in an ad hoc manner but support a common interface for integration with the quantitative semantic models. Such combined semantic models allow us to pose significant translational medicine questions. Here, we study the interplay between a cell's molecular state and its response to anti-cancer therapy by exploring the resistance of cancer cells to Decitabine, a demethylating agent. Conclusions We were able to generate a testable hypothesis to explain how Decitabine fights cancer - namely, that it targets apoptosis-related gene promoters predominantly in Decitabine-sensitive cell lines, thus conveying its cytotoxic effect by activating the apoptosis pathway. Our research provides a framework whereby similar hypotheses can be developed easily. PMID:22373303
Zhang, Yuxuan; Chandran, K.S. Ravi; Jagannathan, M.; ...
2016-12-05
Li-Mg alloys are promising as positive electrodes (anodes) for Li-ion batteries due to their high Li storage capacity and the relatively low volume change during the lithiation/delithiation process. They also present a unique opportunity to image the Li distribution through the electrode thickness at various delithiation states. In this work, spatial distributions of Li in electrochemically delithiated Li-Mg alloy electrodes have been quantitatively determined using neutron tomography. Specifically, the Li concentration profiles along the thickness direction are determined. A rigorous analytical model to quantify the diffusion-controlled delithiation, accompanied by phase transition and boundary movement, has also been developed to explain the delithiation mechanism. The analytical modeling scheme successfully predicted the Li concentration profiles, which agreed well with the experimental data. It is demonstrated that during discharge Li is removed by diffusion through the solid-solution Li-Mg phases and that this proceeds with the β→α phase transition and the associated phase-boundary movement through the thickness of the electrode. This is also accompanied by electrode thinning due to the change in molar volume during delithiation. In conclusion, following the approaches developed here, one can develop a rigorous and quantitative understanding of electrochemical delithiation in electrodes of electrochemical cells, similar to that in the present Li-Mg electrodes.
Hu, Bo; Tu, Yuhai
2013-07-02
It is essential for bacteria to find optimal conditions for their growth and survival. The optimal levels of certain environmental factors (such as pH and temperature) often correspond to some intermediate points of the respective gradients. This requires the ability of bacteria to navigate from both directions toward the optimum location and is distinct from the conventional unidirectional chemotactic strategy. Remarkably, Escherichia coli cells can perform such a precision sensing task in pH taxis by using the same chemotaxis machinery, but with opposite pH responses from two different chemoreceptors (Tar and Tsr). To understand bacterial pH sensing, we developed an Ising-type model for a mixed cluster of opposing receptors based on the push-pull mechanism. Our model can quantitatively explain experimental observations in pH taxis for various mutants and wild-type cells. We show how the preferred pH level depends on the relative abundance of the competing sensors and how the sensory activity regulates the behavioral response. Our model allows us to make quantitative predictions on signal integration of pH and chemoattractant stimuli. Our study reveals two general conditions and a robust push-pull scheme for precision sensing, which should be applicable in other adaptive sensory systems with opposing gradient sensors. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
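As a rough illustration of the push-pull idea, a mixed Tar/Tsr cluster can be pictured as an MWC-like unit in which the two receptor species contribute pH-dependent free energies of opposite sign, so the zero of the total free energy sets a preferred pH that shifts with the Tar:Tsr ratio. The sketch below is not the authors' Ising-type model; the functional forms and all parameter values (a_tar, a_tsr, pH0_tar, pH0_tsr) are assumptions for illustration only.

```python
import numpy as np

def cluster_activity(pH, n_tar, n_tsr, a_tar=1.0, a_tsr=-1.0,
                     pH0_tar=6.8, pH0_tsr=7.2):
    """Toy MWC-style activity of a mixed Tar/Tsr chemoreceptor cluster.

    Tar and Tsr contribute pH-dependent free energies of opposite sign
    (push-pull); all numbers are illustrative assumptions, not fitted values."""
    F = n_tar * a_tar * (pH - pH0_tar) + n_tsr * a_tsr * (pH - pH0_tsr)
    return 1.0 / (1.0 + np.exp(F))   # probability that the cluster is kinase-active

# The pH at which the two contributions cancel (F = 0) acts as the preferred
# level and shifts with the Tar:Tsr ratio:
pH = np.linspace(5.0, 9.5, 451)
for n_tar, n_tsr in [(6, 12), (12, 6)]:
    a = cluster_activity(pH, n_tar, n_tsr)
    print(n_tar, n_tsr, round(float(pH[np.argmin(np.abs(a - 0.5))]), 2))
```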
Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models
Kougioumtzoglou, Ioannis A.; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.
2016-01-01
Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A ‘stochastic instability’ (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921
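The following toy simulation is not the authors' SI model; it only illustrates, under assumed parameters, the generic point that exponential growth perturbed by random growth and shrinking produces strongly right-skewed, log-normal-like size distributions of the kind such statistical models are fitted to.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_stochastic_growth(n_bedforms=10_000, n_steps=500, mu=0.002, sigma=0.05, h0=1.0):
    """Toy multiplicative random growth/shrinkage of bedform heights.

    Each step multiplies the height by exp(mu + sigma*xi): exponential growth
    perturbed by noise. This is only a cartoon of the stochastic-instability
    idea, with arbitrary parameter values."""
    log_h = np.full(n_bedforms, np.log(h0))
    for _ in range(n_steps):
        log_h += mu + sigma * rng.standard_normal(n_bedforms)
    return np.exp(log_h)

heights = toy_stochastic_growth()
print("median:", round(float(np.median(heights)), 2),
      "mean:", round(float(np.mean(heights)), 2))  # mean > median: right-skewed sizes
```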
Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph; ...
2017-07-04
Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL that are based on carbon optimisation principles are challenging to assess quantitatively because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and abiotic environment.
Simulation of energy buildups in solid-state regenerative amplifiers for 2-μm emitting lasers
NASA Astrophysics Data System (ADS)
Springer, Ramon; Alexeev, Ilya; Heberle, Johannes; Pflaum, Christoph
2018-02-01
A numerical model for solid-state regenerative amplifiers is presented, which is able to precisely simulate the quantitative energy buildup of stretched femtosecond pulses over successive roundtrips in the cavity. In detail, this model is experimentally validated with a Ti:Sapphire regenerative amplifier. Additionally, the simulation of a Ho:YAG-based regenerative amplifier is conducted and compared to experimental data from the literature. Furthermore, a bifurcation study of the investigated Ho:YAG system is performed, which leads to the identification of stable and unstable operation regimes. The presented numerical model shows good agreement with the experimental results from the Ti:Sapphire regenerative amplifier. Also, the gained pulse energy from the Ho:YAG system could be approximated closely, while the remaining mismatch is explained by the monochromatic calculation of pulse amplification. Since the model is applicable to other solid-state gain media, it allows for the efficient design of future amplification systems based on regenerative amplification.
Phase Structure of Strong-Field Tunneling Wave Packets from Molecules.
Liu, Ming-Ming; Li, Min; Wu, Chengyin; Gong, Qihuang; Staudte, André; Liu, Yunquan
2016-04-22
We study the phase structure of the tunneling wave packets from strong-field ionization of molecules and present a molecular quantum-trajectory Monte Carlo model to describe the laser-driven dynamics of photoelectron momentum distributions of molecules. Using our model, we reproduce and explain the alignment-dependent molecular frame photoelectron spectra of strong-field tunneling ionization of N_{2} reported by M. Meckel et al. [Nat. Phys. 10, 594 (2014)]. In addition to modeling the low-energy photoelectron angular distributions quantitatively, we extract the phase structure of strong-field molecular tunneling wave packets, shedding light on its physical origin. The initial phase of the tunneling wave packets at the tunnel exit depends on both the initial transverse momentum distribution and the molecular internuclear distance. We further show that the ionizing molecular orbital has a critical effect on the initial phase of the tunneling wave packets. The phase structure of the photoelectron wave packet is a key ingredient for modeling strong-field molecular photoelectron holography, high-harmonic generation, and molecular orbital imaging.
NASA Astrophysics Data System (ADS)
Arai, Shun; Nishizawa, Atsushi
2018-05-01
Gravitational waves (GWs) are generally affected by modifications of the gravity theory during propagation over cosmological distances. We numerically perform a quantitative analysis of Horndeski theory at cosmological scales to constrain it with GW observations in a model-independent way. We formulate a parametrization for a numerical simulation based on the Monte Carlo method and obtain the classification of the models that agree with cosmic accelerating expansion within the observational errors of the Hubble parameter. As a result, we find that a large group of models in Horndeski theory that mimic the cosmic expansion of the ΛCDM model can be excluded by the simultaneous detection of a GW and its electromagnetic transient counterpart. Based on our result and the latest detection of GW170817 and GRB170817A, we conclude that the subclass of Horndeski theory including arbitrary functions G4 and G5 can hardly explain cosmic accelerating expansion without fine-tuning.
Muller, Erik B; Nisbet, Roger M
2014-06-01
Ocean acidification is likely to impact the calcification potential of marine organisms. In part due to the covarying nature of the ocean carbonate system components, including pH, CO2, and CO3(2-) levels, it remains largely unclear how each of these components may affect calcification rates quantitatively. We develop a process-based bioenergetic model that explains how several components of the ocean carbonate system collectively affect growth and calcification rates in Emiliania huxleyi, which plays a major role in marine primary production and biogeochemical carbon cycling. The model predicts that under the IPCC A2 emission scenario, its growth and calcification potential will have decreased by the end of the century, although those reductions are relatively modest. We anticipate that our model will be relevant for many other marine calcifying organisms, and that it can be used to improve our understanding of the impact of climate change on marine systems. © 2014 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, R.; Ciesla, F. J., E-mail: trappitsch@uchicago.edu
2015-05-20
Solar cosmic-ray (SCR) interactions with a protoplanetary disk have been invoked to explain several observations of primitive planetary materials. In our own Solar System, the presence of short-lived radionuclides (SLRs) in the oldest materials has been attributed to spallation reactions induced in phases that were irradiated by energetic particles in the solar nebula. Furthermore, observations of other protoplanetary disks show a mixture of crystalline and amorphous grains, though no correlation between grain crystallinity and disk or stellar properties has been identified. As most models for the origin of crystalline grains would predict such correlations, it was suggested that amorphization by stellar cosmic-rays may be masking or erasing such correlations. Here we quantitatively investigate these possibilities by modeling the interaction of energetic particles emitted by a young star with the surrounding protoplanetary disk. We do this by tracing the energy evolution of SCRs emitted from the young star through the disk and modeling the amount of time that dust grains would spend in regions where they would be exposed to these particles. We find that this irradiation scenario cannot explain the total SLR content of the solar nebula; however, this scenario could play a role in the amorphization of crystalline material at different locations or epochs of the disk over the course of its evolution.
NASA Astrophysics Data System (ADS)
Xue, Liang; Alemu, Tadesse; Gani, Nahid D.; Abdelsalam, Mohamed G.
2018-05-01
We use morphotectonic analysis to study the tectonic uplift history of the southeastern Ethiopian Plateau (SEEP). Based on studies conducted on the Northwestern Ethiopian Plateau, steady-state and pulsed tectonic uplift models were proposed to explain the growth of the plateau since 30 Ma. We test these two models for the largely unknown SEEP. We present the first quantitative morphotectonic study of the SEEP. First, in order to infer the spatial distribution of the tectonic uplift rates, we extract geomorphic proxies including normalized steepness index ksn, hypsometric integral HI, and chi integral χ from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM). Second, we compare these rates with the thickness of flood basalt that we estimated from geological maps. Third, to constrain the timing of regional tectonic uplift, we develop a knickpoint celerity model. Fourth, we compare our results to those from the Northwestern Ethiopian Plateau to suggest a possible mechanism to explain regional tectonic uplift of the entire Ethiopian Plateau. We find an increase in tectonic uplift rates from the southeastern escarpments of the Afar Depression in the northeast to that of the Main Ethiopian Rift to the southwest. We identify three regional tectonic uplift events at 11.7, 6.5, and 4.5 Ma recorded by the development of regionally distributed knickpoints. This is in good agreement with ages of tectonic uplift events reported from the Northwestern Ethiopian Plateau.
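For readers unfamiliar with the geomorphic proxies, the normalized steepness index is conventionally computed from local channel slope S and upstream drainage area A as ksn = S * A^theta_ref, with a reference concavity theta_ref near 0.45. The snippet below is a generic illustration of that formula, not the authors' DEM-processing chain.

```python
import numpy as np

def normalized_steepness(slope, drainage_area, theta_ref=0.45):
    """Normalized channel steepness index ksn = S * A**theta_ref.

    slope: local channel gradient (m/m); drainage_area: upstream area (m^2);
    theta_ref = 0.45 is a commonly used reference concavity."""
    return np.asarray(slope) * np.asarray(drainage_area) ** theta_ref

# Example: two channel pixels with the same slope but different drainage areas
print(normalized_steepness([0.02, 0.02], [1.0e6, 1.0e8]))
```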
Tomka, Tomas; Iber, Dagmar; Boareto, Marcelo
2018-04-24
The sculpturing of the vertebrate body plan into segments begins with the sequential formation of somites in the presomitic mesoderm (PSM). The rhythmicity of this process is controlled by travelling waves of gene expression. These kinetic waves emerge from coupled cellular oscillators and sweep across the PSM. In zebrafish, the oscillations are driven by autorepression of her genes and are synchronized via Notch signalling. Mathematical modelling has played an important role in explaining how collective properties emerge from the molecular interactions. Increasingly more quantitative experimental data permits the validation of those mathematical models, yet leads to increasingly more complex model formulations that hamper an intuitive understanding of the underlying mechanisms. Here, we review previous efforts, and design a mechanistic model of the her1 oscillator, which represents the experimentally viable her7;hes6 double mutant. This genetically simplified system is ideally suited to conceptually recapitulate oscillatory entrainment and travelling wave formation, and to highlight open questions. It shows that three key parameters, the autorepression delay, the juxtacrine coupling delay, and the coupling strength, are sufficient to understand the emergence of the collective period, the collective amplitude, and the synchronization of neighbouring Her1 oscillators. Moreover, two spatiotemporal time delay gradients, in the autorepression and in the juxtacrine signalling, are required to explain the collective oscillatory dynamics and synchrony of PSM cells. The highlighted developmental principles likely apply more generally to other developmental processes, including neurogenesis and angiogenesis. Copyright © 2018. Published by Elsevier Ltd.
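As a conceptual companion to the review, the sketch below simulates a single-cell delayed-autorepression (Lewis-type) oscillator: her1 mRNA production is repressed by the Her1 protein level a delay tau earlier. Parameter values are illustrative, only roughly in the range used for zebrafish segmentation-clock models; the model discussed above additionally includes a juxtacrine (Notch) coupling delay and coupling strength between neighbouring cells, which are omitted here.

```python
import numpy as np

def her1_delay_oscillator(t_end=600.0, dt=0.01, tau=12.0,
                          k=33.0, p0=40.0, h=2, a=4.5, b=0.23, c=0.23):
    """Single-cell delayed-autorepression oscillator (Lewis-type sketch).

    m: her1 mRNA, p: Her1 protein; mRNA production is repressed by the protein
    level a delay tau (minutes) earlier. Parameter values are illustrative.
    With sufficient delay and nonlinearity the protein trace oscillates."""
    n = int(t_end / dt)
    delay_steps = int(tau / dt)
    m = np.zeros(n); p = np.zeros(n)
    for i in range(1, n):
        p_delayed = p[i - 1 - delay_steps] if i - 1 >= delay_steps else 0.0
        dm = k / (1.0 + (p_delayed / p0) ** h) - c * m[i - 1]   # repressed synthesis, decay
        dp = a * m[i - 1] - b * p[i - 1]                        # translation, decay
        m[i] = m[i - 1] + dt * dm
        p[i] = p[i - 1] + dt * dp
    return np.arange(n) * dt, m, p

t, m, p = her1_delay_oscillator()
```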
Reconciling fisheries catch and ocean productivity
Stock, Charles A.; Asch, Rebecca G.; Cheung, William W. L.; Dunne, John P.; Friedland, Kevin D.; Lam, Vicky W. Y.; Sarmiento, Jorge L.; Watson, Reg A.
2017-01-01
Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change. PMID:28115722
Reconciling fisheries catch and ocean productivity.
Stock, Charles A; John, Jasmin G; Rykaczewski, Ryan R; Asch, Rebecca G; Cheung, William W L; Dunne, John P; Friedland, Kevin D; Lam, Vicky W Y; Sarmiento, Jorge L; Watson, Reg A
2017-02-21
Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change.
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.
Cycle frequency in standard Rock-Paper-Scissors games: Evidence from experimental economics
NASA Astrophysics Data System (ADS)
Xu, Bin; Zhou, Hai-Jun; Wang, Zhijian
2013-10-01
The Rock-Paper-Scissors (RPS) game is a widely used model system in game theory. Evolutionary game theory predicts the existence of persistent cycles in the evolutionary trajectories of the RPS game, but experimental evidence has remained rather weak. In this work, we performed laboratory experiments on the RPS game and analyzed the social-state evolutionary trajectories of twelve populations of N=6 players. We found strong evidence supporting the existence of persistent cycles. The mean cycling frequency was measured to be 0.029±0.009 periods per experimental round. Our experimental observations can be quantitatively explained by a simple non-equilibrium model, namely the discrete-time logit dynamical process with a noise parameter. Our work therefore favors the evolutionary game theory over the classical game theory for describing the dynamical behavior of the RPS game.
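A minimal sketch of a discrete-time logit update for a small RPS population is given below; the payoff convention, noise value, and one-player-per-round update scheme are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Payoff of row strategy against column strategy for standard Rock-Paper-Scissors
# (win = 1, tie = 0.5, lose = 0; an illustrative convention).
PAYOFF = np.array([[0.5, 0.0, 1.0],   # Rock     vs R, P, S
                   [1.0, 0.5, 0.0],   # Paper    vs R, P, S
                   [0.0, 1.0, 0.5]])  # Scissors vs R, P, S

def logit_rps(n_players=6, n_rounds=10_000, noise=0.5):
    """Discrete-time logit dynamics for an RPS population (illustrative sketch).

    Each round one randomly chosen player switches to strategy s with
    probability proportional to exp(expected payoff of s / noise).
    Returns the social state (counts of R, P, S) after every round."""
    strategies = rng.integers(0, 3, n_players)
    states = np.zeros((n_rounds, 3), dtype=int)
    for t in range(n_rounds):
        i = rng.integers(n_players)
        others = np.bincount(np.delete(strategies, i), minlength=3)
        expected = PAYOFF @ (others / (n_players - 1))   # payoff of each strategy vs the rest
        w = np.exp(expected / noise)
        strategies[i] = rng.choice(3, p=w / w.sum())
        states[t] = np.bincount(strategies, minlength=3)
    return states

states = logit_rps()   # social-state trajectory; cycling can be analysed from it
```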
Communication: On the origin of the non-Arrhenius behavior in water reorientation dynamics.
Stirnemann, Guillaume; Laage, Damien
2012-07-21
We combine molecular dynamics simulations and analytic modeling to determine the origin of the non-Arrhenius temperature dependence of liquid water's reorientation and hydrogen-bond dynamics between 235 K and 350 K. We present a quantitative model connecting hydrogen-bond exchange dynamics to local structural fluctuations, measured by the asphericity of Voronoi cells associated with each water molecule. For a fixed local structure the regular Arrhenius behavior is recovered, and the global anomalous temperature dependence is demonstrated to essentially result from a continuous shift in the unimodal structure distribution upon cooling. The non-Arrhenius behavior can thus be explained without invoking an equilibrium between distinct structures. In addition, the large width of the homogeneous structural distribution is shown to cause a growing dynamical heterogeneity and a non-exponential relaxation at low temperature.
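The local structure measure used here, the asphericity of a molecule's Voronoi cell, can be computed from molecular positions as A^3/(36*pi*V^2), which equals 1 for a sphere. The sketch below shows one way to do this with SciPy for a generic 3D point set; it skips unbounded edge cells and does not handle the periodic boundaries a real simulation analysis would use.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_asphericities(points):
    """Asphericity A^3 / (36*pi*V^2) of each bounded 3D Voronoi cell.

    Equals 1 for a sphere and grows as the cell becomes less spherical.
    Unbounded cells at the edge of the point set are skipped."""
    vor = Voronoi(points)
    eta = {}
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue
        hull = ConvexHull(vor.vertices[region])    # the Voronoi cell is convex
        eta[i] = hull.area ** 3 / (36.0 * np.pi * hull.volume ** 2)
    return eta

rng = np.random.default_rng(2)
eta = voronoi_asphericities(rng.random((500, 3)))  # e.g. oxygen positions (nm)
print(len(eta), "bounded cells, mean asphericity",
      round(float(np.mean(list(eta.values()))), 2))
```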
Topological structure prediction in binary nanoparticle superlattices
Travesset, A.
2017-04-27
Systems of spherical nanoparticles with capping ligands have been shown to self-assemble into beautiful superlattices of fascinating structure and complexity. Here, I show that the spherical geometry of the nanoparticle imposes constraints on the nature of the topological defects associated with the capping ligand and that such topological defects control the structure and stability of the superlattices that can be assembled. Furthermore, all of these considerations form the basis for the orbifold topological model (OTM) described in this paper. Finally, the model quantitatively predicts the structure of superlattices whose capping ligands are hydrocarbon chains, in excellent agreement with experimental results; it explains the appearance of low-packing-fraction lattices as equilibrium structures, why certain similar structures are more stable (bcc-AB6 vs. CaB6, AuCu vs. CsCl, etc.), and many other experimental observations.
Charge transport in electrically doped amorphous organic semiconductors.
Yoo, Seung-Jun; Kim, Jang-Joo
2015-06-01
This article reviews recent progress on charge generation by doping and its influence on the carrier mobility in organic semiconductors (OSs). The doping-induced charge generation efficiency is generally low in OSs, which has been explained by the integer charge transfer model and the hybrid charge transfer model. The ionized dopants formed by charge transfer between hosts and dopants can act as Coulomb traps for mobile charges, and the presence of Coulomb traps in OSs broadens the density of states (DOS) in doped organic films. The Coulomb traps strongly reduce the carrier hopping rate and thereby change the carrier mobility, as confirmed by experiments in recent years. In order to fully understand the doping mechanism in OSs, further quantitative and systematic analyses of charge transport characteristics must be carried out. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[What can we expect from clinical trials in psychiatry?]
Marsot, A; Boucherie, Q; Kheloufi, F; Riff, C; Braunstein, D; Dupouey, J; Guilhaumou, R; Zendjidjian, X; Bonin-Guillaume, S; Fakra, E; Guye, M; Jirsa, V; Azorin, J-M; Belzeaux, R; Adida, M; Micallef, J; Blin, O
2016-12-01
Clinical trials in psychiatry make it possible to build regulatory dossiers for market authorization, but also to document the mechanism of action of new drugs, to build pharmacodynamic models, to evaluate treatment effects, to propose prognostic, efficacy or tolerability biomarkers and, altogether, to assess the impact of drugs for patients, caregivers and society. However, clinical trials have shown some limitations. A number of recent dossiers have failed to convince the regulators. The clinical and biological heterogeneity of psychiatric disorders, the pharmacokinetic and pharmacodynamic properties of the compounds, and the lack of translatable biomarkers possibly explain these difficulties. Several breakthrough options are now available: quantitative systems pharmacology analysis of the variability of drug effects, pharmacometrics and pharmacoepidemiology, Big Data analysis, and brain modelling. In addition to more classical approaches, these opportunities lead to a paradigm change for clinical trials in psychiatry. © L'Encéphale, Paris, 2016.
Põder, Endel
2011-02-16
Dot lattices are very simple multi-stable images where the dots can be perceived as being grouped in different ways. The probabilities of grouping along different orientations as dependent on inter-dot distances along these orientations can be predicted by a simple quantitative model. L. Bleumers, P. De Graef, K. Verfaillie, and J. Wagemans (2008) found that for peripheral presentation, this model should be combined with random guesses on a proportion of trials. The present study shows that the probability of random responses decreases with decreasing ambiguity of lattices and is different for bi-stable and tri-stable lattices. With central presentation, similar effects can be produced by adding positional noise to the dots. The results suggest that different levels of internal positional noise might explain the differences between peripheral and central proximity grouping.
The Evolution of the Intergalactic Medium
NASA Astrophysics Data System (ADS)
McQuinn, Matthew
2016-09-01
The bulk of cosmic matter resides in a dilute reservoir that fills the space between galaxies, the intergalactic medium (IGM). The history of this reservoir is intimately tied to the cosmic histories of structure formation, star formation, and supermassive black hole accretion. Our models for the IGM at intermediate redshifts (2≲z≲5) are a tremendous success, quantitatively explaining the statistics of Lyα absorption of intergalactic hydrogen. However, at both lower and higher redshifts (and around galaxies) much is still unknown about the IGM. We review the theoretical models and measurements that form the basis for the modern understanding of the IGM, and we discuss unsolved puzzles (ranging from the largely unconstrained process of reionization at high z to the missing baryon problem at low z), highlighting the efforts that have the potential to solve them.
Piezoelectric Ceramics and Their Applications
ERIC Educational Resources Information Center
Flinn, I.
1975-01-01
Describes the piezoelectric effect in ceramics and presents a quantitative representation of this effect. Explains the processes involved in the manufacture of piezoelectric ceramics, the materials used, and the situations in which they are applied. (GS)
Bischof, Sylvain; Umhang, Martin; Eicke, Simona; Streb, Sebastian; Qi, Weihong; Zeeman, Samuel C.
2013-01-01
The branched glucans glycogen and starch are the most widespread storage carbohydrates in living organisms. The production of semicrystalline starch granules in plants is more complex than that of small, soluble glycogen particles in microbes and animals. However, the factors determining whether glycogen or starch is formed are not fully understood. The tropical tree Cecropia peltata is a rare example of an organism able to make either polymer type. Electron micrographs and quantitative measurements show that glycogen accumulates to very high levels in specialized myrmecophytic structures (Müllerian bodies), whereas starch accumulates in leaves. Compared with polymers comprising leaf starch, glycogen is more highly branched and has shorter branches—factors that prevent crystallization and explain its solubility. RNA sequencing and quantitative shotgun proteomics reveal that isoforms of all three classes of glucan biosynthetic enzyme (starch/glycogen synthases, branching enzymes, and debranching enzymes) are differentially expressed in Müllerian bodies and leaves, providing a system-wide view of the quantitative programming of storage carbohydrate metabolism. This work will prompt targeted analysis in model organisms and cross-species comparisons. Finally, as starch is the major carbohydrate used for food and industrial applications worldwide, these data provide a basis for manipulating starch biosynthesis in crops to synthesize tailor-made polyglucans. PMID:23632447
Patterns in the English language: phonological networks, percolation and assembly models
NASA Astrophysics Data System (ADS)
Stella, Massimo; Brede, Markus
2015-05-01
In this paper we provide a quantitative framework for the study of phonological networks (PNs) for the English language by carrying out principled comparisons to null models, either based on site percolation, randomization techniques, or network growth models. In contrast to previous work, we mainly focus on null models that reproduce lower order characteristics of the empirical data. We find that artificial networks matching connectivity properties of the English PN are exceedingly rare: this leads to the hypothesis that the word repertoire might have been assembled over time by preferentially introducing new words which are small modifications of old words. Our null models are able to explain the ‘power-law-like’ part of the degree distributions and generally retrieve qualitative features of the PN such as high clustering, high assortativity coefficient and small-world characteristics. However, the detailed comparison to expectations from null models also points out significant differences, suggesting the presence of additional constraints in word assembly. Key constraints we identify are the avoidance of large degrees, the avoidance of triadic closure and the avoidance of large non-percolating clusters.
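A typical randomization-based null model of the kind described preserves lower-order properties such as the degree sequence while destroying higher-order structure. The sketch below shows this workflow with NetworkX on a stand-in random graph; the swap budget and the example graph are arbitrary choices, and a real analysis would start from the empirical phonological network (words as nodes, single-phoneme edits as edges).

```python
import networkx as nx

def randomized_null_model(G, swaps_per_edge=10, seed=0):
    """Degree-preserving null model: rewire G by repeated double-edge swaps.

    Keeps every node's degree fixed while scrambling higher-order structure,
    so clustering, assortativity and small-world features of the original
    network can be compared against the randomized version."""
    H = G.copy()
    n_swaps = swaps_per_edge * H.number_of_edges()
    nx.double_edge_swap(H, nswap=n_swaps, max_tries=100 * n_swaps, seed=seed)
    return H

# Usage sketch with a stand-in graph:
G = nx.erdos_renyi_graph(500, 0.02, seed=1)
H = randomized_null_model(G)
print(nx.average_clustering(G), nx.average_clustering(H))
print(nx.degree_assortativity_coefficient(G), nx.degree_assortativity_coefficient(H))
```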
Modelling of Dictyostelium discoideum movement in a linear gradient of chemoattractant.
Eidi, Zahra; Mohammad-Rafiee, Farshid; Khorrami, Mohammad; Gholami, Azam
2017-11-15
Chemotaxis is a ubiquitous biological phenomenon in which cells detect a spatial gradient of chemoattractant, and then move towards the source. Here we present a position-dependent advection-diffusion model that quantitatively describes the statistical features of the chemotactic motion of the social amoeba Dictyostelium discoideum in a linear gradient of cAMP (cyclic adenosine monophosphate). We fit the model to experimental trajectories that are recorded in a microfluidic setup with stationary cAMP gradients and extract the diffusion and drift coefficients in the gradient direction. Our analysis shows that for the majority of gradients, both coefficients decrease over time and become negative as the cells crawl up the gradient. The extracted model parameters also show that besides the expected drift in the direction of the chemoattractant gradient, we observe a nonlinear dependency of the corresponding variance on time, which can be explained by the model. Furthermore, the results of the model show that the non-linear term in the mean squared displacement of the cell trajectories can dominate the linear term on large time scales.
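A minimal version of the coefficient-extraction step, assuming constant drift and diffusion (the study itself allows them to vary with position and time), estimates the drift from the mean displacement per frame and the diffusion coefficient from half the displacement variance per unit time:

```python
import numpy as np

def drift_and_diffusion(x, dt):
    """Estimate drift v and diffusion D along the gradient direction.

    x: array of shape (n_cells, n_frames) with positions along the cAMP
    gradient; dt: time between frames. For an advection-diffusion picture,
    v = mean displacement / dt and D = variance of displacements / (2*dt)."""
    dx = np.diff(x, axis=1).ravel()
    v = dx.mean() / dt
    D = dx.var(ddof=1) / (2.0 * dt)
    return v, D

# Synthetic check: simulate trajectories with known v and D and recover them.
rng = np.random.default_rng(3)
v_true, D_true, dt = 0.1, 0.5, 1.0            # e.g. um/min, um^2/min, min (illustrative)
steps = v_true * dt + np.sqrt(2 * D_true * dt) * rng.standard_normal((200, 300))
x = np.cumsum(np.hstack([np.zeros((200, 1)), steps]), axis=1)
print(drift_and_diffusion(x, dt))             # should be close to (0.1, 0.5)
```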
Dynamic Morphologies and Stability of Droplet Interface Bilayers
NASA Astrophysics Data System (ADS)
Guiselin, Benjamin; Law, Jack O.; Chakrabarti, Buddhapriya; Kusumaatmaja, Halim
2018-06-01
We develop a theoretical framework for understanding dynamic morphologies and stability of droplet interface bilayers (DIBs), accounting for lipid kinetics in the monolayers and bilayer, and droplet evaporation due to imbalance between osmotic and Laplace pressures. Our theory quantitatively describes distinct pathways observed in experiments when DIBs become unstable. We find that when the timescale for lipid desorption is slow compared to droplet evaporation, the lipid bilayer will grow and the droplets approach a hemispherical shape. In contrast, when lipid desorption is fast, the bilayer area will shrink and the droplets eventually detach. Our model also suggests there is a critical size below which DIBs can become unstable, which may explain experimental difficulties in miniaturizing the DIB platform.
Direction-dependent stability of skyrmion lattice in helimagnets induced by exchange anisotropy
NASA Astrophysics Data System (ADS)
Hu, Yangfan
2018-06-01
Exchange anisotropy provides a direction dependent mechanism for the stability of the skyrmion lattice phase in noncentrosymmetric bulk chiral magnets. Based on the Fourier representation of the skyrmion lattice, we explain the direction dependence of the temperature-magnetic field phase diagram for bulk MnSi through a phenomenological mean-field model incorporating exchange anisotropy. Through quantitative comparison with experimental results, we clarify that the stability of the skyrmion lattice phase in bulk MnSi is determined by a combined effect of negative exchange anisotropy and thermal fluctuation. The effect of exchange anisotropy and the order of Fourier representation on the equilibrium properties of the skyrmion lattice is discussed in detail.
Bethge, Anja; Schumacher, Udo
2017-01-01
Background Tumor vasculature is critical for tumor growth, the formation of distant metastases and the efficiency of radio- and chemotherapy treatments. However, how the vasculature itself is affected during cancer treatment with regard to metastatic behavior has not been thoroughly investigated. Therefore, the aim of this study was to analyze the influence of hypofractionated radiotherapy and cisplatin chemotherapy on vessel tree geometry and metastasis formation in a small cell lung cancer xenograft mouse tumor model, to investigate the spread of malignant cells under different treatment modalities. Methods The biological data gained during these experiments were fed into our previously developed computer model “Cancer and Treatment Simulation Tool” (CaTSiT) to model the growth of the primary tumor, its metastatic deposits and the influence of different therapies. Furthermore, we performed quantitative histology analyses to verify our predictions in the xenograft mouse tumor model. Results According to the computer simulation, the number of cells engrafting must vary considerably to explain the different weights of the primary tumor at the end of the experiment. Once a primary tumor is established, the fractal dimension of its vasculature correlates with the tumor size. Furthermore, the fractal dimension of the tumor vasculature changes during treatment, indicating that the therapy affects the blood vessels’ geometry. We corroborated these findings with a quantitative histological analysis showing that the blood vessel density is depleted during radiotherapy and cisplatin chemotherapy. The CaTSiT computer model reveals that chemotherapy influences the tumor’s therapeutic susceptibility and its metastatic spreading behavior. Conclusion Using a systems biology approach in combination with xenograft models and computer simulations revealed that chemotherapy and radiation therapy determine the spreading behavior by changing the blood vessel geometry of the primary tumor. PMID:29107953
Qu, Yanfei; Ma, Yongwen; Wan, Jinquan; Wang, Yan
2018-06-01
The silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds are vital parameters for applying silicone oil as a non-aqueous-phase liquid in partitioning bioreactors. Due to the limited number of K_SiO/A values determined by experiment for hydrophobic compounds, there is an urgent need to model the K_SiO/A values for unknown chemicals. In the present study, we developed a universal quantitative structure-activity relationship (QSAR) model using a sequential approach with macro-constitutional and micromolecular descriptors for the silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds with large structural variance. The geometry optimization and vibrational frequencies of each chemical were calculated using hybrid density functional theory at the B3LYP/6-311G** level. Several quantum chemical parameters that reflect various intermolecular interactions as well as hydrophobicity were selected to develop the QSAR model. The result indicates that a regression model derived from logK_SiO/A, the number of non-hydrogen atoms (#nonHatoms) and the energy gap between E_LUMO and E_HOMO (E_LUMO - E_HOMO) could explain the partitioning mechanism of hydrophobic compounds between silicone oil and air. The correlation coefficient R^2 of the model is 0.922, and the internal and external validation coefficients, Q^2_LOO and Q^2_ext, are 0.91 and 0.89 respectively, implying that the model has satisfactory goodness-of-fit, robustness, and predictive ability and thus provides a robust predictive tool to estimate the logK_SiO/A values for chemicals within its applicability domain. The applicability domain of the model was visualized by the Williams plot.
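The regression at the core of the model is an ordinary least-squares fit of logK_SiO/A on #nonHatoms and the E_LUMO - E_HOMO gap. The sketch below reproduces that workflow on synthetic placeholder data; the coefficients and descriptor values are invented and are not those of the published model.

```python
import numpy as np

def fit_qsar(log_k, n_nonh, gap):
    """OLS fit: logK_SiO/A ~ b0 + b1 * #nonHatoms + b2 * (E_LUMO - E_HOMO).

    Inputs are 1-D arrays over the training chemicals."""
    X = np.column_stack([np.ones_like(n_nonh), n_nonh, gap])
    beta, *_ = np.linalg.lstsq(X, log_k, rcond=None)
    pred = X @ beta
    r2 = 1.0 - np.sum((log_k - pred) ** 2) / np.sum((log_k - log_k.mean()) ** 2)
    return beta, r2

# Synthetic illustration of the workflow (descriptor values are made up):
rng = np.random.default_rng(4)
n_nonh = rng.integers(5, 30, 60).astype(float)        # number of non-hydrogen atoms
gap = rng.uniform(4.0, 10.0, 60)                       # E_LUMO - E_HOMO (eV)
log_k = 0.5 + 0.18 * n_nonh - 0.22 * gap + rng.normal(0, 0.2, 60)
beta, r2 = fit_qsar(log_k, n_nonh, gap)
print(np.round(beta, 2), round(float(r2), 3))
```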
Bertaux, François; Stoma, Szymon; Drasdo, Dirk; Batt, Gregory
2014-01-01
Isogenic cells sensing identical external signals can take markedly different decisions. Such decisions often correlate with pre-existing cell-to-cell differences in protein levels. When not neglected in signal transduction models, these differences are accounted for in a static manner, by assuming randomly distributed initial protein levels. However, this approach ignores the a priori non-trivial interplay between signal transduction and the source of this cell-to-cell variability: temporal fluctuations of protein levels in individual cells, driven by noisy synthesis and degradation. Thus, modeling protein fluctuations, rather than their consequences on the initial population heterogeneity, would set the quantitative analysis of signal transduction on firmer grounds. Adopting this dynamical view on cell-to-cell differences amounts to recasting extrinsic variability as intrinsic noise. Here, we propose a generic approach to merge, in a systematic and principled manner, signal transduction models with stochastic protein turnover models. When applied to an established kinetic model of TRAIL-induced apoptosis, our approach markedly increased model prediction capabilities. One obtains a mechanistic explanation of yet-unexplained observations on fractional killing and non-trivial robust predictions of the temporal evolution of cell resistance to TRAIL in HeLa cells. Our results provide an alternative explanation to survival via induction of survival pathways, since no TRAIL-induced regulations are needed, and suggest that the short-lived anti-apoptotic protein Mcl1 exhibits large and rare fluctuations. More generally, our results highlight the importance of accounting for stochastic protein turnover to quantitatively understand signal transduction over extended durations, and imply that fluctuations of short-lived proteins deserve particular attention. PMID:25340343
The long-term evolution of multilocus traits under frequency-dependent disruptive selection.
van Doorn, G Sander; Dieckmann, Ulf
2006-11-01
Frequency-dependent disruptive selection is widely recognized as an important source of genetic variation. Its evolutionary consequences have been extensively studied using phenotypic evolutionary models, based on quantitative genetics, game theory, or adaptive dynamics. However, the genetic assumptions underlying these approaches are highly idealized and, even worse, predict different consequences of frequency-dependent disruptive selection. Population genetic models, by contrast, enable genotypic evolutionary models, but traditionally assume constant fitness values. Only a minority of these models thus addresses frequency-dependent selection, and only a few of these do so in a multilocus context. An inherent limitation of these remaining studies is that they only investigate the short-term maintenance of genetic variation. Consequently, the long-term evolution of multilocus characters under frequency-dependent disruptive selection remains poorly understood. We aim to bridge this gap between phenotypic and genotypic models by studying a multilocus version of Levene's soft-selection model. Individual-based simulations and deterministic approximations based on adaptive dynamics theory provide insights into the underlying evolutionary dynamics. Our analysis uncovers a general pattern of polymorphism formation and collapse, likely to apply to a wide variety of genetic systems: after convergence to a fitness minimum and the subsequent establishment of genetic polymorphism at multiple loci, genetic variation becomes increasingly concentrated on a few loci, until eventually only a single polymorphic locus remains. This evolutionary process combines features observed in quantitative genetics and adaptive dynamics models, and it can be explained as a consequence of changes in the selection regime that are inherent to frequency-dependent disruptive selection. Our findings demonstrate that the potential of frequency-dependent disruptive selection to maintain polygenic variation is considerably smaller than previously expected.
van Boven, Michiel; van de Kassteele, Jan; Korndewal, Marjolein J; van Dorp, Christiaan H; Kretzschmar, Mirjam; van der Klis, Fiona; de Melker, Hester E; Vossen, Ann C; van Baarle, Debbie
2017-09-01
Human cytomegalovirus (CMV) is a herpes virus with poorly understood transmission dynamics. Person-to-person transmission is thought to occur primarily through transfer of saliva or urine, but no quantitative estimates are available for the contribution of different infection routes. Using data from a large population-based serological study (n = 5,179), we provide quantitative estimates of key epidemiological parameters, including the transmissibility of primary infection, reactivation, and re-infection. Mixture models are fitted to age- and sex-specific antibody response data from the Netherlands, showing that the data can be described by a model with three distributions of antibody measurements, i.e. uninfected, infected, and infected with increased antibody concentration. Estimates of seroprevalence increase gradually with age, such that at 80 years 73% (95%CrI: 64%-78%) of females and 62% (95%CrI: 55%-68%) of males are infected, while 57% (95%CrI: 47%-67%) of females and 37% (95%CrI: 28%-46%) of males have increased antibody concentration. Merging the statistical analyses with transmission models, we find that models with infectious reactivation (i.e. reactivation that can lead to the virus being transmitted to a novel host) fit the data significantly better than models without infectious reactivation. Estimated reactivation rates increase from low values in children to 2%-4% per year in women older than 50 years. The results advance a hypothesis in which transmission from adults after infectious reactivation is a key driver of transmission. We discuss the implications for control strategies aimed at reducing CMV infection in vulnerable groups.
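The first statistical step described, fitting a three-component mixture to antibody measurements and reading off the infected and infected-with-increased-concentration fractions, can be illustrated with a simplified single-stratum Gaussian mixture on synthetic data (the study itself fits age- and sex-specific models in a Bayesian framework):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic stand-in for log antibody measurements from one age/sex stratum:
# uninfected, infected, and infected-with-increased-concentration components.
x = np.concatenate([rng.normal(-1.0, 0.4, 1500),
                    rng.normal(1.0, 0.5, 900),
                    rng.normal(2.5, 0.4, 600)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(x)
order = np.argsort(gmm.means_.ravel())           # sort components by mean titre
weights = gmm.weights_[order]
print("uninfected / infected / infected-high weights:", np.round(weights, 2))
print("estimated seroprevalence:", round(float(weights[1:].sum()), 2))
```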
NASA Astrophysics Data System (ADS)
Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh
2018-04-01
A coupled atmosphere-ocean-wave model was used to examine mixing in the upper oceanic layers under the influence of the very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the simulated sea surface temperature relative to the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing down to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial-oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. The kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were quantitatively explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of the kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by the inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained by the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of the inertial oscillations.
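For orientation, the near-inertial frequency referred to here is the local Coriolis frequency f = 2*Omega*sin(latitude), so the inertial period near 15°N (an assumed representative latitude for the central Bay of Bengal) is roughly 46 hours:

```python
import numpy as np

def inertial_period_hours(latitude_deg, omega=7.2921e-5):
    """Local inertial period T = 2*pi / f, with Coriolis parameter f = 2*omega*sin(lat)."""
    f = 2.0 * omega * np.sin(np.radians(latitude_deg))
    return 2.0 * np.pi / f / 3600.0

print(round(float(inertial_period_hours(15.0)), 1))  # ~46 h near 15 N
```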
Dietary patterns in pregnancy and birth weight.
Coelho, Natália de Lima Pereira; Cunha, Diana Barbosa; Esteves, Ana Paula Pereira; Lacerda, Elisa Maria de Aquino; Theme Filha, Mariza Miranda
2015-01-01
OBJECTIVE To analyze if dietary patterns during the third gestational trimester are associated with birth weight. METHODS Longitudinal study conducted in the cities of Petropolis and Queimados, Rio de Janeiro (RJ), Southeastern Brazil, between 2007 and 2008. We analyzed data from the first and second follow-up wave of a prospective cohort. Food consumption of 1,298 pregnant women was assessed using a semi-quantitative questionnaire about food frequency. Dietary patterns were obtained by exploratory factor analysis, using the Varimax rotation method. We also applied the multivariate linear regression model to estimate the association between food consumption patterns and birth weight. RESULTS Four patterns of consumption - which explain 36.4% of the variability - were identified and divided as follows: (1) prudent pattern (milk, yogurt, cheese, fruit and fresh-fruit juice, cracker, and chicken/beef/fish/liver), which explained 14.9% of the consumption; (2) traditional pattern, consisting of beans, rice, vegetables, breads, butter/margarine and sugar, which explained 8.8% of the variation in consumption; (3) Western pattern (potato/cassava/yams, macaroni, flour/farofa/grits, pizza/hamburger/deep fried pastries, soft drinks/cool drinks and pork/sausages/egg), which accounts for 6.9% of the variance; and (4) snack pattern (sandwich cookie, salty snacks, chocolate, and chocolate drink mix), which explains 5.7% of the consumption variability. The snack dietary pattern was positively associated with birth weight (β = 56.64; p = 0.04) in pregnant adolescents. CONCLUSIONS For pregnant adolescents, the greater the adherence to snack pattern during pregnancy, the greater the baby's birth weight.
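The analysis pipeline described, exploratory factor analysis with varimax rotation followed by a linear model of birth weight on the pattern scores, can be sketched as below. This is a generic illustration on synthetic data, not the authors' estimation procedure or covariate set; it assumes a scikit-learn version (0.24 or later) that supports rotation="varimax".

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

def dietary_patterns(food_freq, birth_weight, n_patterns=4):
    """Exploratory factor analysis with varimax rotation, then a linear model of
    birth weight on the factor scores.

    food_freq: array (n_women, n_food_items) of standardized intake frequencies;
    birth_weight: array (n_women,) in grams."""
    fa = FactorAnalysis(n_components=n_patterns, rotation="varimax", random_state=0)
    scores = fa.fit_transform(food_freq)          # per-woman adherence to each pattern
    reg = LinearRegression().fit(scores, birth_weight)
    return fa.components_, reg.coef_              # loadings and grams per unit score

# Synthetic usage (random data, for shape only):
rng = np.random.default_rng(6)
loadings, betas = dietary_patterns(rng.standard_normal((1298, 25)),
                                   rng.normal(3200, 450, 1298))
print(loadings.shape, np.round(betas, 1))
```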
An Individualized Approach to Introductory Physics
ERIC Educational Resources Information Center
Rigden, John S.
1970-01-01
Explains individualization of a physics course in terms of organization, testing, and philosophy. Organization of laboratory and lecture is focused on two topics, classical mechanics and relativity theory. The testing consists of quantitative and qualitative questions. (DS)
A new quantitative approach to measure perceived work-related stress in Italian employees.
Cevenini, Gabriele; Fratini, Ilaria; Gambassi, Roberto
2012-09-01
We propose a method for a reliable quantitative measure of subjectively perceived occupational stress applicable in any company to enhance occupational safety and psychosocial health, to enable precise prevention policies and intervention and to improve work quality and efficiency. A suitable questionnaire was telephonically administered to a stratified sample of the whole Italian population of employees. Combined multivariate statistical methods, including principal component, cluster and discriminant analyses, were used to identify risk factors and to design a causal model for understanding work-related stress. The model explained the causal links of stress through employee perception of imbalance between job demands and resources for responding appropriately, by supplying a reliable U-shaped nonlinear stress index, expressed in terms of values of human systolic arterial pressure. Low, intermediate and high values indicated demotivation (or inefficiency), well-being and distress, respectively. Costs for stress-dependent productivity shortcomings were estimated to about 3.7% of national income from employment. The method identified useful structured information able to supply a simple and precise interpretation of employees' well-being and stress risk. Results could be compared with estimated national benchmarks to enable targeted intervention strategies to protect the health and safety of workers, and to reduce unproductive costs for firms.
Quantitative Analysis of Critical Factors for the Climate Impact of Landfill Mining.
Laner, David; Cencic, Oliver; Svensson, Niclas; Krook, Joakim
2016-07-05
Landfill mining has been proposed as an innovative strategy to mitigate environmental risks associated with landfills, to recover secondary raw materials and energy from the deposited waste, and to enable high-valued land uses at the site. The present study quantitatively assesses the importance of specific factors and conditions for the net contribution of landfill mining to global warming using a novel, set-based modeling approach and provides policy recommendations for facilitating the development of projects contributing to global warming mitigation. Building on life-cycle assessment, scenario modeling and sensitivity analysis methods are used to identify critical factors for the climate impact of landfill mining. The net contributions to global warming of the scenarios range from -1550 (saving) to 640 (burden) kg CO2e per Mg of excavated waste. Nearly 90% of the results' total variation can be explained by changes in four factors, namely the landfill gas management in the reference case (i.e., alternative to mining the landfill), the background energy system, the composition of the excavated waste, and the applied waste-to-energy technology. Based on the analyses, circumstances under which landfill mining should be prioritized or not are identified and sensitive parameters for the climate impact assessment of landfill mining are highlighted.
Adzei, Francis A; Atinga, Roger A
2012-01-01
This study seeks to undertake a systematic review to consolidate existing empirical evidence on the impact of financial and non-financial incentives on motivation and retention of health workers in Ghana's district hospitals. The study employed a purely quantitative design with a sample of 285 health workers from ten district hospitals in four regions of Ghana. A stepwise regression model was used in the analysis. The study found that financial incentives significantly influence motivation and intention to remain in the district hospital. Further, of the four factor model of the non-financial incentives, only three (leadership skill and supervision, opportunities for continuing professional development and availability of infrastructure and resources) were predictors of motivation and retention. A major limitation of the study is that the sample of health workers was biased towards nurses (n = 160; 56.1 percent). This is explained by their large presence in remote districts in Ghana. A qualitative approach could enrich the findings by bringing out the many complex views of health workers regarding issues of motivation and retention, since quantitative studies are better applied to establish causal relationships. The findings suggest that appropriate legislations backing salary supplements, commitment-based bonus payments with a set of internal regulations and leadership with sound managerial qualities are required to pursue workforce retention in district hospitals.
Estimated long-term outdoor air pollution concentrations in a cohort study
NASA Astrophysics Data System (ADS)
Beelen, Rob; Hoek, Gerard; Fischer, Paul; Brandt, Piet A. van den; Brunekreef, Bert
Several recent studies associated long-term exposure to air pollution with increased mortality. An ongoing cohort study, the Netherlands Cohort Study on Diet and Cancer (NLCS), was used to study the association between long-term exposure to traffic-related air pollution and mortality. Following on a previous exposure assessment study in the NLCS, we improved the exposure assessment methods. Long-term exposure to nitrogen dioxide (NO2), nitrogen oxide (NO), black smoke (BS), and sulphur dioxide (SO2) was estimated. Exposure at each home address (N = 21,868) was considered as a function of a regional, an urban and a local component. The regional component was estimated using inverse distance weighted interpolation of measurement data from regional background sites in a national monitoring network. Regression models with urban concentrations as dependent variables, and number of inhabitants in different buffers and land use variables, derived with a Geographic Information System (GIS), as predictor variables were used to estimate the urban component. The local component was assessed using a GIS and a digital road network with linked traffic intensities. Traffic intensity on the nearest road and on the nearest major road, and the sum of traffic intensity in a buffer of 100 m around each home address were assessed. Further, a quantitative estimate of the local component was obtained. The regression models to estimate the urban component explained 67%, 46%, 49% and 35% of the variances of NO2, NO, BS, and SO2 concentrations, respectively. Overall regression models which incorporated the regional, urban and local component explained 84%, 44%, 59% and 56% of the variability in concentrations for NO2, NO, BS and SO2, respectively. We were able to develop an exposure assessment model using GIS methods and traffic intensities that explained a large part of the variations in outdoor air pollution concentrations.
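A minimal sketch of the inverse distance weighted interpolation used for the regional background component is given below; the station coordinates, concentrations, and power parameter are hypothetical.

```python
# Minimal inverse-distance-weighting sketch for a regional background component
# (illustrative only; station data, power p=2 and distance units are assumptions,
# not values taken from the NLCS study).
import numpy as np

def idw(xy_stations, values, xy_targets, p=2.0):
    """Interpolate station values to target points by inverse distance weighting."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)               # avoid division by zero at a station
    w = 1.0 / d**p
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # km, hypothetical
no2_regional = np.array([18.0, 22.0, 15.0])                    # ug/m3, hypothetical
homes = np.array([[2.0, 3.0], [7.5, 1.0]])
print(idw(stations, no2_regional, homes))
```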
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethna, J.P.; Krumhansl, J.A.
1994-08-01
We have identified tweed precursors to martensitic phase transformations as a spin glass phase due to composition variations, and used simulations and exact replica theory predictions to predict diffraction peaks and model phase diagrams, and provide real space data for comparison to transmission electron micrograph images. We have used symmetry principles to derive the crack growth laws for mixed-mode brittle fracture, explaining the results for two-dimensional fracture and deriving the growth laws in three dimensions. We have used recent advances in dynamical critical phenomena to study hysteresis in disordered systems, explaining the return-point-memory effect, predicting distributions for Barkhausen noise, and elucidating the transition from athermal to burst behavior in martensites. From a nonlinear lattice-dynamical model of a first-order transition using simulations, finite-size scaling, and transfer matrix methods, it is shown that heterophase transformation precursors cannot occur in a pure homogeneous system, thus emphasizing the role of disorder in real materials. Full integration of nonlinear Landau-Ginzburg continuum theory with experimental neutron-scattering data and first-principles calculations has been carried out to compute semi-quantitative values of the energy and thickness of twin boundaries in InTl and FePd martensites.
NASA Astrophysics Data System (ADS)
Alpert, Peter A.; Knopf, Daniel A.
2016-02-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
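The sketch below illustrates, under stated assumptions, the kind of stochastic freezing simulation described above: each droplet carries its own INP surface area (ISA) and freezes in a time step with probability 1 - exp(-Jhet A dt). The Jhet value and the lognormal ISA spread are invented for illustration and are not the parameters of the published model.

```python
# Sketch of a stochastic immersion-freezing simulation: each droplet has its own
# ISA; in a time step dt its freezing probability is 1 - exp(-Jhet * A * dt).
# Jhet and the lognormal ISA spread are hypothetical values chosen only to
# illustrate non-exponential frozen-fraction curves.
import numpy as np

rng = np.random.default_rng(1)
N_tot = 1000                    # number of droplets
J_het = 1.0e4                   # cm^-2 s^-1 at a fixed (isothermal) temperature
A = rng.lognormal(mean=np.log(1e-5), sigma=1.5, size=N_tot)   # ISA in cm^2

dt, t_end = 1.0, 600.0          # seconds
unfrozen = np.ones(N_tot, dtype=bool)
frozen_fraction = []
for _ in np.arange(0.0, t_end, dt):
    p = 1.0 - np.exp(-J_het * A * dt)
    unfrozen &= rng.random(N_tot) > p        # droplets that survive this step
    frozen_fraction.append(1.0 - unfrozen.mean())

# With identical ISA the curve is exponential; variable ISA flattens its tail.
print(frozen_fraction[-1])
```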
Skovhus, Torben Lund; Eckert, Richard B; Rodrigues, Edgar
2017-08-20
Microbiologically influenced corrosion (MIC) is the terminology applied where the actions of microorganisms influence the corrosion process. In literature, terms such as microbial corrosion, biocorrosion, microbially influenced/induced corrosion, and biodegradation are often applied. MIC research in the oil and gas industry has seen a revolution over the past decade, with the introduction of molecular microbiological methods (MMM) as well as new industry standards and procedures for sampling biofilm and corrosion products from the process system. This review aims to capture the most important trends the oil and gas industry has seen regarding MIC research over the past decade. The paper starts out with an overview of where in the process stream MIC occurs - from the oil reservoir to the consumer. Both biotic and abiotic corrosion mechanisms are explained in the context of managing MIC using a structured corrosion management (CM) approach. The corrosion management approach employs the elements of a management system to ensure that essential corrosion control activities are carried out in an effective, sustainable, well-planned and properly executed manner. The 3-phase corrosion management approach covering both biotic and abiotic internal corrosion mechanisms consists of 1) corrosion assessment, 2) corrosion mitigation and 3) corrosion monitoring. Each of the three phases is described in detail with links to recent field cases, methods, industry standards and sampling protocols. In order to manage the corrosion threat, operators commonly use models to support decision making. The models use qualitative, semi-quantitative or quantitative measures to help assess the rate of degradation caused by MIC. The paper reviews four existing models for MIC Threat Assessment and describes a new model that links the threat of MIC in the oil processing system located on an offshore platform with a Risk Based Inspection (RBI) approach. A recent field case highlights and explains the conflicting historic results obtained through serial dilution of culture media using the most probable number (MPN) method as compared to data obtained from corrosion monitoring and the quantitative polymerase chain reaction (qPCR) method. Results from qPCR application in the field case have changed the way MIC is monitored on the oil production facility in the North Sea. A number of high quality resources have been published as technical conference papers, books, educational videos and peer-reviewed scientific papers, and thus we end the review with an updated list of state-of-the-art resources for anyone desiring to become more familiar with the topic of MIC in the upstream oil and gas sector. Copyright © 2017 Elsevier B.V. All rights reserved.
Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D
2015-08-01
Quantitative computed tomography based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain initiation. Calculation of bone elastic moduli from image data is a basic step when constructing finite element models. However, different relationships between elastic moduli and imaged density (known as density-modulus relationships) have been reported in the literature. The objective of this study was to apply seven different trabecular-specific and two cortical-specific density-modulus relationships from the literature to finite element models of proximal tibia subchondral bone, and identify the relationship(s) that best predicted experimentally measured local subchondral structural stiffness with highest explained variance and least error. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using published density-modulus relationships and mapped to corresponding finite element models. Proximal tibial structural stiffness values were compared to experimentally measured stiffness values from in-situ macro-indentation testing directly on the subchondral bone surface (47 indentation points). Regression lines between experimentally measured and finite element calculated stiffness had R(2) values ranging from 0.56 to 0.77. Normalized root mean squared error varied from 16.6% to 337.6%. Of the 21 evaluated density-modulus relationships in this study, Goulet combined with Snyder and Schneider or Rho appeared most appropriate for finite element modeling of local subchondral bone structural stiffness. Though, further studies are needed to optimize density-modulus relationships and improve finite element estimates of local subchondral bone structural stiffness. Copyright © 2015 Elsevier Ltd. All rights reserved.
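A schematic sketch of the density-to-modulus mapping and the agreement metrics follows; the power-law coefficients are placeholders rather than the Goulet, Snyder and Schneider, or Rho relationships, and the data arrays are hypothetical.

```python
# Sketch of the density-to-modulus mapping step and simple agreement metrics.
# The power-law coefficients below are placeholders, NOT the published
# relationships evaluated in the paper; all data values are hypothetical.
import numpy as np

def density_to_modulus(rho, a=6950.0, b=1.49):
    """Generic power-law density-modulus relationship E = a * rho^b (MPa, g/cm^3)."""
    return a * np.power(rho, b)

rho_elements = np.array([0.25, 0.31, 0.42, 0.18])         # hypothetical BMD values
E = density_to_modulus(rho_elements)                       # moduli mapped to FE elements

fe_stiffness = np.array([310.0, 450.0, 520.0, 260.0])     # hypothetical FE output (N/mm)
measured = np.array([290.0, 480.0, 500.0, 300.0])         # hypothetical indentation data

# R^2 here is computed against the 1:1 line, a simplification of the paper's
# regression-based R^2; NRMSE is normalized by the mean of the measured values.
ss_res = np.sum((measured - fe_stiffness) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
nrmse = np.sqrt(np.mean((measured - fe_stiffness) ** 2)) / measured.mean()
print(E, r2, nrmse)
```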
Effect of viscosity on tear drainage and ocular residence time.
Zhu, Heng; Chauhan, Anuj
2008-08-01
An increase in residence time of dry eye medications including artificial tears will likely enhance therapeutic benefits. The drainage rates and the residence time of eye drops depend on the viscosity of the instilled fluids. However, a quantitative understanding of the dependence of drainage rates and the residence time on viscosity is lacking. The current study aims to develop a mathematical model for the drainage of Newtonian fluids and also for power-law non-Newtonian fluids of different viscosities. This study is an extension of our previous study on the mathematical model of tear drainage. The tear drainage model is modified to describe the drainage of Newtonian fluids with viscosities higher than the tear viscosity and power-law non-Newtonian fluids with rheological parameters obtained from fitting experimental data in literature. The drainage rate through canaliculi was derived from the modified drainage model and was incorporated into a tear mass balance to calculate the transients of total solute quantity in ocular fluids and the bioavailability of instilled drugs. For Newtonian fluids, increasing the viscosity does not affect the drainage rate unless the viscosity exceeds a critical value of about 4.4 cp. The viscosity has a maximum impact on drainage rate around a value of about 100 cp. The trends are similar for shear thinning power law fluids. The transients of total solute quantity, and the residence time agrees at least qualitatively with experimental studies. A mathematical model has been developed for the drainage of Newtonian fluids and power-law fluids through canaliculi. The model can quantitatively explain different experimental observations on the effect of viscosity on the residence of instilled fluids on the ocular surface. The current study is helpful for understanding the mechanism of fluid drainage from the ocular surface and for improving the design of dry eye treatments.
Homeopathic potentization based on nanoscale domains.
Czerlinski, George; Ypma, Tjalling
2011-12-01
The objectives of this study were to present a simple descriptive and quantitative model of how high potencies in homeopathy arise. The model begins with the mechanochemical production of hydrogen and hydroxyl radicals from water and the electronic stabilization of the resulting nanodomains of water molecules. The life of these domains is initially limited to a few days, but may extend to years when the electromagnetic characteristic of a homeopathic agent is copied onto the domains. This information is transferred between the original agent and the nanodomains, and also between previously imprinted nanodomains and new ones. The differential equations previously used to describe these processes are replaced here by exponential expressions, corresponding to simplified model mechanisms. Magnetic stabilization is also involved, since these long-lived domains apparently require the presence of the geomagnetic field. Our model incorporates this factor in the formation of the long-lived compound. Numerical simulation and graphs show that the potentization mechanism can be described quantitatively by a very simplified mechanism. The omitted factors affect only the fine structure of the kinetics. Measurements of pH changes upon absorption of different electromagnetic frequencies indicate that about 400 nanodomains polymerize to form one cooperating unit. Singlet excited states of some compounds lead to dramatic changes in their hydrogen ion dissociation constant, explaining this pH effect and suggesting that homeopathic information is imprinted as higher singlet excited states. A simple description is provided of the process of potentization in homeopathic dilutions. With the exception of minor details, this simple model replicates the results previously obtained from a more complex model. While excited states are short lived in isolated molecules, they become long lived in nanodomains that form coherent cooperative aggregates controlled by the geomagnetic field. These domains either slowly emit biophotons or perform specific biochemical work at their target.
Hendriks, A Jan; Traas, Theo P; Huijbregts, Mark A J
2005-05-01
To protect thousands of species from thousands of chemicals released in the environment, various risk assessment tools have been developed. Here, we link quantitative structure-activity relationships (QSARs) for response concentrations in water (LC50) to critical concentrations in organisms (C50) by a model for accumulation in lipid or non-lipid phases versus water Kpw. The model indicates that affinity for neutral body components such as storage fat yields steep Kpw-Kow relationships, whereas slopes for accumulation in polar phases such as proteins are gentle. This pattern is confirmed by LC50 QSARs for different modes of action, such as neutral versus polar narcotics and organochlorine versus organophosphor insecticides. LC50 QSARs were all between 0.00002 and 0.2Kow(-1). After calibrating the model with the intercepts and, for the first time also, with the slopes of the LC50 QSARs, critical concentrations in organisms C50 are calculated and compared to an independent validation data set. About 60% of the variability in lethal body burdens C50 is explained by the model. Explanations for differences between estimated and measured levels for 11 modes of action are discussed. In particular, relationships between the critical concentrations in organisms C50 and chemical (Kow) or species (lipid content) characteristics are specified and tested. The analysis combines different models proposed before and provides a substantial extension of the data set in comparison to previous work. Moreover, the concept is applied to species (e.g., plants, lean animals) and substances (e.g., specific modes of action) that were scarcely studied quantitatively so far.
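As a hedged illustration of the LC50-to-C50 link, the snippet below uses the textbook simplification in which the organism-water partition coefficient is approximated by the lipid fraction times Kow; it is not the calibrated, mode-of-action-specific model of this study.

```python
# Hedged illustration of linking a water-based LC50 to an internal critical
# concentration C50: assume partitioning into lipid dominates, so
# Kpw ~ f_lipid * Kow and C50 ~ LC50 * Kpw. A textbook simplification only.
def c50_from_lc50(lc50_mmol_per_L, log_kow, lipid_fraction=0.05):
    kow = 10.0 ** log_kow
    kpw = lipid_fraction * kow          # organism-water partition coefficient (L/kg)
    return lc50_mmol_per_L * kpw        # internal concentration in mmol/kg

# Example: a neutral narcotic with log Kow = 4 and LC50 = 0.005 mmol/L
print(c50_from_lc50(0.005, 4.0))        # ~2.5 mmol/kg, roughly the classic
                                        # body-residue range reported for narcosis
```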
NASA Astrophysics Data System (ADS)
Okumura, Hideyuki
In this study, the magnetic behavior including coercivity and the magnetic phase transition (ferromagnetic ↔ paramagnetic) and related phenomena were qualitatively and quantitatively investigated in ultra-fine grained/nanostructured FePd permanent magnet alloys, in relation to the microstructure and defect structure, and the results were compared with bulk FePd. Most of the alloy specimens investigated were in the form of epoxybonded magnets or isostatically-pressed pellets, formed from powders which were produced with high energy ball milling. Some results of thin films and ribbons produced with sputtering and melt-spinning, respectively, are also included in this thesis. Characterization of the materials was performed by using X-ray diffraction techniques with texture measurement, transmission electron microscopy with Lorentz microscopy, scanning electron microscopy with EDS analysis, optical microscopy and vibrating sample magnetometry. X-ray line broadening analysis was utilized for the quantitative characterization of the nanoscale microstructure, and it was found that the Cauchy-Gaussian profile assumption best describes the broadening data. Enhanced coercivities ˜10 times those of the bulk FePd obtained using conventional heat treatments were explained as the result of statistical (stochastic) unpinning of interaction domain walls out of the potential well at the grain boundary, and there is also an additional effect ascribed to an increase of the magnetocrystalline anisotropy, which is mainly due to the metastable c/a ratio of the nanostructured ordered phase and possibly to stress anisotropy. At the same time, there is also a decrease of the coercivity for smaller grain sizes because of the "magnetically soft" grain boundary phase. A semi-quantitative theoretical model is proposed, which includes the effect of exchange coupling between the ordered grains. The so-called Kronmuller analysis based on the wall pinning model was self-consistent, supporting the notion that wall pinning by grain boundary is the dominant mechanism controlling the coercivity in the nanostructured aggregates in which the magnetic structure is comprised of interaction domains. Furthermore, conventionally structure-insensitive, intrinsic properties such as the saturation magnetization and Curie temperature were found to become structure-sensitive in these materials. The results were semi-quantitatively explained by consideration of the extraordinary microstructure and defect structure involving the high and complex strain fields, metastable tetragonalities, nonequilibrium grain boundaries, extremely high surface-to-volume ratios and perturbed coordination spheres. The possible change in the atomic bond character particularly around grain boundaries is also briefly discussed. It seems that there is a significant fluctuation in exchange couplings at the grain boundary volume, causing the variation of the saturation magnetization, while for the variation of the Curie temperature the powder surface instead of the grain boundary is more important. A modified localized moment model and thus Hund's rules seem applicable to the FePd alloy systems, and the spin density fluctuations seem small in the FePd alloys.
Modeling Fan Effects on the Time Course of Associative Recognition
Schneider, Darryl W.; Anderson, John R.
2011-01-01
We investigated the time course of associative recognition using the response signal procedure, whereby a stimulus is presented and followed after a variable lag by a signal indicating that an immediate response is required. More specifically, we examined the effects of associative fan (the number of associations that an item has with other items in memory) on speed–accuracy tradeoff functions obtained in a previous response signal experiment involving briefly studied materials and in a new experiment involving well-learned materials. High fan lowered asymptotic accuracy or the rate of rise in accuracy across lags, or both. We developed an Adaptive Control of Thought–Rational (ACT-R) model for the response signal procedure to explain these effects. The model assumes that high fan results in weak associative activation that slows memory retrieval, thereby decreasing the probability that retrieval finishes in time and producing a speed–accuracy tradeoff function. The ACT-R model provided an excellent account of the data, yielding quantitative fits that were as good as those of the best descriptive model for response signal data.
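For readers unfamiliar with response signal analysis, the sketch below fits the shifted-exponential speed-accuracy tradeoff function commonly used as the descriptive model for such data; the lag and accuracy values are invented for illustration.

```python
# Sketch of the shifted-exponential speed-accuracy tradeoff function often fit
# to response-signal data: d'(t) = lam * (1 - exp(-beta*(t - delta))) for
# t > delta, else 0. The data points below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def sat(t, lam, beta, delta):
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

lags = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 2.0, 3.0])     # processing time (s)
dprime = np.array([0.0, 0.4, 1.1, 1.7, 2.1, 2.4, 2.5])   # hypothetical accuracy

params, _ = curve_fit(sat, lags, dprime, p0=[2.5, 2.0, 0.2])
lam, beta, delta = params
print(f"asymptote={lam:.2f}, rate={beta:.2f}, intercept={delta:.2f}")
# A fan effect could lower the asymptote (lam) and/or the rate (beta).
```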
Trainor, Thomas A.; Ray, R. L.
2011-09-09
A glasma flux-tube model has been proposed to explain strong elongation on pseudorapidity η of the same-side two-dimensional (2D) peak in minimum-bias angular correlations from √(s_NN) = 200 GeV Au-Au collisions. The same-side peak or “soft ridge” is said to arise from coupling of flux tubes to radial flow whereby gluons radiated transversely from flux tubes are boosted by radial flow to form a narrow structure or ridge on azimuth. In this study we test the theory conjecture by comparing measurements to predictions for particle production, spectra, and correlations from the glasma model and from conventional fragmentation processes. We conclude that the glasma model is contradicted by measured hadron yields, spectra, and correlations, whereas a two-component model of hadron production, including minimum-bias parton fragmentation, provides a quantitative description of most features of the data, although η elongation of the same-side 2D peak remains undescribed.
Aref, S
1982-01-01
A study of the migration of fourth stage larvae of the parasite Strongylus vulgaris in the intestinal arteries of the horse is presented. It is established that the larvae migrate along the arteries in almost straight lines. It is suggested that this is primarily due to their ability to sense the curvature of the vessel wall, and not, as might have been expected, because of an ability to sense the direction of blood flow. A larva will sometimes alter its direction of motion when encountering a small off-branching artery. This behaviour suggests that the migration of S. vulgaris larvae can be modeled as a one-dimensional discrete random walk on a long time scale. This model is simpler than any deterministic model and, in particular, does not require the existence of a predilection site. The available data are not, however, sufficient for a convincing, quantitative test of the model. The proposed reluctance of the larvae to bend into off-branching arteries is used to explain the crowding of larvae in the cranial mesenteric artery.
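A minimal sketch of the proposed one-dimensional discrete random walk is shown below; the step probabilities and number of steps are hypothetical.

```python
# Minimal sketch of a one-dimensional discrete random walk for larval position
# along the arterial tree (step probabilities and step counts are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def random_walk(n_steps=1000, p_forward=0.5):
    steps = rng.choice([1, -1], size=n_steps, p=[p_forward, 1 - p_forward])
    return np.cumsum(steps)

# Many larvae, many walks: for an unbiased walk the spread grows like sqrt(t).
endpoints = np.array([random_walk()[-1] for _ in range(500)])
print(endpoints.mean(), endpoints.std())
```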
Anomalous Transport of Cosmic Rays in a Nonlinear Diffusion Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litvinenko, Yuri E.; Fichtner, Horst; Walter, Dominik
2017-05-20
We investigate analytically and numerically the transport of cosmic rays following their escape from a shock or another localized acceleration site. Observed cosmic-ray distributions in the vicinity of heliospheric and astrophysical shocks imply that anomalous, superdiffusive transport plays a role in the evolution of the energetic particles. Several authors have quantitatively described the anomalous diffusion scalings, implied by the data, by solutions of a formal transport equation with fractional derivatives. Yet the physical basis of the fractional diffusion model remains uncertain. We explore an alternative model of the cosmic-ray transport: a nonlinear diffusion equation that follows from a self-consistent treatment of the resonantly interacting cosmic-ray particles and their self-generated turbulence. The nonlinear model naturally leads to superdiffusive scalings. In the presence of convection, the model yields a power-law dependence of the particle density on the distance upstream of the shock. Although the results do not refute the use of a fractional advection–diffusion equation, they indicate a viable alternative to explain the anomalous diffusion scalings of cosmic-ray particles.
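To make the superdiffusive-scaling argument concrete, the generic density-dependent diffusion equation below (an assumption for illustration, not necessarily the authors' exact equation) shows how a nonlinear diffusivity changes the self-similar spreading law.

```latex
% Generic illustration (an assumption, not the authors' specific model): a
% density-dependent diffusivity changes the self-similar spreading exponent.
\[
\frac{\partial f}{\partial t}
  = \frac{\partial}{\partial x}\!\left( D_0\, f^{\,n}\, \frac{\partial f}{\partial x} \right),
\qquad
x_{\mathrm{front}} \;\propto\; t^{1/(n+2)} .
\]
% For n = 0 the ordinary diffusive scaling x ~ t^{1/2} is recovered; for
% -2 < n < 0 (diffusivity decreasing with particle density, as expected when
% the particles generate their own scattering turbulence) the spreading is
% faster than diffusive, i.e., superdiffusive.
```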
Behavioral variability of choices versus structural inconsistency of preferences.
Regenwetter, Michel; Davis-Stober, Clintin P
2012-04-01
Theories of rational choice often make the structural consistency assumption that every decision maker's binary strict preference among choice alternatives forms a strict weak order. Likewise, the very concept of a utility function over lotteries in normative, prescriptive, and descriptive theory is mathematically equivalent to strict weak order preferences over those lotteries, while intransitive heuristic models violate such weak orders. Using new quantitative interdisciplinary methodologies, we dissociate the variability of choices from the structural inconsistency of preferences. We show that laboratory choice behavior among stimuli of a classical "intransitivity" paradigm is, in fact, consistent with variable strict weak order preferences. We find that decision makers act in accordance with a restrictive mathematical model that, for the behavioral sciences, is extraordinarily parsimonious. Our findings suggest that the best place to invest future behavioral decision research is not in the development of new intransitive decision models but rather in the specification of parsimonious models consistent with strict weak order(s), as well as heuristics and other process models that explain why preferences appear to be weakly ordered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, Brendan W.
In aquifers consisting of fractured or porous igneous rocks, as well as conglomerate and sandstone products of volcanic formations, silicate minerals actively dissolve and precipitate (Eby, 2004; Eriksson, 1985; Drever, 1982). Dissolution of hydrated volcanic glass is also known to influence the character of groundwater to which it is exposed (White et al., 1980). Hydrochemical evolution, within saturated zones of volcanic formations, is modeled here as a means to resolve the sources feeding a perched groundwater zone. By observation of solute mass balances in groundwater, together with rock chemistry, this study characterizes the chemical weathering processes active along recharge pathways in a mountain front system. Inverse mass balance modeling, which accounts for mass fluxes between solid phases and solution, is used to contrive sets of quantitative reactions that explain chemical variability of water between sampling points. Model results are used, together with chloride mass balance estimation, to evaluate subsurface mixing scenarios generated by further modeling. Final model simulations estimate contributions of mountain block and local recharge to various contaminated zones.
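A minimal sketch of the chloride mass balance estimate mentioned above is given below; the precipitation and chloride concentrations are hypothetical.

```python
# Sketch of the chloride mass balance (CMB) recharge estimate referenced in the
# abstract: R ~ P * Cl_precip / Cl_groundwater. All numbers are hypothetical.
def cmb_recharge(precip_mm_per_yr, cl_precip_mg_L, cl_groundwater_mg_L):
    """Recharge (mm/yr), assuming chloride enters only with precipitation and is
    concentrated by evapotranspiration before reaching groundwater."""
    return precip_mm_per_yr * cl_precip_mg_L / cl_groundwater_mg_L

print(cmb_recharge(400.0, 0.5, 10.0))   # -> 20 mm/yr of recharge
```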
Detection of epistatic effects with logic regression and a classical linear regression model.
Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata
2014-02-01
To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, usually methods as the Cockerham's model are applied. Within this framework, interactions are understood as the part of the joined effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (as disease) is caused by Boolean combinations of genotypes of several QTLs, this Cockerham's approach is often not capable to identify them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though with the logic regression approach a larger number of models has to be considered (requiring more stringent multiple testing correction) the efficient representation of higher order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to a Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with simulation study and real data analysis.
Kaveh, Kamran; Takahashi, Yutaka; Farrar, Michael A; Storme, Guy; Guido, Marcucci; Piepenburg, Jamie; Penning, Jackson; Foo, Jasmine; Leder, Kevin Z; Hui, Susanta K
2017-07-01
Philadelphia chromosome-positive (Ph+) acute lymphoblastic leukemia (ALL) is characterized by a very poor prognosis and a high likelihood of acquired chemo-resistance. Although tyrosine kinase inhibitor (TKI) therapy has improved clinical outcome, most ALL patients relapse following treatment with TKI due to the development of resistance. We developed an in vitro model of Nilotinib-resistant Ph+ leukemia cells to investigate whether low dose radiation (LDR) in combination with TKI therapy overcomes chemo-resistance. Additionally, we developed a mathematical model, parameterized by cell viability experiments under Nilotinib treatment and LDR, to explain the cellular response to combination therapy. The addition of LDR significantly reduced drug resistance both in vitro and in the computational model. Decreased expression level of phosphorylated AKT suggests that the combination treatment plays an important role in overcoming resistance through the AKT pathway. Model-predicted cellular responses to the combined therapy provide good agreement with experimental results. Augmentation of LDR and Nilotinib therapy seems to be beneficial for controlling Ph+ leukemia resistance, and the quantitative model can determine the optimal dosing schedule to enhance the effectiveness of the combination therapy.
NASA Astrophysics Data System (ADS)
Ge, C.; Wang, J.; Yang, Z.; Hyer, E. J.; Reid, J. S.; Chew, B.; Mahamod, M.
2011-12-01
The online-coupled Weather Research and Forecasting model with Chemistry (WRF-Chem) is used in conjunction with the FLAMBE MODIS-based biomass burning emissions to simulate the transport of smoke particles over the southeast Asian Maritime Continent (MC, 10°S - 10°N, 90°E-150°E) during September - October 2006, when the moderate El Niño event caused the largest regional biomass burning outbreak since 1998. The modeled smoke transport pathway is found to be consistent with the MODIS true color images. Quantitatively, the modeled smoke particle mass can explain ~50% of the temporal variability in 24-hour average observed PM10 at most ground stations, with linear correlation coefficients often larger than 0.7. Analysis of CALIOP data shows that smoke aerosols are primarily located within 3.5 km above the surface, and we found that the smoke injection height in the model should be at ~800 m above the surface to best match CALIOP observations downwind, instead of 2 km as used in the past literature. Comparison of CALIOP data in October 2006 with that in other years (2007-2010) reveals that the peak of aerosol extinction always occurs at ~1 km above the surface, but the smoke events in 2006 doubled the aerosol extinction from the surface to 3.5 km. Numerical experiments further show that the Tama Abu topography in the Malaysia Peninsula has a significant impact on smoke transport and the surface in the vicinity. A conceptual model, based upon our analysis of the two-month WRF-Chem simulation and satellite data, is proposed to explain the meteorological causes for smoke layers above the clouds as seen in the CALIOP data.
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2012-09-01
A quantitative determinants-of-exposure analysis of respirable crystalline silica (RCS) levels in the construction industry was performed using a database compiled from an extensive literature review. Statistical models were developed to predict work-shift exposure levels by trade. Monte Carlo simulation was used to recreate exposures derived from summarized measurements which were combined with single measurements for analysis. Modeling was performed using Tobit models within a multimodel inference framework, with year, sampling duration, type of environment, project purpose, project type, sampling strategy and use of exposure controls as potential predictors. 1346 RCS measurements were included in the analysis, of which 318 were non-detects and 228 were simulated from summary statistics. The model containing all the variables explained 22% of total variability. Apart from trade, sampling duration, year and strategy were the most influential predictors of RCS levels. The use of exposure controls was associated with an average decrease of 19% in exposure levels compared to none, and increased concentrations were found for industrial, demolition and renovation projects. Predicted geometric means for year 1999 were the highest for drilling rig operators (0.238 mg m(-3)) and tunnel construction workers (0.224 mg m(-3)), while the estimated exceedance fraction of the ACGIH TLV by trade ranged from 47% to 91%. The predicted geometric means in this study indicated important overexposure compared to the TLV. However, the low proportion of variability explained by the models suggests that the construction trade is only a moderate predictor of work-shift exposure levels. The impact of the different tasks performed during a work shift should also be assessed to provide better management and control of RCS exposure levels on construction sites.
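The exceedance-fraction calculation implied above can be sketched as follows, assuming lognormally distributed work-shift levels; the geometric standard deviation is an assumption, while the geometric mean follows the abstract and the limit is the ACGIH TLV of 0.025 mg m(-3).

```python
# Sketch of an exceedance-fraction calculation: with work-shift RCS levels
# assumed lognormal, the fraction above a limit is 1 - Phi((ln L - mu)/sigma).
# The GSD below is a hypothetical value, so the printed number is illustrative.
import numpy as np
from scipy.stats import norm

def exceedance_fraction(gm, gsd, limit):
    mu, sigma = np.log(gm), np.log(gsd)
    return 1.0 - norm.cdf((np.log(limit) - mu) / sigma)

tlv = 0.025                                   # mg/m3, ACGIH TLV for respirable silica
print(exceedance_fraction(gm=0.238, gsd=3.0, limit=tlv))   # GM from the abstract, GSD assumed
```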
NASA Astrophysics Data System (ADS)
Li, Yizhen; McGillicuddy, Dennis J.; Dinniman, Michael S.; Klinck, John M.
2017-02-01
Both remotely sensed and in situ observations in austral summer of early 2012 in the Ross Sea suggest the presence of cold, low-salinity, and high-biomass eddies along the edge of the Ross Ice Shelf (RIS). Satellite measurements include sea surface temperature and ocean color, and shipboard data sets include hydrographic profiles, towed instrumentation, and underway acoustic Doppler current profilers. Idealized model simulations are utilized to examine the processes responsible for ice shelf eddy formation. 3-D model simulations produce similar cold and fresh eddies, although the simulated vertical lenses are quantitatively thinner than observed. Model sensitivity tests show that both basal melting underneath the ice shelf and irregularity of the ice shelf edge facilitate generation of cold and fresh eddies. 2-D model simulations further suggest that both basal melting and downwelling-favorable winds play crucial roles in forming a thick layer of low-salinity water observed along the edge of the RIS. These properties may have been entrained into the observed eddies, whereas that entrainment process was not captured in the specific eddy formation events studied in our 3-D model, which may explain the discrepancy between the simulated and observed eddies, at least in part. Additional sensitivity experiments imply that uncertainties associated with background stratification and wind stress may also explain why the model underestimates the thickness of the low-salinity lens in the eddy interiors. Our study highlights the importance of incorporating accurate wind forcing, basal melting, and ice shelf irregularity for simulating eddy formation near the RIS edge. The processes responsible for generating the high phytoplankton biomass inside these eddies remain to be elucidated.
Schoolmaster, Donald; Stagg, Camille L.
2018-01-01
A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.
Yang, Xiaojun
2012-02-01
Exploring the quantitative association between landscape characteristics and the ecological conditions of receiving waters has recently become an emerging area for eco-environmental research. While the landscape-water relationship research has largely targeted on inland aquatic systems, there has been an increasing need to develop methods and techniques that can better work with coastal and estuarine ecosystems. In this paper, we present a geospatial approach to examine the quantitative relationship between landscape characteristics and estuarine nitrogen loading in an urban watershed. The case study site is in the Pensacola estuarine drainage area, home of the city of Pensacola, Florida, USA, where vigorous urban sprawling has prompted growing concerns on the estuarine ecological health. Central to this research is a remote sensor image that has been used to extract land use/cover information and derive landscape metrics. Several significant landscape metrics are selected and spatially linked with the nitrogen loading data for the Pensacola bay area. Landscape metrics and nitrogen loading are summarized by equal overland flow-length rings, and their association is examined by using multivariate statistical analysis. And a stepwise model-building protocol is used for regression designs to help identify significant variables that can explain much of the variance in the nitrogen loading dataset. It is found that using landscape composition or spatial configuration alone can explain most of the nitrogen loading variability. Of all the regression models using metrics derived from a single land use/cover class as the independent variables, the one from the low density urban gives the highest adjusted R-square score, suggesting the impact of the watershed-wide urban sprawl upon this sensitive estuarine ecosystem. Measures towards the reduction of non-point source pollution from urban development are necessary in the area to protect the Pensacola bay ecosystem and its ecosystem services. Copyright © 2011 Elsevier Ltd. All rights reserved.
Leone, Vanessa; Faraldo-Gómez, José D
2016-12-01
Two subunits within the transmembrane domain of the ATP synthase, the c-ring and subunit a, energize the production of 90% of cellular ATP by transducing an electrochemical gradient of H+ or Na+ into rotational motion. The nature of this turbine-like energy conversion mechanism has been elusive for decades, owing to the lack of definitive structural information on subunit a or its c-ring interface. In a recent breakthrough, several structures of this complex were resolved by cryo-electron microscopy (cryo-EM), but the modest resolution of the data has led to divergent interpretations. Moreover, the unexpected architecture of the complex has cast doubts on a wealth of earlier biochemical analyses conducted to probe this structure. Here, we use quantitative molecular-modeling methods to derive a structure of the a-c complex that is not only objectively consistent with the cryo-EM data, but also with correlated mutation analyses of both subunits and with prior cross-linking and cysteine accessibility measurements. This systematic, integrative approach reveals unambiguously the topology of subunit a and its relationship with the c-ring. Mapping of known Cd2+ block sites and conserved protonatable residues onto the structure delineates two noncontiguous pathways across the complex, connecting two adjacent proton-binding sites in the c-ring to the space on either side of the membrane. The location of these binding sites and of a strictly conserved arginine on subunit a, which serves to prevent protons from hopping between them, explains the directionality of the rotary mechanism and its strict coupling to the proton-motive force. Additionally, mapping of mutations conferring resistance to oligomycin unexpectedly reveals that this prototypical inhibitor may bind to two distinct sites at the a-c interface, explaining its ability to block the mechanism of the enzyme irrespective of the direction of rotation of the c-ring. In summary, this study is a stepping stone toward establishing the mechanism of the ATP synthase at the atomic level.
Explaining worker strain and learning: how important are emotional job demands?
Taris, Toon W; Schreurs, Paul J G
2009-05-01
This study examined the added value of emotional job demands in explaining worker well-being, relative to the effects of task characteristics, such as quantitative job demands, job control, and coworker support. Emotional job demands were expected to account for an additional proportion of the variance in well-being. Cross-sectional data were obtained from 11,361 female Dutch home care employees. Hierarchical stepwise regression analysis demonstrated that low control, low support and high quantitative demands were generally associated with lower well-being (as measured in terms of emotional exhaustion, dedication, professional accomplishment and learning). Moreover, high emotional demands were in three out of four cases significantly associated with adverse well-being, in these cases accounting for an additional 1-6% of the variance in the outcome variables. In three out of eight cases the main effects of emotional demands on well-being were qualified by support and control, such that high control and high support either buffered the adverse effects of high emotional demands on well-being or increased the positive effects thereof. All in all, high emotional demands are as important a risk factor for worker well-being as well-established concepts like low job control and high quantitative job demands.
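The added-variance logic described above can be sketched as a hierarchical regression comparison; the variable names and data file are hypothetical.

```python
# Sketch of the hierarchical (added-variance) comparison: R^2 of a model with
# task characteristics only versus one that adds emotional demands; the
# difference is the additional proportion of variance explained. Variable
# names and the CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("homecare_survey.csv")

base = smf.ols("exhaustion ~ quantitative_demands + job_control + coworker_support", df).fit()
full = smf.ols("exhaustion ~ quantitative_demands + job_control + coworker_support"
               " + emotional_demands", df).fit()

print(f"Delta R^2 = {full.rsquared - base.rsquared:.3f}")
```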
NASA Astrophysics Data System (ADS)
Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia
2014-02-01
How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth on immigrant density and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and it is shown to explain both growth behaviors observed. The linear theory instead, ignoring the possibility of interaction effects would underestimate the quantifiers up to 30% when immigrant densities are low, and overestimate them as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.
Lee, Kam L; Ireland, Timothy A; Bernardo, Michael
2016-06-01
This is the first part of a two-part study in benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of grid was found to reduce veiling glare and decrease roll-off. The major contributor in sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of stationary grid may cause a difference between horizontal and vertical sNNPS responses.
Blood money: Harvey's De motu cordis (1628) as an exercise in accounting.
Neuss, Michael J
2018-04-13
William Harvey's famous quantitative argument from De motu cordis (1628) about the circulation of blood explained how a small amount of blood could recirculate and nourish the entire body, upending the Galenic conception of the blood's motion. This paper argues that the quantitative argument drew on the calculative and rhetorical skills of merchants, including Harvey's own brothers. Modern translations of De motu cordis obscure the language of accountancy that Harvey himself used. Like a merchant accounting for credits and debits, intake and output, goods and moneys, Harvey treated venous and arterial blood as essentially commensurate, quantifiable and fungible. For Harvey, the circulation (and recirculation) of blood was an arithmetical necessity. The development of Harvey's circulatory model followed shifts in the epistemic value of mercantile forms of knowledge, including accounting and arithmetic, also drawing on an Aristotelian language of reciprocity and balance that Harvey shared with mercantile advisers to the royal court. This paper places Harvey's calculations in a previously underappreciated context of economic crisis, whose debates focused largely on questions of circulation.
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast
Pang, Wei; Coghill, George M.
2015-01-01
In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of the osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally the future development of the Morven framework for modelling dynamic biological systems is discussed.
NASA Technical Reports Server (NTRS)
Hoepffner, Nicolas; Sathyendranath, Shubha
1993-01-01
The contributions of detrital particles and phytoplankton to total light absorption are retrieved by nonlinear regression on the absorption spectra of total particles from various oceanic regions. The model used explains more than 96% of the variance in the observed particle absorption spectra. The resulting absorption spectra of phytoplankton are then decomposed into several Gaussian bands reflecting absorption by phytoplankton pigments. Such a decomposition, combined with high-performance liquid chromatography data on phytoplankton pigment concentrations, allows the computation of specific absorption coefficients for chlorophylls a, b, and c and carotenoids. The spectral values of these in vivo absorption coefficients are then discussed, considering the effects of secondary pigments which were not measured quantitatively. We show that these coefficients can be used to reconstruct the absorption spectra of phytoplankton at various locations and depths. Discrepancies that do occur at some stations are explained in terms of particle size effect. These coefficients can be used to determine the concentrations of phytoplankton pigments in the water, given the absorption spectrum of total particles.
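The Gaussian band decomposition step can be sketched with a nonlinear least-squares fit, as below; the band centers, widths, and synthetic spectrum are placeholders rather than the authors' fitted values.

```python
# Sketch of decomposing a phytoplankton absorption spectrum into Gaussian bands
# with nonlinear least squares. Band centers/widths and the synthetic "observed"
# spectrum below are placeholders, not values fitted by the authors.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_bands(wl, *params):
    """Sum of Gaussian bands; params = (height, center, width) repeated."""
    a = np.zeros_like(wl, dtype=float)
    for h, c, w in zip(params[0::3], params[1::3], params[2::3]):
        a += h * np.exp(-0.5 * ((wl - c) / w) ** 2)
    return a

wl = np.arange(400.0, 701.0, 5.0)                          # wavelength (nm)
rng = np.random.default_rng(3)
obs = gaussian_bands(wl, 0.05, 440.0, 20.0, 0.02, 675.0, 10.0) \
      + rng.normal(0, 1e-3, wl.size)                       # synthetic spectrum + noise

p0 = [0.04, 435.0, 25.0, 0.015, 670.0, 12.0]               # initial guesses (2 bands)
popt, _ = curve_fit(gaussian_bands, wl, obs, p0=p0)
print(popt)
```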
Szabo, J.K.; Fedriani, E.M.; Segovia-Gonzalez, M. M.; Astheimer, L.B.; Hooper, M.J.
2010-01-01
This paper introduces a new technique in ecology to analyze spatial and temporal variability in environmental variables. By using simple statistics, we explore the relations between abiotic and biotic variables that influence animal distributions. However, spatial and temporal variability in rainfall, a key variable in ecological studies, can cause difficulties for any basic model including time evolution. The study was at a landscape scale (three million square kilometers in eastern Australia), mainly over the period 1998-2004. We simultaneously considered qualitative spatial (soil and habitat types) and quantitative temporal (rainfall) variables in a Geographical Information System environment. In addition to some techniques commonly used in ecology, we applied a new method, Functional Principal Component Analysis, which proved to be very suitable for this case, as it explained more than 97% of the total variance of the rainfall data, providing us with substitute variables that are easier to manage and are even able to explain rainfall patterns. The main variable came from a habitat classification that showed strong correlations with rainfall values and soil types. © 2010 World Scientific Publishing Company.
First Test of Stochastic Growth Theory for Langmuir Waves in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.
1997-01-01
This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(logE) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(logE) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(logE) distribution is a power-law with index approximately -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(logE) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.
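The fundamental SGT prediction referred to above can be written compactly (generic notation, not copied from the paper): the wave field obeys log-normal statistics,

\[
P(\log E) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left[-\frac{(\log E-\mu)^{2}}{2\sigma^{2}}\right],
\]

so that P(log E) is Gaussian in log E, with the mean \mu and width \sigma set by the stochastic growth process.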
First test of stochastic growth theory for Langmuir waves in Earth's foreshock
NASA Astrophysics Data System (ADS)
Cairns, Iver H.; Robinson, P. A.
This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(log E) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(log E) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(log E) distribution is a power-law with index ~ -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(log E) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.
History dependent crystallization of Zr41Ti14Cu12Ni10Be23 melts
NASA Astrophysics Data System (ADS)
Schroers, Jan; Johnson, William L.
2000-07-01
The crystallization of Zr41Ti14Cu12Ni10Be23 (Vit 1) melts during constant heating is investigated. Vit 1 melts are cooled at different rates into the amorphous state, and the crystallization temperature upon subsequent heating is studied. In addition, Vit 1 melts are cooled at a constant rate to different temperatures and subsequently heated from these temperatures at a constant rate. We investigate the influence of the temperature to which the melt was cooled on the crystallization temperature measured upon heating. In both cases the onset temperature of crystallization shows a strong history dependence. This can be explained by a process that accumulates during cooling and heating. An attempt is made to capture this process in a simple model of steady-state nucleation and subsequent growth of the nuclei, which results in different crystallization kinetics during cooling and heating. Calculations show qualitative agreement with the experimental results. However, calculated and experimental results differ quantitatively. This difference can be explained by a decomposition process leading to a non-steady nucleation rate which continuously increases with decreasing temperature.
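A generic form of the steady-state nucleation rate used in models of this kind (a standard classical-nucleation expression, not necessarily the authors' exact formulation) is

\[
I_{ss}(T) \;=\; A\,\exp\!\left(-\frac{16\pi\,\sigma^{3}}{3\,\Delta G_{v}(T)^{2}\,k_{B}T}\right),
\]

where \sigma is the crystal-melt interfacial energy and \Delta G_{v}(T) is the Gibbs free-energy difference per unit volume between liquid and crystal. Combining such a rate with a growth law and integrating over the thermal history yields a crystallized fraction that depends on the cooling and heating path, which is the kind of history dependence discussed above.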
Model-Based Approaches for Teaching and Practicing Personality Assessment.
Blais, Mark A; Hopwood, Christopher J
2017-01-01
Psychological assessment is a complex professional skill. Competence in assessment requires an extensive knowledge of personality, neuropsychology, social behavior, and psychopathology, a background in psychometrics, familiarity with a range of multimethod tools, cognitive flexibility, skepticism, and interpersonal sensitivity. This complexity makes assessment a challenge to teach and learn, particularly as the investment of resources and time in assessment has waned in psychological training programs over the last few decades. In this article, we describe 3 conceptual models that can assist in teaching and learning psychological assessment. The transtheoretical model of personality provides a personality systems-based framework for understanding how multimethod assessment data relate to major personality systems and can be combined to describe and explain complex human behavior. The quantitative psychopathology-personality trait model is an empirical model based on the hierarchical organization of individual differences. Application of this model can help students understand diagnostic comorbidity and symptom heterogeneity, focus on more meaningful higher-order domains, and identify the most effective assessment tools for addressing a given question. The interpersonal situation model is rooted in interpersonal theory and can help students connect test data to here-and-now interactions with patients. We conclude by demonstrating the utility of these models using a case example.
Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.
2012-01-01
We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
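The evidence-accumulation model described above can be illustrated with a minimal two-cue drift-diffusion sketch (Python); the drift rates, reliability weights, bound, and noise level are illustrative assumptions rather than the authors' fitted parameters.

# Hedged sketch: accumulate visual + vestibular heading evidence to a bound,
# weighting each cue by its (assumed) reliability; returns choice and RT.
import numpy as np

def simulate_trial(drift_vis=0.8, drift_vest=0.5, rel_vis=0.6, rel_vest=0.4,
                   bound=1.5, dt=0.01, noise=1.0, max_t=3.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    w_vis = rel_vis / (rel_vis + rel_vest)     # reliability-based cue weights
    w_vest = 1.0 - w_vis
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        drift = w_vis * drift_vis + w_vest * drift_vest
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t              # choice (e.g., rightward) and RT

choices, rts = zip(*(simulate_trial() for _ in range(1000)))
print("proportion 'rightward':", np.mean(choices), "mean RT (s):", round(np.mean(rts), 3))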
Electronic bandstructure of semiconductor dilute bismide structures
NASA Astrophysics Data System (ADS)
Erucar, T.; Nutku, F.; Donmez, O.; Erol, A.
2017-02-01
In this work, the electronic band structure of dilute bismide GaAs/GaAs1-xBix quantum well structures with 1.8% and 3.75% bismuth compositions has been investigated both experimentally and theoretically. Photoluminescence (PL) measurements reveal that the effective bandgap of the samples decreases by approximately 65 meV per percent of bismuth. The temperature dependence of the effective bandgap is found to be stronger for the sample with the higher bismuth concentration. Moreover, both the asymmetric character of the low-energy tail of the PL and the full width at half maximum (FWHM) of the PL peak increase with increasing bismuth composition, as a result of increased Bi-related defects located above the valence band (VB). In order to explain the composition dependence of the effective bandgap quantitatively, the valence band anti-crossing (VBAC) model is used. The bismuth-composition and temperature dependence of the effective bandgap in a quantum well structure is modeled by solving the Schrödinger equation and compared with experimental PL data.
Sensitivity of seafloor bathymetry to climate-driven fluctuations in mid-ocean ridge magma supply.
Olive, J-A; Behn, M D; Ito, G; Buck, W R; Escartín, J; Howell, S
2015-10-16
Recent studies have proposed that the bathymetric fabric of the seafloor formed at mid-ocean ridges records rapid (23,000 to 100,000 years) fluctuations in ridge magma supply caused by sea-level changes that modulate melt production in the underlying mantle. Using quantitative models of faulting and magma emplacement, we demonstrate that, in fact, seafloor-shaping processes act as a low-pass filter on variations in magma supply, strongly damping fluctuations shorter than about 100,000 years. We show that the systematic decrease in dominant seafloor wavelengths with increasing spreading rate is best explained by a model of fault growth and abandonment under a steady magma input. This provides a robust framework for deciphering the footprint of mantle melting in the fabric of abyssal hills, the most common topographic feature on Earth. Copyright © 2015, American Association for the Advancement of Science.
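The low-pass-filter argument can be illustrated with a first-order filter acting on a synthetic magma-supply series (Python); the roughly 100,000-year time constant and the input signal are assumptions for illustration only.

# Hedged sketch: seafloor-shaping processes as a low-pass filter on
# magma-supply fluctuations (first-order exponential smoothing).
import numpy as np

dt_kyr = 1.0                                          # time step, kyr
t = np.arange(0.0, 2000.0, dt_kyr)                    # 2 Myr of model time
supply = (1.0 + 0.2 * np.sin(2 * np.pi * t / 23.0)    # precession-band input
              + 0.2 * np.sin(2 * np.pi * t / 100.0))  # eccentricity-band input

tau = 100.0                                           # assumed filter time constant, kyr
alpha = dt_kyr / (tau + dt_kyr)
filtered = np.empty_like(supply)
filtered[0] = supply[0]
for i in range(1, supply.size):
    filtered[i] = filtered[i - 1] + alpha * (supply[i] - filtered[i - 1])

# The 23-kyr component is damped much more strongly than the 100-kyr component.
print("input std:", round(supply.std(), 3), "filtered std:", round(filtered.std(), 3))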
Rastija, Vesna; Agić, Dejan; Tomiš, Sanja; Nikolič, Sonja; Hranjec, Marijana; Grace, Karminski-Zamola; Abramić, Marija
2015-01-01
A molecular modeling study is performed on a series of benzimidazole-based inhibitors of human dipeptidyl peptidase III (DPP III). Eight novel compounds were synthesized in excellent yields using a green chemistry approach. This study aims to elucidate the structural features of benzimidazole derivatives required for antagonism of human DPP III activity using Quantitative Structure-Activity Relationship (QSAR) analysis, and to understand the mechanism by which one of the most potent inhibitors binds into the active site of this enzyme, by molecular dynamics (MD) simulations. The best model obtained includes the S3K and RDF045m descriptors, which together explained 89.4% of the variation in inhibitory activity. The moiety identified as important for strong inhibitory activity matches the structure of the most potent compound. MD simulation revealed the importance of the imidazolinyl and phenyl groups in the mechanism of binding into the active site of human DPP III.
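A two-descriptor QSAR model of the kind described (the abstract names the S3K and RDF045m descriptors) amounts to a multiple linear regression; the descriptor values and activities below are placeholders, not the published dataset.

# Hedged sketch: two-descriptor QSAR regression on placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: hypothetical S3K and RDF045m values for eight hypothetical compounds.
X = np.array([[1.2, 3.4], [1.5, 3.1], [0.9, 4.0], [1.8, 2.8],
              [1.1, 3.7], [1.6, 3.0], [1.3, 3.5], [1.0, 3.9]])
activity = np.array([4.1, 4.6, 3.8, 5.0, 4.0, 4.7, 4.3, 3.9])  # e.g., pIC50 values

model = LinearRegression().fit(X, activity)
print("coefficients:", model.coef_, "intercept:", round(model.intercept_, 3))
print("R^2 (fraction of variance explained):", round(model.score(X, activity), 3))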
Returners and explorers dichotomy in human mobility
Pappalardo, Luca; Simini, Filippo; Rinzivillo, Salvatore; Pedreschi, Dino; Giannotti, Fosca; Barabási, Albert-László
2015-01-01
The availability of massive digital traces of human whereabouts has offered a series of novel insights into the quantitative patterns characterizing human mobility. In particular, numerous recent studies have led to an unexpected consensus: the considerable variability in the characteristic travelled distance of individuals coexists with a high degree of predictability of their future locations. Here we shed light on this surprising coexistence by systematically investigating the impact of recurrent mobility on the characteristic distance travelled by individuals. Using both mobile phone and GPS data, we discover the existence of two distinct classes of individuals: returners and explorers. As existing models of human mobility cannot explain the existence of these two classes, we develop more realistic models able to capture the empirical findings. Finally, we show that returners and explorers play a distinct quantifiable role in spreading phenomena and that a correlation exists between their mobility patterns and social interactions. PMID:26349016
Etemadi, Omid; Yen, Teh Fu
2007-09-01
Surface properties of two different phases of alumina were studied through SEM images. Characterization of amorphous acidic alumina and crystalline boehmite by XRD explains the differences in adsorption capacities of each sample. Data from small angle neutron scattering (SANS) provide further results regarding the ordering in amorphous and crystalline samples of alumina. Quantitative measurements from SANS are used for pore size calculations. Higher disorder provides more topological traps, irregularities, and hidden grooves for higher adsorption capacity. An isotherm model was derived for adsorption of dibenzothiophene sulfone (DBTO) by amorphous acidic alumina to predict and calculate the adsorption of sulfur compounds. The Langmuir-Freundlich model covers a wide range of sulfur concentrations. Experiments prove that amorphous acidic alumina is the adsorbent of choice for selective adsorption in the ultrasound-assisted oxidative desulfurization (UAOD) process to produce ultra-low-sulfur fuel (ULSF).
Drainage investment and wetland loss: an analysis of the national resources inventory data
Douglas, Aaron J.; Johnson, Richard L.
1994-01-01
The United States Soil Conservation Service (SCS) conducts a survey for the purpose of establishing an agricultural land use database. This survey is called the National Resources Inventory (NRI) database. The complex NRI land classification system, in conjunction with the quantitative information gathered by the survey, has numerous applications. The current paper uses the wetland area data gathered by the NRI in 1982 and 1987 to examine empirically the factors that generate wetland loss in the United States. The cross-section regression models listed here use the quantity of wetlands, the stock of drainage capital, the realty value of farmland and drainage costs to explain most of the cross-state variation in wetland loss rates. Wetlands preservation efforts by federal agencies assume that pecuniary economic factors play a decisive role in wetland drainage. The empirical models tested in the present paper validate this assumption.
Post-decision biases reveal a self-consistency principle in perceptual inference.
Luu, Long; Stocker, Alan A
2018-05-15
Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory-related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.
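The self-consistent conditioning described above can be sketched as follows: the post-decision estimate is the mean of the posterior restricted to orientations consistent with the categorical choice. The flat prior, sensory noise level, and category boundary at 0 degrees are assumptions for illustration.

# Hedged sketch: a Bayesian observer whose orientation estimate is conditioned
# on its own preceding category judgment (clockwise vs. counter-clockwise of 0 deg).
import numpy as np

theta = np.linspace(-45.0, 45.0, 901)            # orientation grid (deg)
prior = np.ones_like(theta) / theta.size         # assumed flat prior

def judge_and_estimate(measurement, sigma=6.0):
    likelihood = np.exp(-0.5 * ((theta - measurement) / sigma) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    choice = "cw" if posterior[theta > 0].sum() > 0.5 else "ccw"
    # Self-consistency: condition the estimate on the chosen category.
    mask = theta > 0 if choice == "cw" else theta < 0
    conditioned = np.where(mask, posterior, 0.0)
    conditioned /= conditioned.sum()
    return choice, float(np.sum(theta * conditioned))

# Estimates are repelled away from the category boundary relative to the measurement.
print(judge_and_estimate(2.0))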
Estimation of Transformation Temperatures in Ti-Ni-Pd Shape Memory Alloys
NASA Astrophysics Data System (ADS)
Narayana, P. L.; Kim, Seong-Woong; Hong, Jae-Keun; Reddy, N. S.; Yeom, Jong-Taek
2018-03-01
The present study focused on estimating the complex nonlinear relationship between the composition and phase transformation temperatures of Ti-Ni-Pd shape memory alloys by artificial neural networks (ANN). The ANN models were developed using the experimental data of Ti-Ni-Pd alloys. It was found that the predictions are in good agreement with the trained and unseen test data of existing alloys. The developed model was able to simulate new virtual alloys to quantitatively estimate the effect of Ti, Ni, and Pd on transformation temperatures. The transformation temperature behavior of these virtual alloys was validated by conducting new experiments on a Ti-rich thin film deposited using multi-target sputtering equipment. The transformation behavior of the film was measured by varying the composition with the help of an aging treatment. The predicted trend of transformation temperatures was explained with the help of the experimental results.
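A minimal version of the composition-to-transformation-temperature mapping can be sketched with a small feed-forward network (Python); the compositions, the assumed trend, and the target temperatures are synthetic placeholders, not the experimental dataset.

# Hedged sketch: ANN regression from (Ti, Ni, Pd) composition to a
# transformation temperature, trained on synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pd_at = rng.uniform(5.0, 30.0, 200)            # at.% Pd (assumed range)
ni_at = 50.0 - pd_at                           # Pd assumed to substitute for Ni
ti_at = np.full_like(pd_at, 50.0)              # Ti held near 50 at.%
X = np.column_stack([ti_at, ni_at, pd_at])
# Placeholder trend only: temperature rises roughly linearly with Pd above ~10 at.%.
temp_k = 300.0 + 12.0 * np.clip(pd_at - 10.0, 0.0, None) + rng.normal(0.0, 5.0, pd_at.size)

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
model.fit(X, temp_k)
print("predicted temperature for Ti50Ni30Pd20 (K):",
      round(model.predict([[50.0, 30.0, 20.0]])[0], 1))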
Effect of salt entropy on protein solubility and Hofmeister series
NASA Astrophysics Data System (ADS)
Dahal, Yuba; Schmit, Jeremy
We present a theory of salt effects on protein solubility that accounts for salting-in, salting-out, and the Hofmeister series. We represent protein charge by a first-order multipole expansion to include attractive and repulsive electrostatic interactions in the model. Our model also includes non-electrostatic protein-ion interactions, and ion-solvent interactions via an effective solvated ion radius. We find that the finite size of the ions has significant effects on the translational entropy of the salt, which accounts for the changes in protein solubility. At low salt concentrations the dominant effect comes from the entropic cost of confining ions within the aggregate. At high concentrations the salt drives a depletion attraction that favors aggregation. Our theory explains the reversal in the Hofmeister series observed in lysozyme cloud point measurements and semi-quantitatively describes the solubility of lysozyme and chymosin crystals.
NASA Astrophysics Data System (ADS)
Takakura, T.; Yanagi, I.; Goto, Y.; Ishige, Y.; Kohara, Y.
2016-03-01
We developed a resistive-pulse sensor with a solid-state pore and measured the latex agglutination of submicron particles induced by antigen-antibody interaction for single-molecule detection of proteins. We fabricated the pore based on numerical simulation to clearly distinguish between monomer and dimer latex particles. By measuring single dimers agglutinated in the single-molecule regime, we detected single human alpha-fetoprotein molecules. Adjusting the initial particle concentration improves the limit of detection (LOD) to 95 fmol/l. We established a theoretical model of the LOD by combining the reaction kinetics and the counting statistics to explain the effect of initial particle concentration on the LOD. The theoretical model shows how to improve the LOD quantitatively. The single-molecule detection studied here indicates the feasibility of implementing a highly sensitive immunoassay by a simple measurement method using resistive-pulse sensing.
Den Hartog, Emiel A; Havenith, George
2010-01-01
For wearers of protective clothing in radiation environments there are no quantitative guidelines available for the effect of a radiative heat load on heat exchange. Under the European Union-funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. Because much information from thermal manikin experiments in thermal radiation environments became available within the ThermProtect project, these sets of experimental data are used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analytical approach is pragmatic and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.
Farisco, Michele; Kotaleski, Jeanette H; Evers, Kathinka
2018-01-01
Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.
Can increasing carbon dioxide cause climate change?
Lindzen, Richard S.
1997-01-01
The realistic physical functioning of the greenhouse effect is reviewed, and the role of dynamic transport and water vapor is identified. Model errors and uncertainties are quantitatively compared with the forcing due to doubling CO2, and they are shown to be too large for reliable model evaluations of climate sensitivities. The possibility of directly measuring climate sensitivity is reviewed. A direct approach using satellite data to relate changes in globally averaged radiative flux changes at the top of the atmosphere to naturally occurring changes in global mean temperature is described. Indirect approaches to evaluating climate sensitivity involving the response to volcanic eruptions and Eocene climate change are also described. Finally, it is explained how, in principle, a climate that is insensitive to gross radiative forcing as produced by doubling CO2 might still be able to undergo major changes of the sort associated with ice ages and equable climates. PMID:11607742
Theory and computation of hot carriers generated by surface plasmon polaritons in noble metals
Bernardi, Marco; Mustafa, Jamal; Neaton, Jeffrey B.; Louie, Steven G.
2015-01-01
Hot carriers (HC) generated by surface plasmon polaritons (SPPs) in noble metals are promising for application in optoelectronics, plasmonics and renewable energy. However, existing models fail to explain key quantitative details of SPP-to-HC conversion experiments. Here we develop a quantum mechanical framework and apply first-principles calculations to study the energy distribution and scattering processes of HCs generated by SPPs in Au and Ag. We find that the relative positions of the s and d bands of noble metals regulate the energy distribution and mean free path of the HCs, and that the electron–phonon interaction controls HC energy loss and transport. Our results prescribe optimal conditions for HC generation and extraction, and invalidate previously employed free-electron-like models. Our work combines density functional theory, GW and electron–phonon calculations to provide microscopic insight into HC generation and ultrafast dynamics in noble metals. PMID:26033445
Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues
Farisco, Michele; Kotaleski, Jeanette H.; Evers, Kathinka
2018-01-01
Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs. PMID:29740372
A quantum probability perspective on borderline vagueness.
Blutner, Reinhard; Pothos, Emmanuel M; Bruza, Peter
2013-10-01
The term "vagueness" describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental finding and extends Alxatib's and Pelletier's () theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can be explained then as a quantum interference phenomenon. © 2013 Cognitive Science Society, Inc.
Sumi, A; Luo, T; Zhou, D; Yu, B; Kong, D; Kobayashi, N
2013-05-01
Viral hepatitis is recognized as one of the most frequently reported diseases, and in China in particular, acute and chronic liver disease due to viral hepatitis has been a major public health problem. The present study aimed to analyse and predict surveillance data of infections of hepatitis A, B, C and E in Wuhan, China, by the method of time-series analysis (MemCalc, Suwa-Trast, Japan). On the basis of spectral analysis, fundamental modes explaining the underlying variation of the data for the years 2004-2008 were assigned. A model calculated from these fundamental modes reproduced the underlying variation of the data well. An extension of the model to the year 2009 could predict the data quantitatively. Our study suggests that the present method allows us to model the temporal pattern of epidemics of viral hepatitis much more effectively than the artificial neural network used previously.
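The 'fundamental modes' approach amounts to fitting a sum of periodic components to the surveillance series and extrapolating forward; the 12- and 6-month periods and the synthetic monthly counts below are assumptions for illustration, not the MemCalc output.

# Hedged sketch: fit annual + semi-annual modes to monthly case counts
# (2004-2008) by least squares and extrapolate to 2009.
import numpy as np

months = np.arange(60)                                # Jan 2004 - Dec 2008
cases = (100.0 + 30.0 * np.sin(2 * np.pi * months / 12.0)
         + 10.0 * np.sin(2 * np.pi * months / 6.0)
         + np.random.default_rng(1).normal(0.0, 5.0, months.size))

def design(t):
    # Constant term plus sine/cosine pairs for the 12- and 6-month modes.
    return np.column_stack([np.ones_like(t, dtype=float),
                            np.sin(2 * np.pi * t / 12.0), np.cos(2 * np.pi * t / 12.0),
                            np.sin(2 * np.pi * t / 6.0), np.cos(2 * np.pi * t / 6.0)])

coeffs, *_ = np.linalg.lstsq(design(months), cases, rcond=None)
future = np.arange(60, 72)                            # the 12 months of 2009
forecast = design(future) @ coeffs
print("2009 forecast:", np.round(forecast, 1))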
Solar Effects on Global Climate Due to Cosmic Rays and Solar Energetic Particles
NASA Technical Reports Server (NTRS)
Turco, R. P.; Raeder, J.; D'Auria, R.
2005-01-01
Although the work reported here does not directly connect solar variability with global climate change, this research establishes a plausible quantitative causative link between observed solar activity and apparently correlated variations in terrestrial climate parameters. Specifically, we have demonstrated that ion-mediated nucleation of atmospheric particles is a likely, and likely widespread, phenomenon that relates solar variability to changes in the microphysical properties of clouds. To investigate this relationship, we have constructed and applied a new model describing the formation and evolution of ionic clusters under a range of atmospheric conditions throughout the lower atmosphere. The activation of large ionic clusters into cloud nuclei is predicted to be favorable in the upper troposphere and mesosphere, and possibly in the lower stratosphere. The model developed under this grant needs to be extended to include additional cluster families, and should be incorporated into microphysical models to further test the cause-and-effect linkages that may ultimately explain key aspects of the connections between solar variability and climate.
NASA Astrophysics Data System (ADS)
Nam, Chunghee; Jang, Youngman; Lee, Ki-Su; Shim, Jungjin; Cho, B. K.
2006-04-01
Based upon a bulk scattering model, we investigated the variation of giant magnetoresistance (GMR) behavior after thermal annealing at Ta=250 °C as a function of the top free layer thickness of a GMR spin valve with nano-oxide layers (NOLs). It was found that the enhancement of GMR ratio after thermal annealing is explained qualitatively in terms of the increase of active GMR region in the free layer and, simultaneously, the increase of intrinsic spin-scattering ratio. These effects are likely due to the improved specular reflection at the well-formed interface of NOL. Furthermore, we developed a modified phenomenological model for sheet conductance change (ΔG) in terms of the top free layer thickness. This modified model was found to be useful in the quantitative analysis of the variation of the active GMR region and the intrinsic spin-scattering properties. The two physical parameters were found to change consistently with the effects of thermal annealing on NOL.
Wildlife in the cloud: a new approach for engaging stakeholders in wildlife management.
Chapron, Guillaume
2015-11-01
Research in wildlife management increasingly relies on quantitative population models. However, a remaining challenge is to have end-users, who are often alienated by mathematics, benefiting from this research. I propose a new approach, 'wildlife in the cloud,' to enable active learning by practitioners from cloud-based ecological models whose complexity remains invisible to the user. I argue that this concept carries the potential to overcome limitations of desktop-based software and allows new understandings of human-wildlife systems. This concept is illustrated by presenting an online decision-support tool for moose management in areas with predators in Sweden. The tool takes the form of a user-friendly cloud-app through which users can compare the effects of alternative management decisions, and may feed into adjustment of their hunting strategy. I explain how the dynamic nature of cloud-apps opens the door to different ways of learning, informed by ecological models that can benefit both users and researchers.
Rationalizing the light-induced phase separation of mixed halide organic–inorganic perovskites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draguta, Sergiu; Sharia, Onise; Yoon, Seog Joon
Mixed halide hybrid perovskites, CH3NH3Pb(I1-xBrx)3, represent good candidates for low-cost, high-efficiency photovoltaic and light-emitting devices. Their band gaps can be tuned from 1.6 to 2.3 eV by changing the halide anion identity. Unfortunately, mixed halide perovskites undergo phase separation under illumination. This leads to iodide- and bromide-rich domains along with corresponding changes to the material's optical/electrical response. Here, using combined spectroscopic measurements and theoretical modeling, we quantitatively rationalize all microscopic processes that occur during phase separation. Our model suggests that the driving force behind phase separation is the bandgap reduction of iodide-rich phases. It additionally explains observed non-linear intensity dependencies, as well as self-limited growth of iodide-rich domains. Most importantly, our model reveals that mixed halide perovskites can be stabilized against phase separation by deliberately engineering carrier diffusion lengths and injected carrier densities.
Rationalizing the light-induced phase separation of mixed halide organic–inorganic perovskites
Draguta, Sergiu; Sharia, Onise; Yoon, Seog Joon; ...
2017-08-04
Mixed halide hybrid perovskites, CH3NH3Pb(I1-xBrx)3, represent good candidates for low-cost, high-efficiency photovoltaic and light-emitting devices. Their band gaps can be tuned from 1.6 to 2.3 eV by changing the halide anion identity. Unfortunately, mixed halide perovskites undergo phase separation under illumination. This leads to iodide- and bromide-rich domains along with corresponding changes to the material's optical/electrical response. Here, using combined spectroscopic measurements and theoretical modeling, we quantitatively rationalize all microscopic processes that occur during phase separation. Our model suggests that the driving force behind phase separation is the bandgap reduction of iodide-rich phases. It additionally explains observed non-linear intensity dependencies, as well as self-limited growth of iodide-rich domains. Most importantly, our model reveals that mixed halide perovskites can be stabilized against phase separation by deliberately engineering carrier diffusion lengths and injected carrier densities.
Experiments and theory of undulatory locomotion in a simple structured medium
Majmudar, Trushant; Keaveny, Eric E.; Zhang, Jun; Shelley, Michael J.
2012-01-01
Undulatory locomotion of micro-organisms through geometrically complex, fluidic environments is ubiquitous in nature and requires the organism to negotiate both hydrodynamic effects and geometrical constraints. To understand locomotion through such media, we experimentally investigate swimming of the nematode Caenorhabditis elegans through fluid-filled arrays of micro-pillars and conduct numerical simulations based on a mechanical model of the worm that incorporates hydrodynamic and contact interactions with the lattice. We show that the nematode's path, speed and gait are significantly altered by the presence of the obstacles and depend strongly on lattice spacing. These changes and their dependence on lattice spacing are captured, both qualitatively and quantitatively, by our purely mechanical model. Using the model, we demonstrate that purely mechanical interactions between the swimmer and obstacles can produce complex trajectories, gait changes and velocity fluctuations, yielding some of the life-like dynamics exhibited by the real nematode. Our results show that mechanics, rather than biological sensing and behaviour, can explain some of the observed changes in the worm's locomotory dynamics. PMID:22319110
Computational models of spatial updating in peri-saccadic perception
Hamker, Fred H.; Zirnsak, Marc; Ziesche, Arnold; Lappe, Markus
2011-01-01
Perceptual phenomena that occur around the time of a saccade, such as peri-saccadic mislocalization or saccadic suppression of displacement, have often been linked to mechanisms of spatial stability. These phenomena are usually regarded as errors in processes of trans-saccadic spatial transformations and they provide important tools to study these processes. However, a true understanding of the underlying brain processes that participate in the preparation for a saccade and in the transfer of information across it requires a closer, more quantitative approach that links different perceptual phenomena with each other and with the functional requirements of ensuring spatial stability. We review a number of computational models of peri-saccadic spatial perception that provide steps in that direction. Although most models are concerned with only specific phenomena, some generalization and interconnection between them can be obtained from a comparison. Our analysis shows how different perceptual effects can coherently be brought together and linked back to neuronal mechanisms on the way to explaining vision across saccades. PMID:21242143
An introduction to autonomous control systems
NASA Technical Reports Server (NTRS)
Antsaklis, Panos J.; Passino, Kevin M.; Wang, S. J.
1991-01-01
The functions, characteristics, and benefits of autonomous control are outlined. An autonomous control functional architecture for future space vehicles that incorporates the concepts and characteristics described is presented. The controller is hierarchical, with an execution level (the lowest level), coordination level (middle level), and management and organization level (highest level). The general characteristics of the overall architecture, including those of the three levels, are explained, and an example to illustrate their functions is given. Mathematical models for autonomous systems, including 'logical' discrete event system models, are discussed. An approach to the quantitative, systematic modeling, analysis, and design of autonomous controllers is also discussed. It is a hybrid approach since it uses conventional analysis techniques based on difference and differential equations and new techniques for the analysis of systems described with a symbolic formalism such as finite automata. Some recent results from the areas of planning and expert systems, machine learning, artificial neural networks, and the area of restructurable controls are briefly outlined.
Anomalous temperature dependence of layer spacing of de Vries liquid crystals: Compensation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkel, K.; Kocot, A.; Vij, J. K., E-mail: jvij@tcd.ie
Smectic liquid crystals that exhibit a temperature-independent layer thickness offer technological advantages for their use in displays and photonic devices. The temperature dependence of the layer spacing in the SmA and SmC phases of de Vries liquid crystals is found to exhibit distinct features. On entering the SmC phase, the layer thickness initially decreases below the SmA-SmC (T_{A-C}) transition temperature but increases anomalously with reducing temperature despite the increasing molecular tilt. This anomalous observation is explained quantitatively. Results of IR spectroscopy show that the layer shrinkage is caused by the tilt of the mesogen's rigid core, whereas the expansion is caused by the chains becoming more ordered with reducing temperature. This mutual compensation of molecular fragments contributing to the layer thickness differs from previous models. The orientational order parameter of the rigid core of the mesogen provides direct evidence for the de Vries cone model in the SmA phase for the two compounds investigated.
Reassessing Pliocene temperature gradients
NASA Astrophysics Data System (ADS)
Tierney, J. E.
2017-12-01
With CO2 levels similar to present, the Pliocene Warm Period (PWP) is one of our best analogs for climate change in the near future. Temperature proxy data from the PWP describe dramatically reduced zonal and meridional temperature gradients that have proved difficult to reproduce with climate model simulations. Recently, debate has emerged regarding the interpretation of the proxies used to infer Pliocene temperature gradients; these interpretations affect the magnitude of inferred change and the degree of inconsistency with existing climate model simulations of the PWP. Here, I revisit the issue using Bayesian proxy forward modeling and prediction that propagates known uncertainties in the Mg/Ca, UK'37, and TEX86 proxy systems. These new spatiotemporal predictions are quantitatively compared to PWP simulations to assess probabilistic agreement. Results show generally good agreement between existing Pliocene simulations from the PlioMIP ensemble and SST proxy data, suggesting that exotic changes in the ocean-atmosphere are not needed to explain the Pliocene climate state. Rather, the spatial changes in SST during the Pliocene are largely consistent with elevated CO2 forcing.
Construction of a Simple Actograph
ERIC Educational Resources Information Center
Quackenbush, Roger E.
1977-01-01
Diagrams and explains construction of an actograph which quantitatively records daily movements of animals for a 24-hour period. Combines use of a kymograph and the teetering box of Palmer. Biorhythm activities with fiddler crabs, cockroaches, and hamsters are suggested. (CS)
Detection and Enumeration of Microorganisms in Water and Waste Waters.
ERIC Educational Resources Information Center
Andrews, S.
1980-01-01
Presents and explains analytical procedures for the detection and quantitative enumeration of bacteria which constitute or are indicators of fecal contamination of aquatic environments. Tests are given for Escherichia coli, fecal Streptococci, Clostridium perfringens, and Salmonella. (WB)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-25
... ask the client to evaluate the U.S. Commercial Service on its customer service provision. Results from... will enrich the quantitative survey data by providing insights and a descriptive context to explain the...
Mavar-Haramija, Marija; Prats-Galino, Alberto; Méndez, Juan A Juanes; Puigdelívoll-Sánchez, Anna; de Notaris, Matteo
2015-10-01
A three-dimensional (3D) model of the skull base was reconstructed from the pre- and post-dissection head CT images and embedded in a Portable Document Format (PDF) file, which can be opened by freely available software and used offline. The CT images were segmented using a specific 3D software platform for biomedical data, and the resulting 3D geometrical models of anatomical structures were used for dual purpose: to simulate the extended endoscopic endonasal transsphenoidal approaches and to perform the quantitative analysis of the procedures. The analysis consisted of bone removal quantification and the calculation of quantitative parameters (surgical freedom and exposure area) of each procedure. The results are presented in three PDF documents containing JavaScript-based functions. The 3D-PDF files include reconstructions of the nasal structures (nasal septum, vomer, middle turbinates), the bony structures of the anterior skull base and maxillofacial region and partial reconstructions of the optic nerve, the hypoglossal and vidian canals and the internal carotid arteries. Alongside the anatomical model, axial, sagittal and coronal CT images are shown. Interactive 3D presentations were created to explain the surgery and the associated quantification methods step-by-step. The resulting 3D-PDF files allow the user to interact with the model through easily available software, free of charge and in an intuitive manner. The files are available for offline use on a personal computer and no previous specialized knowledge in informatics is required. The documents can be downloaded at http://hdl.handle.net/2445/55224 .
The color of the Martian sky and its influence on the illumination of the Martian surface
Thomas, N.; Markiewicz, W.J.; Sablotny, R.M.; Wuttke, M.W.; Keller, H.U.; Johnson, J. R.; Reid, R.J.; Smith, R.H.
1999-01-01
The dust in the atmosphere above the Mars Pathfinder landing site produced a bright, red sky that increases in redness toward the horizon at midday. There is also evidence for an absorption band in the scattered light from the sky at 860 nm. A model of the sky brightness has been developed [Markiewicz et al., this issue] and tested against Imager for Mars Pathfinder (IMP) observations of calibration targets on the lander. The resulting model has been used to quantify the total diffuse flux onto a surface parallel to the local level for several solar elevation angles and optical depths. The model shows that the diffuse illumination in shadowed areas is strongly reddened while areas illuminated directly by the Sun (and the blue forward scattering peak) see a more solar-type spectrum, in agreement with Viking and IMP observations. Quantitative corrections for the reddening in shadowed areas are demonstrated. It is shown quantitatively that the unusual appearance of the rock Yogi (the east face of which appeared relatively blue in images taken during the morning but relatively red during the afternoon) can be explained purely by the changing illumination geometry. We conclude that any spectrophotometric analysis of surfaces on Mars must take into account the diffuse flux. Specifically, the reflectances of surfaces viewed under different illumination geometries cannot be investigated for spectral diversity unless a correction has been applied which removes the influence of the reddened diffuse flux. Copyright 1999 by the American Geophysical Union.
Holmes, N G; Wieman, Carl E; Bonn, D A
2015-09-08
The ability to make decisions based on data, with its inherent uncertainties and variability, is a complex and vital skill in the modern world. The need for such quantitative critical thinking occurs in many different contexts, and although it is an important goal of education, that goal is seldom being achieved. We argue that the key element for developing this ability is repeated practice in making decisions based on data, with feedback on those decisions. We demonstrate a structure for providing suitable practice that can be applied in any instructional setting that involves the acquisition of data and relating that data to scientific models. This study reports the results of applying that structure in an introductory physics laboratory course. Students in an experimental condition were repeatedly instructed to make and act on quantitative comparisons between datasets, and between data and models, an approach that is common to all science disciplines. These instructions were slowly faded across the course. After the instructions had been removed, students in the experimental condition were 12 times more likely to spontaneously propose or make changes to improve their experimental methods than a control group, who performed traditional experimental activities. The students in the experimental condition were also four times more likely to identify and explain a limitation of a physical model using their data. Students in the experimental condition also showed much more sophisticated reasoning about their data. These differences between the groups were seen to persist into a subsequent course taken the following year.
X-ray microanalysis of porous materials using Monte Carlo simulations.
Poirier, Dominique; Gauvin, Raynald
2011-01-01
Quantitative X-ray microanalysis models, such as ZAF or φ(ρz) methods, are normally based on solid, flat-polished specimens. This limits their use in various domains where porous materials are studied, such as powder metallurgy, catalysts, foams, etc. Previous experimental studies have shown that an increase in porosity leads to a deficit in X-ray emission for various materials, such as graphite, Cr2O3, CuO, ZnS (Ichinokawa et al., '69), Al2O3, and Ag (Lakis et al., '92). However, the mechanisms responsible for this decrease are unclear. The porosity by itself does not explain the loss in intensity; other mechanisms have therefore been proposed, such as extra energy loss through the scattering of electrons by surface plasmons generated at the pore-solid interfaces, surface roughness, extra charging at the pore-solid interfaces, or carbon diffusion in the pores. However, the exact mechanism is still unclear. In order to better understand the effects of porosity on quantitative microanalysis, a new approach using Monte Carlo simulations was developed by Gauvin (2005) using a constant pore size. In this new study, the X-ray emission model was modified to include a random log-normal distribution of pore sizes in the simulated materials. After a literature review of previous work on X-ray microanalysis of porous materials, this article presents some of the results obtained with Gauvin's modified model, which are then compared with experimental results. Copyright © 2011 Wiley Periodicals, Inc.
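The modification described, replacing a constant pore size with a random log-normal distribution of pore sizes, can be sketched as below; the median radius, log-normal width, target porosity, and interaction volume are illustrative assumptions.

# Hedged sketch: sample a log-normal distribution of pore radii until an
# assumed target porosity is reached, for use in a Monte Carlo X-ray model.
import numpy as np

rng = np.random.default_rng(42)
median_radius_nm = 50.0            # assumed median pore radius
sigma_ln = 0.6                     # assumed log-normal shape parameter
target_porosity = 0.20             # assumed pore volume fraction
volume_nm3 = 1.0e9                 # assumed interaction volume (1 um^3)

radii = []
pore_volume = 0.0
while pore_volume < target_porosity * volume_nm3:
    r = rng.lognormal(mean=np.log(median_radius_nm), sigma=sigma_ln)
    radii.append(r)
    pore_volume += (4.0 / 3.0) * np.pi * r ** 3

radii = np.array(radii)
print("pores placed:", radii.size, "mean radius (nm):", round(radii.mean(), 1))
# Each simulated electron trajectory would then alternate between solid and
# pore segments according to these sampled radii.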
Holmes, N. G.; Wieman, Carl E.; Bonn, D. A.
2015-01-01
The ability to make decisions based on data, with its inherent uncertainties and variability, is a complex and vital skill in the modern world. The need for such quantitative critical thinking occurs in many different contexts, and although it is an important goal of education, that goal is seldom being achieved. We argue that the key element for developing this ability is repeated practice in making decisions based on data, with feedback on those decisions. We demonstrate a structure for providing suitable practice that can be applied in any instructional setting that involves the acquisition of data and relating that data to scientific models. This study reports the results of applying that structure in an introductory physics laboratory course. Students in an experimental condition were repeatedly instructed to make and act on quantitative comparisons between datasets, and between data and models, an approach that is common to all science disciplines. These instructions were slowly faded across the course. After the instructions had been removed, students in the experimental condition were 12 times more likely to spontaneously propose or make changes to improve their experimental methods than a control group, who performed traditional experimental activities. The students in the experimental condition were also four times more likely to identify and explain a limitation of a physical model using their data. Students in the experimental condition also showed much more sophisticated reasoning about their data. These differences between the groups were seen to persist into a subsequent course taken the following year. PMID:26283351
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.
Pang, Wei; Coghill, George M
2015-05-01
In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The simulation results obtained are presented together with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.