ERIC Educational Resources Information Center
Dunlop, David Livingston
The purpose of this study was to use an information theoretic memory model to quantitatively investigate the classification, sorting, and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…
ERIC Educational Resources Information Center
Dodd, Bucky J.
2013-01-01
Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…
ERIC Educational Resources Information Center
LoPresto, Michael C.
2014-01-01
What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that…
Development of Nomarski microscopy for quantitative determination of surface topography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J. S.; Gordon, R. L.; Lessor, D. L.
1979-01-01
The use of Nomarski differential interference contrast (DIC) microscopy has been extended to provide nondestructive, quantitative analysis of a sample's surface topography. Theoretical modeling has determined the dependence of the image intensity on the microscope's optical components, the sample's optical properties, and the sample's surface orientation relative to the microscope. Results include expressions to allow the inversion of image intensity data to determine sample surface slopes. A commercial Nomarski system has been modified and characterized to allow the evaluation of the optical model. Data have been recorded with smooth, planar samples that verify the theoretical predictions.
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index of the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis of sensitivity is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometry techniques in quantitative phase imaging. Based on these derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
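The scaling behind such shot-noise-limited sensitivities can be illustrated with a textbook toy calculation (not the paper's actual derivation): for a two-beam interferometer detecting N photons with fringe visibility V, the phase uncertainty is roughly 1/(V·sqrt(N)), which converts to an optical pathlength uncertainty of (lambda/2pi)·dphi. The function names and default values below are illustrative assumptions.

```python
import math

def shot_noise_phase_sensitivity(n_photons, visibility=1.0):
    """Textbook shot-noise-limited phase uncertainty (radians) for a
    two-beam interferometer: sigma_phi ~ 1 / (V * sqrt(N))."""
    return 1.0 / (visibility * math.sqrt(n_photons))

def pathlength_sensitivity_nm(n_photons, wavelength_nm=633.0, visibility=1.0):
    # Convert phase noise to optical pathlength noise: dL = (lambda / 2pi) * dphi
    dphi = shot_noise_phase_sensitivity(n_photons, visibility)
    return wavelength_nm / (2.0 * math.pi) * dphi

# ~1e5 detected photoelectrons at 633 nm already gives sub-nanometre sensitivity:
print(round(pathlength_sensitivity_nm(1e5), 3))  # 0.319
```

Even this crude estimate shows why shot noise, not Gaussian read noise, sets the practical floor for well-exposed modern detectors.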
EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.
ERIC Educational Resources Information Center
Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith
2002-01-01
Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis, environmental chemistry, and fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)
Multi-scale Modeling of Chromosomal DNA in Living Cells
NASA Astrophysics Data System (ADS)
Spakowitz, Andrew
The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts quantitatively capture aspects of the behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing the dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the physical contributions relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).
Interface Pattern Selection in Directional Solidification
NASA Technical Reports Server (NTRS)
Trivedi, Rohit; Tewari, Surendra N.
2001-01-01
The central focus of this research is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. Ground-based studies have established that the conditions under which cellular and dendritic microstructures form are precisely those where convection effects are dominant in bulk samples. Thus, experimental data cannot be obtained terrestrially under a purely diffusive regime. Furthermore, reliable theoretical models that can quantitatively incorporate fluid flow in the pattern selection criterion are not yet possible. Consequently, microgravity experiments on cellular and dendritic growth are designed to obtain benchmark data under diffusive growth conditions that can be quantitatively analyzed and compared with rigorous theoretical models to establish the fundamental principles that govern the selection of a specific microstructure and its length scales. In a cellular structure, different cells in an array are strongly coupled, so cellular pattern evolution is controlled by complex interactions between thermal diffusion, solute diffusion, and interface effects. These interactions admit an infinite set of solutions, of which the system selects only a narrow band. The aim of this investigation is to obtain benchmark data and develop a rigorous theoretical model that will allow us to quantitatively establish the physics of this selection process.
QTest: Quantitative Testing of Theories of Binary Choice.
Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William
2014-01-01
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.
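The core idea of separating fixed-but-error-prone preferences from genuinely wavering ones can be caricatured with a simple binomial bound. This is a much-simplified sketch of the kind of order-constrained test QTest formalizes; the function names and the 0.25 error bound are illustrative assumptions, not the software's actual interface.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def fixed_preference_plausible(n_trials, n_b_choices, error_bound=0.25, alpha=0.05):
    """Caricature of a QTest-style bound: a decision maker with a fixed
    preference for option A who errs with probability <= error_bound should
    rarely choose B. Reject 'fixed preference for A' if choosing B as often
    as observed would be improbable even at the maximal error rate."""
    p_value = binom_tail(n_trials, n_b_choices, error_bound)
    return p_value >= alpha  # True: data consistent with fixed preference

# 8 B-choices in 20 trials is still consistent with a 25% error bound;
# 15 of 20 is not, and suggests genuinely wavering (mixed) preferences:
print(fixed_preference_plausible(20, 8), fixed_preference_plausible(20, 15))
```

The real framework handles many gambles and order constraints jointly, but the one-item version above conveys why an explicit error model is needed before choice variability can be interpreted as preference variability.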
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which generally suffers from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in two ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data from 194 older adults. This first validation tested the prediction that, given a particular starting point (first empirical data point), the model would generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the validation analyses show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process, and may give new theoretical and applied insights into the study of HRQOL and its development over time in the aging population. PMID:26604722
Growth of wormlike micelles in nonionic surfactant solutions: Quantitative theory vs. experiment.
Danov, Krassimir D; Kralchevsky, Peter A; Stoyanov, Simeon D; Cook, Joanne L; Stott, Ian P; Pelan, Eddie G
2018-06-01
Despite the considerable advances of the molecular-thermodynamic theory of micelle growth, agreement between theory and experiment has been achieved only in isolated cases. A general theory that can provide a self-consistent quantitative description of the growth of wormlike micelles in mixed surfactant solutions, including the experimentally observed high peaks in viscosity and aggregation number, is still missing. As a step toward such a theory, here we consider the simplest system: nonionic wormlike surfactant micelles from polyoxyethylene alkyl ethers, CiEj. Our goal is to construct a molecular-thermodynamic model that is in agreement with the available experimental data. To this end, we systematized data for the micelle mean mass aggregation number, from which the micelle growth parameter was determined at various temperatures. None of the available models can give a quantitative description of these data. We constructed a new model, based on theoretical expressions for the interfacial-tension, headgroup-steric and chain-conformation components of the micelle free energy, along with appropriate expressions for the parameters of the model, including their temperature and curvature dependencies. Special attention was paid to the surfactant chain-conformation free energy, for which a new, more general formula was derived. As a result, relatively simple theoretical expressions are obtained. All parameters that enter these expressions are known, which facilitates the theoretical modeling of micelle growth for various nonionic surfactants in excellent agreement with experiment. The constructed model can serve as a basis that can be further upgraded to obtain a quantitative description of micelle growth in more complicated systems, including binary and ternary mixtures of nonionic, ionic and zwitterionic surfactants, which determine the viscosity and stability of various formulations in personal-care and household detergency.
Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Coupling biology and oceanography in models.
Fennel, W; Neumann, T
2001-08-01
The dynamics of marine ecosystems, i.e. the changes of observable chemical-biological quantities in space and time, are driven by biological and physical processes. Predictions of future developments of marine systems need a theoretical framework, i.e. models, solidly based on research and understanding of the different processes involved. The natural way to describe marine systems theoretically seems to be the embedding of chemical-biological models into circulation models. However, while circulation models are relatively advanced the quantitative theoretical description of chemical-biological processes lags behind. This paper discusses some of the approaches and problems in the development of consistent theories and indicates the beneficial potential of the coupling of marine biology and oceanography in models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Pengpeng; Zheng, Xiaojing, E-mail: xjzheng@xidian.edu.cn; Jin, Ke
2016-04-14
Weak magnetic nondestructive testing (e.g., the metal magnetic memory method) concerns the magnetization variation of ferromagnetic materials due to applied load and the weak magnetic field surrounding them. A key issue for these nondestructive technologies is the magnetomechanical effect, i.e., the quantitative evaluation of the magnetization state from the stress–strain condition. A representative phenomenological model explaining the magnetomechanical effect was proposed by Jiles in 1995. However, Jiles' model has some quantitative deficiencies; for instance, there is a visible difference between theoretical predictions and experimental measurements of the stress–magnetization curve, especially in the compression case. Based on thermodynamic relations and the approach law of irreversible magnetization, a nonlinear coupled model is proposed to improve the quantitative evaluation of the magnetomechanical effect. Excellent agreement has been achieved between the predictions of the present model and previous experimental results. In comparison with Jiles' model, the prediction accuracy is improved greatly by the present model, particularly in the compression case. A detailed study has also been performed to reveal the effects of the initial magnetization status, cyclic loading, and the demagnetization factor on the magnetomechanical effect. Our theoretical model reveals that the stable weak magnetic signals observed in nondestructive testing after multiple cyclic loads arise because the first few cycles eliminate most of the irreversible magnetization. Remarkably, the demagnetization field can weaken the magnetomechanical effect and therefore significantly reduce the testing capability. This theoretical model can be adopted to quantitatively analyze magnetic memory signals, and can then be applied in weak magnetic nondestructive testing.
Fitness to work of astronauts in conditions of action of the extreme emotional factors
NASA Astrophysics Data System (ADS)
Prisniakova, L. M.
2004-01-01
A theoretical model for quantitatively determining the influence of the level of emotional exertion on the success of human activity is presented. Learning curves for memorized words are analyzed in groups with different levels of emotional exertion. The obtained magnitudes of the time constant T, which depend on the type of emotional exertion, provide a quantitative measure of that exertion. The time constants could also be used to predict an astronaut's fitness to work under extreme conditions. A reversal of the sign of the influence on the efficiency of human activity is detected. The paper offers a mathematical model of the relation between successful activity and motivation or emotional exertion (the Yerkes-Dodson law). The proposed models can serve as a theoretical basis for the quantitative characteristics used in assessing astronaut activity under emotional factors during the selection phase.
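The two ingredients described here, an exponential learning curve with time constant T and an inverted-U (Yerkes-Dodson) relation between arousal and performance, can be sketched in toy form as follows. The specific functional forms and parameter values are illustrative assumptions, not the paper's fitted model.

```python
import math

def learning_curve(t, capacity=1.0, T=5.0):
    """First-order learning model: recall(t) = capacity * (1 - exp(-t/T)).
    A larger time constant T (slower learning) here stands in for the
    retarding effect of higher emotional exertion."""
    return capacity * (1.0 - math.exp(-t / T))

def yerkes_dodson(arousal, optimum=0.5, width=0.2):
    """Inverted-U relation between arousal and performance, in a toy
    Gaussian form: performance peaks at a moderate arousal level."""
    return math.exp(-((arousal - optimum) ** 2) / (2.0 * width ** 2))

# Performance at moderate arousal exceeds both low- and high-arousal cases:
print(yerkes_dodson(0.5) > yerkes_dodson(0.1) and yerkes_dodson(0.5) > yerkes_dodson(0.9))
```

Fitting T to observed learning curves under different exertion levels is what turns the qualitative inverted-U picture into a quantitative measure.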
Noninvasive identification of the total peripheral resistance baroreflex
NASA Technical Reports Server (NTRS)
Mukkamala, Ramakrishna; Toska, Karin; Cohen, Richard J.
2003-01-01
We propose two identification algorithms for quantitating the total peripheral resistance (TPR) baroreflex, an important contributor to short-term arterial blood pressure (ABP) regulation. Each algorithm analyzes beat-to-beat fluctuations in ABP and cardiac output, which may both be obtained noninvasively in humans. For a theoretical evaluation, we applied both algorithms to a realistic cardiovascular model; only one of the algorithms proved to be reliable. This algorithm was able to track changes in the static gains of both the arterial and cardiopulmonary TPR baroreflexes. We then applied both algorithms to a preliminary set of human data and obtained contrasting results much like those obtained from the cardiovascular model, thereby making the theoretical evaluation results more meaningful. This study suggests that, with experimental testing, the reliable identification algorithm may provide a powerful, noninvasive means of quantitating the TPR baroreflex. It also provides an example of the role that models can play in the development and initial evaluation of algorithms aimed at quantitating important physiological mechanisms.
An overview of quantitative approaches in Gestalt perception.
Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H
2016-09-01
Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception, and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state of the art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research, and there is a clear trend to apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.
Collegiate Grading Practices and the Gender Pay Gap.
ERIC Educational Resources Information Center
Dowd, Alicia C.
2000-01-01
Presents a theoretical analysis showing that relatively low grading in quantitative fields and relatively high grading in verbal fields create a disincentive for college women to invest in quantitative study. Extends research by R. Sabot and J. Wakeman-Linn. Models pressures on grading practices using higher education production functions. (Author/SLD)
ERIC Educational Resources Information Center
Shandra, John M.; Nobles, Jenna E.; London, Bruce; Williamson, John B.
2005-01-01
This study presents quantitative, sociological models designed to account for cross-national variation in child mortality. We consider variables linked to five different theoretical perspectives that include the economic modernization, social modernization, political modernization, ecological-evolutionary, and dependency perspectives. The study is…
NASA Technical Reports Server (NTRS)
Glassgold, Alfred E.; Huggins, Patrick J.
1987-01-01
The study of the outer envelopes of cool evolved stars has become an active area of research. The physical properties of CS envelopes are presented. Observations in many wavelength bands are relevant. Observations are summarized and theoretical considerations concerning the chemistry are discussed. Recent theoretical considerations show that the thermal equilibrium model is of limited use for understanding the chemistry of the outer CS envelopes. The theoretical modeling of the chemistry of CS envelopes provides a quantitative test of chemical concepts that have a broader interest than the envelopes themselves.
NASA Astrophysics Data System (ADS)
LoPresto, Michael C.
2014-09-01
What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that compare well to both the desired frequencies of the musical pitches and those actually played on a real trombone.
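The starting point for such a model is the resonance series of the plain cylindrical tubing, which the flaring bell and cup mouthpiece then shift toward a musically usable harmonic series. A minimal numeric sketch of those uncorrected resonances follows; the tube length, mode count, and function name are illustrative assumptions, not LoPresto's actual treatment.

```python
def closed_cylinder_resonances(length_m, n_modes=4, c=343.0):
    """Resonant frequencies (Hz) of an idealized cylindrical tube closed at
    one end (the player's lips): odd multiples of c / (4L). A real trombone's
    bell and mouthpiece raise and compress these modes so that the upper
    resonances approach a complete harmonic series (2f, 3f, 4f, ...)."""
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_modes + 1)]

# Roughly 2.75 m of tubing in first position gives odd-harmonic resonances
# near 31, 94, 156, 218 Hz before the bell/mouthpiece corrections:
print([round(f, 1) for f in closed_cylinder_resonances(2.75)])
```

Comparing these raw odd-harmonic modes with actual played pitches makes clear how large the bell and mouthpiece corrections must be.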
Quantitative confirmation of diffusion-limited oxidation theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, K.T.; Clough, R.L.
1990-01-01
Diffusion-limited (heterogeneous) oxidation effects are often important in studies of polymer degradation. Such effects are common in polymers subjected to ionizing radiation at relatively high dose rates. To better understand the underlying oxidation processes and to aid in the planning of accelerated aging studies, it would be desirable to be able to monitor and quantitatively understand these effects. In this paper, we briefly review a theoretical diffusion approach which derives model profiles for oxygen-surrounded sheets of material by combining oxygen permeation rates with kinetically based oxygen consumption expressions. The theory leads to a simple governing expression involving the oxygen consumption and permeation rates together with two model parameters {alpha} and {beta}. To test the theory, gamma-initiated oxidation of a sheet of commercially formulated EPDM rubber was performed under conditions which led to diffusion-limited oxidation. Profile shapes from the theoretical treatments are shown to accurately fit experimentally derived oxidation profiles. In addition, direct measurements on the same EPDM material of the oxygen consumption and permeation rates, together with values of {alpha} and {beta} derived from the fitting procedure, allow us to quantitatively confirm for the first time the governing theoretical relationship. 17 refs., 3 figs.
NASA Astrophysics Data System (ADS)
Reineker, P.; Kenkre, V. M.; Kühne, R.
1981-08-01
A quantitative comparison is given between a simple theoretical prediction for the drift mobility of photo-electrons in organic molecular crystals, calculated within the model of coupled band-like and hopping motion, and the experiments of Schein et al. and Karl et al. on naphthalene.
ERIC Educational Resources Information Center
Castillo, Alan F.
2014-01-01
The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…
NASA Technical Reports Server (NTRS)
Fu, L. S.
1980-01-01
The three main topics covered are: (1) fracture toughness and microstructure; (2) quantitative ultrasonics and microstructure; and (3) scattering and related mathematical methods. Literature in these areas is reviewed to give insight into the search for a theoretical foundation for the quantitative ultrasonic measurement of fracture toughness. The literature review shows that fracture toughness is inherently related to the microstructure; in particular, it depends upon the spacing of inclusions or second-phase particles and the aspect ratio of second-phase particles. There are indications that ultrasonic velocity and attenuation measurements can be used to determine fracture toughness. This leads to a review of the mathematical models available for solving boundary value problems related to the microstructural factors that govern fracture toughness and wave motion. A framework for the theoretical study of the quantitative determination of fracture toughness is described and suggestions for future research are proposed.
Classical and quantum magnetism in giant Keplerate magnetic molecules.
Müller, A; Luban, M; Schröder, C; Modler, R; Kögerler, P; Axenovich, M; Schnack, J; Canfield, P; Bud'ko, S; Harrison, N
2001-09-17
Complementary theoretical modeling methods are presented for the classical and quantum Heisenberg model to explain the magnetic properties of nanometer-sized magnetic molecules. Excellent quantitative agreement is achieved between our experimental data down to 0.1 K and for fields up to 60 Tesla and our theoretical results for the giant Keplerate species {Mo72Fe30}, by far the largest paramagnetic molecule synthesized to date. © 2001 WILEY-VCH Verlag GmbH, Weinheim, Fed. Rep. of Germany.
ERIC Educational Resources Information Center
Castro-Villarreal, Felicia; Guerra, Norma; Sass, Daniel; Hseih, Pei-Hsuan
2014-01-01
Theoretical models were tested using structural equation modeling to evaluate the interrelations among cognitive motivational variables and academic achievement using a sample of 128 predominately Hispanic pre-service teachers enrolled in two undergraduate educational psychology classes. Data were gathered using: (1) a quantitative questionnaire…
College Students Solving Chemistry Problems: A Theoretical Model of Expertise
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Glynn, Shawn M.
2009-01-01
A model of expertise in chemistry problem solving was tested on undergraduate science majors enrolled in a chemistry course. The model was based on Anderson's "Adaptive Control of Thought-Rational" (ACT-R) theory. The model shows how conceptualization, self-efficacy, and strategy interact and contribute to the successful solution of quantitative,…
Ernst, Marielle; Kriston, Levente; Romero, Javier M; Frölich, Andreas M; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik
2016-01-01
We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of the key competencies of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance status of participants and the evolution of their technical competency over time.
Towards a neuro-computational account of prism adaptation.
Petitet, Pierre; O'Reilly, Jill X; O'Shea, Jacinta
2017-12-14
Prism adaptation has a long history as an experimental paradigm used to investigate the functional and neural processes that underlie sensorimotor control. In the neuropsychology literature, prism adaptation behaviour is typically explained by reference to a traditional cognitive psychology framework that distinguishes putative functions, such as 'strategic control' versus 'spatial realignment'. This theoretical framework lacks conceptual clarity, quantitative precision and explanatory power. Here, we advocate for an alternative computational framework that offers several advantages: 1) an algorithmic explanatory account of the computations and operations that drive behaviour; 2) expression in quantitative mathematical terms; 3) embedding within a principled theoretical framework (Bayesian decision theory, state-space modelling); and 4) a means to generate and test quantitative behavioural predictions. This computational framework offers a route towards mechanistic neurocognitive explanations of prism adaptation behaviour, and thus constitutes a conceptual advance over the traditional theoretical framework. In this paper, we illustrate how Bayesian decision theory and state-space models offer principled explanations for a range of behavioural phenomena in the field of prism adaptation (e.g. visual capture, the magnitude of visual versus proprioceptive realignment, spontaneous recovery, and the dynamics of adaptation memory). We argue that this explanatory framework can advance understanding of the functional and neural mechanisms that implement prism adaptation behaviour, by enabling quantitative tests of hypotheses that go beyond merely descriptive mapping claims that 'brain area X is (somehow) involved in psychological process Y'. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies
Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A
2013-01-01
A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies traits associated with reproductive investment and growth evolved more slowly than maturation-related traits. In empirical observations, age-at-maturation changed faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys for near-surface applications. © 2005 Elsevier B.V. All rights reserved.
Experimental Control of Simple Pendulum Model
ERIC Educational Resources Information Center
Medina, C.
2004-01-01
This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge of oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze how the model's assumptions affect its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…
Guo, Yujie; Shen, Jie; Ye, Xuchun; Chen, Huali; Jiang, Anli
2013-08-01
This paper aims to report the design, and to test the effectiveness, of an innovative caring teaching model based on a theoretical framework of caring in the Chinese context. Since the 1970s, caring has been a core value in nursing education. In a previous study, a theoretical framework of caring in the Chinese context was developed through a grounded theory study and was considered beneficial for caring education. A caring teaching model was designed theoretically, and a one-group pre- and post-test quasi-experimental study was administered to test its effectiveness. From Oct 2009 to Jul 2010, a cohort of grade-2 undergraduate nursing students (n=64) in a Chinese medical school was recruited to participate in the study. Data were gathered through quantitative and qualitative methods to evaluate the effectiveness of the caring teaching model. The caring teaching model created an esthetic situation and an experiential learning style for teaching caring that was integrated within the curricula. Quantitative data from the quasi-experimental study showed that the post-test scores of each item were higher than those on the pre-test (p<0.01). Thematic analysis of 1220 narratives from students' caring journals and reports of participant class observation revealed two main thematic categories, which reflected, from the students' points of view, the development of student caring character and the impact that the caring teaching model had in this regard. The model could be used as an integrated approach to teach caring in nursing curricula. It would also be beneficial for nursing administrators in cultivating caring nurse practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Quantitative Approach to Assessing System Evolvability
NASA Technical Reports Server (NTRS)
Christian, John A., III
2004-01-01
When selecting a system from multiple candidates, the customer seeks the one that best meets his or her needs. Recently the desire for evolvable systems has become more important and engineers are striving to develop systems that accommodate this need. In response to this search for evolvability, we present a historical perspective on evolvability, propose a refined definition of evolvability, and develop a quantitative method for measuring this property. We address this quantitative methodology from both a theoretical and practical perspective. This quantitative model is then applied to the problem of evolving a lunar mission to a Mars mission as a case study.
For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict th...
Toward Validation of the Genius Discipline-Specific Literacy Model
ERIC Educational Resources Information Center
Ellis, Edwin S.; Wills, Stephen; Deshler, Donald D.
2011-01-01
An analysis of the rationale and theoretical foundations of the Genius Discipline-specific Literacy Model and its use of SMARTvisuals to cue information-processing skills and strategies and focus attention on essential informational elements in high-frequency topics in history and the English language arts is presented. Quantitative data…
Ernst, Marielle; Kriston, Levente; Romero, Javier M.; Frölich, Andreas M.; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik
2016-01-01
Purpose We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. Materials and Methods We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. Results In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Conclusion Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of the key competencies of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time. PMID:26848840
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French
2007-03-01
A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295μm lateral dimensions, 16-39μm membrane thickness, and 1-28psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150μm width) are shown to behave like thin springs.
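The abstract does not give the explicit form of the fourth-power polynomial, so the sketch below invents a plausible workflow under stated assumptions: synthetic valve data (width w, length l, membrane thickness t, closing pressure p, with a made-up ground-truth law) and a polynomial basis in a single combined dimension variable, fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: valve width w, length l (um), membrane
# thickness t (um), and measured closing pressure p (psi). Values are
# synthetic; the real study charted 1587 dimension combinations.
w = rng.uniform(45, 295, 200)
l = rng.uniform(45, 295, 200)
t = rng.uniform(16, 39, 200)
p = 80.0 * t / np.sqrt(w * l) + 0.5 + rng.normal(0, 0.1, 200)  # invented law + noise

# Superpose polynomial basis terms (up to fourth order) in a combined
# dimension variable and fit the coefficients by least squares:
def features(w, l, t):
    x = t / np.sqrt(w * l)
    return np.column_stack([np.ones_like(x), x, x**2, x**3, x**4])

coef, *_ = np.linalg.lstsq(features(w, l, t), p, rcond=None)
pred = features(w, l, t) @ coef
rmse = np.sqrt(np.mean((pred - p) ** 2))
```

The fitted polynomial can then predict closing pressure for a new dimension combination via `features(w_new, l_new, t_new) @ coef`.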
Ezard, Thomas H.G.; Jørgensen, Peter S.; Zimmerman, Naupaka; Chamberlain, Scott; Salguero-Gómez, Roberto; Curran, Timothy J.; Poisot, Timothée
2014-01-01
Proficiency in mathematics and statistics is essential to modern ecological science, yet few studies have assessed the level of quantitative training received by ecologists. To do so, we conducted an online survey. The 937 respondents were mostly early-career scientists who studied biology as undergraduates. We found a clear self-perceived lack of quantitative training: 75% were not satisfied with their understanding of mathematical models; 75% felt that the level of mathematics was “too low” in their ecology classes; 90% wanted more mathematics classes for ecologists; and 95% more statistics classes. Respondents thought that 30% of classes in ecology-related degrees should be focused on quantitative disciplines, which is likely higher than for most existing programs. The main suggestion to improve quantitative training was to relate theoretical and statistical modeling to applied ecological problems. Improving quantitative training will require dedicated, quantitative classes for ecology-related degrees that contain good mathematical and statistical practice. PMID:24688862
Balaev, Mikhail
2014-07-01
The author examines how time-delayed effects of economic development, education, and gender equality influence political democracy. A review of the literature shows an inadequate understanding of lagged effects, which raises methodological and theoretical issues with current quantitative studies of democracy. Using country-years as the unit of analysis, the author estimates a series of OLS PCSE models for each predictor with a systematic analysis of the distributions of the lagged effects. A second set of OLS PCSE regressions is then estimated including all three independent variables. The results show that economic development, education, and gender equality have three unique trajectories of time-delayed effects: economic development has long-term effects, education produces continuous effects regardless of the timing, and gender equality has the most prominent immediate and short-term effects. The results call for a reassessment of model specifications and theoretical setups in the quantitative studies of democracy. Copyright © 2014 Elsevier Inc. All rights reserved.
Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy
Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong
2016-01-01
Local surface charge density of lipid membranes influences membrane–protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values. PMID:27561322
A quantitative description for efficient financial markets
NASA Astrophysics Data System (ADS)
Immonen, Eero
2015-09-01
In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.
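The flavour of the two conditions can be illustrated with a toy simulation, in which an aggregate trader corrects a noisy group estimate of the mispricing (condition 1) and feeds forward an internal model of the value drift (condition 2). All numbers below are invented, not taken from the article's example.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 500
drift = 0.05
value = np.cumsum(rng.normal(drift, 0.2, T)) + 100.0  # fundamental value process

price = np.empty(T)
price[0] = 90.0        # start deliberately mispriced
gain = 0.2             # strength of the group's error correction
for t in range(T - 1):
    # Condition 1: the group senses mispricing, though only noisily.
    mispricing = value[t] - price[t] + rng.normal(0, 0.5)
    # Condition 2: the group's internal model feeds forward the value drift.
    price[t + 1] = price[t] + gain * mispricing + drift

err_early = np.abs(value[:50] - price[:50]).mean()
err_late = np.abs(value[-50:] - price[-50:]).mean()
```

The tracking error shrinks from the initial mispricing to a small steady-state value, a discrete-time analogue of the robust asymptotic price-value equality used to define efficiency.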
The interaction of moderately strong shock waves with thick perforated walls of low porosity
NASA Technical Reports Server (NTRS)
Grant, D. J.
1972-01-01
A theoretical prediction is given of the flow through thick perforated walls of low porosity resulting from the impingement of a moderately strong traveling shock wave. The model was a flat plate positioned normal to the direction of the flow. Holes bored in the plate parallel to the direction of the flow provided nominal hole length-to-diameter ratios of 10:1 and an axial porosity of 25 percent of the flow channel cross section. The flow field behind the reflected shock wave was assumed to behave as a reservoir producing a quasi-steady duct flow through the model. Rayleigh and Fanno duct flow theoretical computations for each of three possible auxiliary wave patterns that can be associated with the transmitted shock (to satisfy contact surface compatibility) were used to provide bounding solutions as an alternative to the more complex influence coefficients method. Qualitative and quantitative behavior was verified in a 1.5- by 2.0-in. helium shock tube. High speed Schlieren photography, piezoelectric pressure-time histories, and electronic-counter wave speed measurements were used to assess the extent of correlation with the theoretical flow models. Reduced data indicated the adequacy of the bounding theory approach to predict wave phenomena and quantitative response.
Quantitative genetic methods depending on the nature of the phenotypic trait.
de Villemereuil, Pierre
2018-01-24
A consequence of the assumptions of the infinitesimal model, one of the most important theoretical foundations of quantitative genetics, is that phenotypic traits are predicted to be most often normally distributed (so-called Gaussian traits). But phenotypic traits, especially those interesting for evolutionary biology, might be shaped according to very diverse distributions. Here, I show how quantitative genetics tools have been extended to account for a wider diversity of phenotypic traits using first the threshold model and then more recently using generalized linear mixed models. I explore the assumptions behind these models and how they can be used to study the genetics of non-Gaussian complex traits. I also comment on three recent methodological advances in quantitative genetics that widen our ability to study new kinds of traits: the use of "modular" hierarchical modeling (e.g., to study survival in the context of capture-recapture approaches for wild populations); the use of aster models to study a set of traits with conditional relationships (e.g., life-history traits); and, finally, the study of high-dimensional traits, such as gene expression. © 2018 New York Academy of Sciences.
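A concrete instance of the threshold model mentioned above is the classical Dempster-Lerner transformation, which converts a heritability estimated on the observed binary (0/1) scale to the underlying continuous liability scale. The prevalence and heritability values in this sketch are made up for illustration.

```python
from math import sqrt, pi, exp
from statistics import NormalDist

def liability_heritability(h2_obs, prevalence):
    """Dempster-Lerner transformation: h2_liability = h2_obs * p(1-p) / z**2,
    where z is the standard normal density at the liability threshold."""
    threshold = NormalDist().inv_cdf(1.0 - prevalence)
    z = exp(-threshold ** 2 / 2.0) / sqrt(2.0 * pi)  # normal density at threshold
    return h2_obs * prevalence * (1.0 - prevalence) / z ** 2

# Hypothetical binary trait: prevalence 20%, observed-scale h2 = 0.10
h2_liab = liability_heritability(0.10, 0.20)
```

Note that the liability-scale estimate exceeds the observed-scale one (here roughly 0.20 versus 0.10), since the 0/1 scoring discards variation in the underlying liability.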
NASA Astrophysics Data System (ADS)
Chen, Ying; Yuan, Jianghong; Zhang, Yingchao; Huang, Yonggang; Feng, Xue
2017-10-01
The interfacial failure of integrated circuit (IC) chips integrated on flexible substrates under bending deformation has been studied theoretically and experimentally. A compressive buckling test is used to impose the bending deformation onto the interface between the IC chip and the flexible substrate quantitatively, after which the failed interface is investigated using scanning electron microscopy. A theoretical model is established based on the beam theory and a bi-layer interface model, from which an analytical expression of the critical curvature in relation to the interfacial failure is obtained. The relationships between the critical curvature, the material, and the geometric parameters of the device are discussed in detail, providing guidance for future optimization flexible circuits based on IC chips.
NASA Astrophysics Data System (ADS)
Sader, John E.; Uchihashi, Takayuki; Higgins, Michael J.; Farrell, Alan; Nakayama, Yoshikazu; Jarvis, Suzanne P.
2005-03-01
Use of the atomic force microscope (AFM) in quantitative force measurements inherently requires a theoretical framework enabling conversion of the observed deflection properties of the cantilever to an interaction force. In this paper, the theoretical foundations of using frequency modulation atomic force microscopy (FM-AFM) in quantitative force measurements are examined and rigorously elucidated, with consideration being given to both 'conservative' and 'dissipative' interactions. This includes a detailed discussion of the underlying assumptions involved in such quantitative force measurements, the presentation of globally valid explicit formulae for evaluation of so-called 'conservative' and 'dissipative' forces, discussion of the origin of these forces, and analysis of the applicability of FM-AFM to quantitative force measurements in liquid.
Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan
2013-01-01
Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called the Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks by integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing it with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. The APG model provides a theoretical basis for quantitatively elucidating transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information. PMID:23346354
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
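The qualitative mechanism, fat tails emerging when many traders act on a shared technical signal, can be reproduced in a deliberately minimal toy model. The herding probability, trader count and signal structure below are arbitrary choices, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 1000, 20000  # traders, time steps

returns = np.empty(T)
for t in range(T):
    if rng.random() < 0.05:
        # Herding event: a random fraction of traders share the same
        # technical signal and trade in unison.
        frac = rng.random()
        direction = rng.choice([-1.0, 1.0])
        rest = rng.choice([-1.0, 1.0], size=int(N * (1 - frac))).sum() / N
        returns[t] = frac * direction + rest
    else:
        # Independent decisions: aggregate return is near-Gaussian.
        returns[t] = rng.choice([-1.0, 1.0], size=N).sum() / N

# Excess kurtosis: 0 for a Gaussian, strongly positive for fat tails.
excess_kurtosis = (
    np.mean((returns - returns.mean()) ** 4) / returns.var() ** 2 - 3.0
)
```

With independent traders the aggregate return is essentially Gaussian; the occasional coordinated trades alone are enough to push the excess kurtosis far above zero.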
Testing adaptive toolbox models: a Bayesian hierarchical approach.
Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan
2013-01-01
Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.
NASA Technical Reports Server (NTRS)
Noever, David A.
1990-01-01
With and without bioconvective pattern formation, a theoretical model predicts growth in light-limited cultures of motile algae. At the critical density for pattern formation, the resulting doubly exponential population curves show an inflection. Such growth corresponds quantitatively to experiments in mechanically unstirred cultures. This attaches survival value to synchronized pattern formation.
Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao
2018-01-01
Optically Transparent Microwave Metamaterial Absorbers (OTMMAs) are of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of an OTMMA. The physical model and absorption mechanisms of an ideal lightweight, ultrathin OTMMA are comprehensively researched. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that absorption of more than 90% can be achieved across the whole 6~18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686
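The flavour of such an equivalent-circuit design can be sketched with a standard transmission-line calculation: the patterned sheet is modelled as a series RLC impedance in shunt with a grounded dielectric slab, and the absorption is read off the reflection coefficient. All component values below are invented for illustration; they are not the paper's derived parameters.

```python
import numpy as np

Z0 = 376.73   # free-space wave impedance, ohm
c = 3.0e8     # speed of light, m/s

# Illustrative circuit parameters (not the paper's values):
R, L, C = 377.0, 3.0e-9, 0.08e-12   # sheet resistance, inductance, capacitance
eps_r, d = 3.0, 3.0e-3              # substrate permittivity and thickness

f = np.linspace(4e9, 20e9, 2000)
w = 2.0 * np.pi * f

Zs = R + 1j * w * L + 1.0 / (1j * w * C)     # series-RLC sheet impedance
Zd = 1j * (Z0 / np.sqrt(eps_r)) * np.tan(w * np.sqrt(eps_r) * d / c)  # shorted line
Zin = Zs * Zd / (Zs + Zd)                    # shunt combination at the surface
gamma = (Zin - Z0) / (Zin + Z0)              # reflection coefficient
A = 1.0 - np.abs(gamma) ** 2                 # absorption (no transmission: grounded)
```

Sweeping R, L and C against such a model is one way to steer the design toward a target band, in the spirit of the equivalent-circuit approach the paper develops.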
Input-output characterization of fiber reinforced composites by P waves
NASA Technical Reports Server (NTRS)
Renneisen, John D.; Williams, James H., Jr.
1990-01-01
Input-output characterization of fiber composites is studied theoretically by tracing P waves in the media. A new path construction is developed to aid in tracing the P-wave and reflection-generated SV-wave paths in the continuum plate. A theoretical output voltage from the receiving transducer is calculated for a tone burst. The study enhances the quantitative and qualitative understanding of the nondestructive evaluation of fiber composites that can be modeled as transversely isotropic media.
Mechanism of unpinning spirals by a series of stimuli
NASA Astrophysics Data System (ADS)
Gao, Xiang; Zhang, Hong
2014-06-01
Antitachycardia pacing (ATP) is widely used to terminate tachycardia before it proceeds to lethal fibrillation. An important prerequisite for successful ATP is the unpinning, by a series of stimuli, of spirals anchored to an obstacle. Here, to understand the mechanism of unpinning spirals by ATP, we propose a theoretical explanation based on a nonlinear eikonal relation and a kinematical model. The theoretical results are quantitatively consistent with numerical simulations at both low and high excitability.
Watkins, Herschel M.; Vallée-Bélisle, Alexis; Ricci, Francesco; Makarov, Dmitrii E.; Plaxco, Kevin W.
2012-01-01
Surface-tethered biomolecules play key roles in many biological processes and biotechnologies. However, while the physical consequences of such surface attachment have seen significant theoretical study, to date this issue has seen relatively little experimental investigation. In response, we present here a quantitative experimental and theoretical study of the extent to which attachment to a charged (but otherwise apparently inert) surface alters the folding free energy of a simple biomolecule. Specifically, we have measured the folding free energy of a DNA stem-loop both in solution and when site-specifically attached to a negatively charged, hydroxyl-alkane-coated gold surface. We find that, whereas surface attachment is destabilizing at low ionic strength, it becomes stabilizing at ionic strengths above ~130 mM. This behavior presumably reflects two competing mechanisms: excluded-volume effects, which stabilize the folded conformation by reducing the entropy of the unfolded state, and electrostatics, which, at lower ionic strengths, destabilizes the more compact folded state via repulsion from the negatively charged surface. To test this hypothesis we have employed existing theories of the electrostatics of surface-bound polyelectrolytes and the entropy of surface-bound polymers to model both effects. Despite lacking any fitted parameters, these theoretical models quantitatively fit our experimental results, suggesting that, for this system, current knowledge of both surface electrostatics and excluded-volume effects is reasonably complete and accurate. PMID:22239220
ERIC Educational Resources Information Center
Arslan Buyruk, Arzu; Ogan Bekiroglu, Feral
2018-01-01
The focus of this study was to evaluate the impact of model-based inquiry on pre-service physics teachers' conceptual understanding of dynamics. Theoretical framework of this research was based on models-of-data theory. True-experimental design using quantitative and qualitative research methods was carried out for this research. Participants of…
Ionosphere-magnetosphere coupling and convection
NASA Technical Reports Server (NTRS)
Wolf, R. A.; Spiro, R. W.
1984-01-01
The following international Magnetospheric Study quantitative models of observed ionosphere-magnetosphere events are reviewed: (1) a theoretical model of convection; (2) algorithms for deducing ionospheric current and electric-field patterns from sets of ground magnetograms and ionospheric conductivity information; and (3) empirical models of ionospheric conductances and polar cap potential drop. Research into magnetic-field-aligned electric fields is reviewed, particularly magnetic-mirror effects and double layers.
Doing Research on Education for Sustainable Development
ERIC Educational Resources Information Center
Reunamo, Jyrki; Pipere, Anita
2011-01-01
Purpose: The purpose of this paper is to describe the research preferences and differences of education for sustainable development (ESD) researchers. A model with the continuums assimilation-accommodation and adaptation-agency was applied resulting in quantitative, qualitative, theoretic and participative research orientations.…
The Definition, Rationale, and Effects of Thresholding in OCT Angiography.
Cole, Emily D; Moult, Eric M; Dang, Sabin; Choi, WooJhon; Ploner, Stefan B; Lee, ByungKun; Louzada, Ricardo; Novais, Eduardo; Schottenhamml, Julia; Husvogt, Lennart; Maier, Andreas; Fujimoto, James G; Waheed, Nadia K; Duker, Jay S
2017-01-01
To examine the definition, rationale, and effects of thresholding in OCT angiography (OCTA). A theoretical description of OCTA thresholding in combination with qualitative and quantitative analysis of the effects of OCTA thresholding in eyes from a retrospective case series. Four eyes were qualitatively examined: 1 from a 27-year-old control, 1 from a 78-year-old exudative age-related macular degeneration (AMD) patient, 1 from a 58-year-old myopic patient, and 1 from a 77-year-old nonexudative AMD patient with geographic atrophy (GA). One eye from a 75-year-old nonexudative AMD patient with GA was quantitatively analyzed. A theoretical thresholding model and a qualitative and quantitative description of the dependency of OCTA on thresholding level. Due to the presence of system noise, OCTA thresholding is a necessary step in forming OCTA images; however, thresholding can complicate the relationship between blood flow and OCTA signal. Thresholding in OCTA can cause significant artifacts, which should be considered when interpreting and quantifying OCTA images.
The brainstem reticular formation is a small-world, not scale-free, network
Humphries, M.D; Gurney, K; Prescott, T.J
2005-01-01
Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
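The small-world metric described in this abstract (comparing clustering and path length against a random-graph baseline) can be sketched as follows. The Watts-Strogatz-style toy network and all parameter values are illustrative stand-ins, not the paper's reticular-formation connectivity data:

```python
import random
from collections import deque

def ring_lattice(n, k):
    # Ring of n nodes, each linked to its k nearest neighbours (k even).
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k // 2 + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def rewire(adj, p, rng):
    # Watts-Strogatz-style rewiring: each edge is moved to a random
    # non-neighbour with probability p.
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):
            if j > i and rng.random() < p:
                candidates = [v for v in range(n) if v != i and v not in adj[i]]
                if candidates:
                    new = rng.choice(candidates)
                    adj[i].discard(j)
                    adj[j].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def clustering(adj):
    # Mean local clustering coefficient over all nodes.
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    # Mean shortest-path length over reachable pairs (BFS from every node).
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(42)
ws = rewire(ring_lattice(200, 6), 0.1, rng)    # sparsely rewired lattice
rnd = rewire(ring_lattice(200, 6), 1.0, rng)   # fully rewired (random) baseline
sigma = (clustering(ws) / clustering(rnd)) / (avg_path_length(ws) / avg_path_length(rnd))
print(round(sigma, 1))
```

A value of sigma well above 1 indicates small-world structure (lattice-like clustering with random-graph-like path lengths); a real analysis would use the measured connectivity and average over many random-graph realizations.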
Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana
2013-10-30
In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling using theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression. The predictive performance of the QSABR model was assessed using the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The model explains 86.5% of the variance in the training-set activation barriers and predicts 80% of the variance in the test set. Alternatively, a neural network model based on back-propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, QSABR modelling, which provides efficient and fast prediction of Diels-Alder activation barriers, turns out to be a meaningful alternative to computation based on transition state theory.
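The leave-one-out cross-validated Q2 used above to assess predictivity can be illustrated with a short sketch; the descriptor matrix and barrier values below are synthetic stand-ins, not the paper's 72-reaction data set:

```python
import numpy as np

# Synthetic "descriptors" X and "activation barriers" y for illustration.
rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=n)

def fit_predict(X_train, y_train, X_test):
    # Ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_test)), X_test]) @ coef

# Leave-one-out: predict each sample from a model fit on all the others.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    preds[i] = fit_predict(X[mask], y[mask], X[i:i + 1])[0]

press = np.sum((y - preds) ** 2)    # predictive residual sum of squares
tss = np.sum((y - y.mean()) ** 2)   # total sum of squares
q2 = 1.0 - press / tss              # leave-one-out cross-validated Q^2
print(round(q2, 3))
```

In a real QSABR workflow the stepwise descriptor selection would run inside each leave-one-out fold to avoid optimistic bias.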
The Application of Cybernetics in Pedagogy.
ERIC Educational Resources Information Center
Atutov, P. R.
The application of cybernetics to pedagogy can create a precise science of instruction and education through the time-consuming but inevitable transition from identification of qualitative relationships among pedagogical objects to quantitative analysis of these objects. The theoretical utility of mathematical models and formulae for explanatory…
Portes, Alejandro; Fernández-Kelly, Patricia; Haller, William
2013-01-01
This paper summarises a research program on the new immigrant second generation initiated in the early 1990s and completed in 2006. The four field waves of the Children of Immigrants Longitudinal Study (CILS) are described and the main theoretical models emerging from it are presented and graphically summarised. After considering critical views of this theory, we present the most recent results from this longitudinal research program in the form of quantitative models predicting downward assimilation in early adulthood and qualitative interviews identifying ways in which disadvantaged children of immigrants escape it. Quantitative results strongly support the predicted effects of exogenous variables identified by segmented assimilation theory and identify the intervening factors during adolescence that mediate their influence on adult outcomes. Qualitative evidence gathered during the last stage of the study points to three factors that can lead to exceptional educational achievement among disadvantaged youths. All three indicate the positive influence of selective acculturation. Implications of these findings for theory and policy are discussed. PMID:23626483
Microcirculation and the physiome projects.
Bassingthwaighte, James B
2008-11-01
The Physiome projects comprise a loosely knit worldwide effort to define the Physiome through databases and theoretical models, with the goal of better understanding the integrative functions of cells, organs, and organisms. The projects involve developing and archiving models, providing centralized databases, and linking experimental information and models from many laboratories into self-consistent frameworks. Increasingly accurate and complete models that embody quantitative biological hypotheses, adhere to high standards, and are publicly available and reproducible, together with refined and curated data, will enable biological scientists to advance integrative, analytical, and predictive approaches to the study of medicine and physiology. This review discusses the rationale and history of the Physiome projects, the role of theoretical models in the development of the Physiome, and the current status of efforts in this area addressing the microcirculation.
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
2017-11-01
Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
Origin of traps and charge transport mechanism in hafnia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Islamov, D. R., E-mail: damir@isp.nsc.ru; Gritsenko, V. A., E-mail: grits@isp.nsc.ru; Novosibirsk State University, Novosibirsk 630090
2014-12-01
In this study, we demonstrated experimentally and theoretically that oxygen vacancies are responsible for the charge transport in HfO{sub 2}. Based on the model of phonon-assisted tunneling between traps, and assuming that the electron traps are oxygen vacancies, good quantitative agreement between the experimental and theoretical current-voltage characteristics was achieved. The thermal trap energy of 1.25 eV in HfO{sub 2} was determined from the charge transport experiments.
NASA Technical Reports Server (NTRS)
Wilcox, W. R.; Subramanian, R. S.; Meyyappan, M.; Smith, H. D.; Mattox, D. M.; Partlow, D. P.
1981-01-01
Thermal fining, thermal migration of bubbles under reduced gravity conditions, and data to verify current theoretical models of bubble location and temperatures as a function of time are discussed. A sample, sodium borate glass, was tested during 5 to 6 minutes of zero gravity during rocket flight. The test cell contained a heater strip; thermocouples were in the sample. At present quantitative data are insufficient to confirm results of theoretical calculations.
NASA Astrophysics Data System (ADS)
Zhang, Chao; Qin, Ting Xin; Huang, Shuai; Wu, Jian Song; Meng, Xin Yan
2018-06-01
Various factors can affect the consequences of an oil pipeline accident, and their effects should be analyzed to improve emergency preparedness and emergency response. Although some qualitative models of risk factors' effects exist, quantitative models remain underdeveloped. In this study, we introduce a Bayesian network (BN) model for analyzing risk factors' effects, applied to an oil pipeline accident case that happened in China. An incident evolution diagram is built to identify the risk factors, and the BN model is built based on a deployment rule for factor nodes and on expert knowledge combined through Dempster-Shafer evidence theory. The probabilities of incident consequences and the effects of risk factors can then be calculated. The most likely consequences given by this model are consistent with the case. Meanwhile, the quantitative estimates of risk factors' effects may provide a theoretical basis for taking optimal risk treatment measures in oil pipeline management, which can be used in emergency preparedness and emergency response.
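As a minimal illustration of the kind of calculation such a BN supports, the sketch below computes the marginal probability of a severe consequence by enumeration; the three-node structure and all probabilities are invented for illustration and are not taken from the paper's pipeline model:

```python
from itertools import product

# Toy network: Leak -> Ignition -> Severity (all probabilities invented).
p_leak = {True: 0.3, False: 0.7}                 # P(Leak)
p_ign = {True: {True: 0.4, False: 0.6},          # P(Ignition | Leak)
         False: {True: 0.01, False: 0.99}}
p_sev = {(True, True): 0.8, (True, False): 0.2,  # P(Severe | Leak, Ignition)
         (False, True): 0.3, (False, False): 0.01}

def p_severe():
    # Marginalize over all parent configurations (inference by enumeration).
    total = 0.0
    for leak, ign in product([True, False], repeat=2):
        total += p_leak[leak] * p_ign[leak][ign] * p_sev[(leak, ign)]
    return total

print(round(p_severe(), 4))
```

Real BN software replaces this brute-force enumeration with efficient message passing, and Dempster-Shafer combination would be used upstream to turn expert judgments into these conditional probability tables.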
Rationalizing the light-induced phase separation of mixed halide organic-inorganic perovskites.
Draguta, Sergiu; Sharia, Onise; Yoon, Seog Joon; Brennan, Michael C; Morozov, Yurii V; Manser, Joseph S; Kamat, Prashant V; Schneider, William F; Kuno, Masaru
2017-08-04
Mixed halide hybrid perovskites, CH3NH3Pb(I1-xBrx)3, represent good candidates for low-cost, high-efficiency photovoltaic and light-emitting devices. Their band gaps can be tuned from 1.6 to 2.3 eV by changing the halide anion identity. Unfortunately, mixed halide perovskites undergo phase separation under illumination. This leads to iodide- and bromide-rich domains along with corresponding changes to the material's optical/electrical response. Here, using combined spectroscopic measurements and theoretical modeling, we quantitatively rationalize all microscopic processes that occur during phase separation. Our model suggests that the driving force behind phase separation is the band gap reduction of iodide-rich phases. It additionally explains observed nonlinear intensity dependencies, as well as the self-limited growth of iodide-rich domains. Most importantly, our model reveals that mixed halide perovskites can be stabilized against phase separation by deliberately engineering carrier diffusion lengths and injected carrier densities.
van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie
2017-09-01
This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
FDTD-based quantitative analysis of terahertz wave detection for multilayered structures.
Tu, Wanli; Zhong, Shuncong; Shen, Yaochun; Zhou, Qing; Yao, Ligang
2014-10-01
Experimental investigations have shown that terahertz pulsed imaging (TPI) is able to quantitatively characterize a range of multilayered media (e.g., biological tissues, pharmaceutical tablet coatings, layered polymer composites, etc.). Advanced modeling of the interaction of terahertz radiation with a multilayered medium is required to enable the wide application of terahertz technology in a number of emerging fields, including nondestructive testing. Indeed, there have already been many theoretical analyses of the propagation of terahertz radiation in various multilayered media. However, to date, most of these studies used 1D or 2D models, and the dispersive nature of the dielectric layers was not considered or was simplified. In the present work, a theoretical framework for the quantitative characterization of multilayered media with terahertz waves was established. A 3D model based on the finite difference time domain (FDTD) method is proposed. A batch of pharmaceutical tablets with a single coating layer of different coating thicknesses and different refractive indices was modeled. The reflected terahertz wave from such a sample was computed using the FDTD method, assuming that the incident terahertz wave is broadband, covering a frequency range up to 3.5 THz. The simulated results for all of the coated pharmaceutical tablets considered were found to be in good agreement with the experimental results obtained using a commercial TPI system. In addition, we studied a three-layered medium to mimic the occurrence of defects in the sample.
Monte Carlo modeling of light-tissue interactions in narrow band imaging.
Le, Du V N; Wang, Quanzeng; Ramella-Roman, Jessica C; Pfefer, T Joshua
2013-01-01
Light-tissue interactions that influence vascular contrast enhancement in narrow band imaging (NBI) have not been the subject of extensive theoretical study. In order to elucidate the relevant mechanisms in a systematic and quantitative manner, we have developed and validated a Monte Carlo model of NBI and used it to study the effect of device and tissue parameters, specifically imaging wavelength (415 versus 540 nm) and vessel diameter and depth. Simulations provided quantitative predictions of contrast, including up to 125% improvement in small, superficial vessel contrast at 415 nm over 540 nm. Our findings indicated that absorption, rather than scattering (the mechanism often cited in prior studies), was the dominant factor behind spectral variations in vessel depth selectivity. Narrow-band images of a tissue-simulating phantom showed good agreement in terms of trends and quantitative values. Numerical modeling represents a powerful tool for elucidating the factors that affect the performance of spectral imaging approaches such as NBI.
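The role of absorption in wavelength-dependent contrast can be illustrated with a minimal Monte Carlo sketch: photons crossing a purely absorbing slab, checked against the Beer-Lambert law. The absorption coefficient and slab depth are illustrative placeholders, and scattering (which the full NBI model includes) is deliberately omitted:

```python
import math
import random

def mc_transmission(mu_a, thickness, n_photons=200_000, seed=1):
    # Fraction of photons whose sampled free path exceeds the slab depth.
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # Free path drawn from the exponential distribution with mean 1/mu_a;
        # 1 - random() lies in (0, 1], so log() is always defined.
        path = -math.log(1.0 - rng.random()) / mu_a
        if path > thickness:
            transmitted += 1
    return transmitted / n_photons

mu_a, d = 2.0, 0.5               # absorption coefficient (1/mm), slab depth (mm)
mc = mc_transmission(mu_a, d)
analytic = math.exp(-mu_a * d)   # Beer-Lambert prediction
print(mc, analytic)
```

With absorption alone, the Monte Carlo estimate converges to the analytic exponential; a full tissue model adds scattering phase functions, layered optics, and hemoglobin absorption spectra.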
Theoretical model for plasmonic photothermal response of gold nanostructures solutions
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.
2018-03-01
Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is a crucial step in determining the temperature elevation of the solution. The nanostructures are assumed to be randomly and uniformly distributed in the solution. Compared to previous experiments on various systems, our calculated temperature increase during laser illumination shows reasonable qualitative and quantitative agreement. This approach can be a highly reliable tool for predicting photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.
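A lumped-parameter caricature of the transient heating described above can be sketched as follows; the absorbed power, heat-loss conductance, and heat-capacity values are invented placeholders, not the paper's fitted parameters, and the full model solves the spatially resolved bioheat equation:

```python
import math

# Lumped energy balance: C dT/dt = P_abs - G*T, with T the rise above ambient.
P_abs = 0.05   # absorbed optical power (W), set by the absorption cross section
G = 0.01       # heat-loss conductance to the surroundings (W/K)
C = 2.0        # heat capacity of the illuminated volume (J/K)

def temperature_rise(t):
    # Analytic solution of the balance equation above.
    return (P_abs / G) * (1.0 - math.exp(-G * t / C))

# Forward-Euler check of the same equation (20000 steps of 0.1 s = 2000 s).
T, dt = 0.0, 0.1
for _ in range(20000):
    T += dt * (P_abs - G * T) / C

print(round(temperature_rise(2000.0), 3), round(T, 3))
```

Both curves saturate at the steady-state rise P_abs/G with time constant C/G, the qualitative behaviour seen in photothermal heating experiments.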
Theoretical Approaches in Evolutionary Ecology: Environmental Feedback as a Unifying Perspective.
Lion, Sébastien
2018-01-01
Evolutionary biology and ecology have a strong theoretical underpinning, and this has fostered a variety of modeling approaches. A major challenge of this theoretical work has been to unravel the tangled feedback loop between ecology and evolution. This has prompted the development of two main classes of models. While quantitative genetics models jointly consider the ecological and evolutionary dynamics of a focal population, a separation of timescales between ecology and evolution is assumed by evolutionary game theory, adaptive dynamics, and inclusive fitness theory. As a result, theoretical evolutionary ecology tends to be divided among different schools of thought, with different toolboxes and motivations. My aim in this synthesis is to highlight the connections between these different approaches and clarify the current state of theory in evolutionary ecology. Central to this approach is to make explicit the dependence on environmental dynamics of the population and evolutionary dynamics, thereby materializing the eco-evolutionary feedback loop. This perspective sheds light on the interplay between environmental feedback and the timescales of ecological and evolutionary processes. I conclude by discussing some potential extensions and challenges to our current theoretical understanding of eco-evolutionary dynamics.
The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model
Schad, Daniel J.; Engbert, Ralf
2012-01-01
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading. PMID:22754295
NASA Astrophysics Data System (ADS)
Reddy, Pramod; Washiyama, Shun; Kaess, Felix; Kirste, Ronny; Mita, Seiji; Collazo, Ramon; Sitar, Zlatko
2017-12-01
A theoretical framework that provides a quantitative relationship between point defect formation energies and growth process parameters is presented. It enables systematic point defect reduction by chemical potential control in metalorganic chemical vapor deposition (MOCVD) of III-nitrides. Experimental corroboration is provided by a case study of C incorporation in GaN. The theoretical model is shown to be successful in providing quantitative predictions of CN defect incorporation in GaN as a function of growth parameters and provides valuable insights into boundary phases and other impurity chemical reactions. The metal supersaturation is found to be the primary factor in determining the chemical potential of III/N and consequently incorporation or formation of point defects which involves exchange of III or N atoms with the reservoir. The framework is general and may be extended to other defect systems in (Al)GaN. The utility of equilibrium formalism typically employed in density functional theory in predicting defect incorporation in non-equilibrium and high temperature MOCVD growth is confirmed. Furthermore, the proposed theoretical framework may be used to determine optimal growth conditions to achieve minimum compensation within any given constraints such as growth rate, crystal quality, and other practical system limitations.
NASA Astrophysics Data System (ADS)
van de Wiel, B. J. H.; Moene, A. F.; Hartogensis, O. K.; de Bruin, H. A. R.; Holtslag, A. A. M.
2003-10-01
In this paper a classification of stable boundary layer regimes is presented based on observations of near-surface turbulence during the Cooperative Atmosphere-Surface Exchange Study-1999 (CASES-99). It is found that the different nights can be divided into three subclasses: a turbulent regime, an intermittent regime, and a radiative regime, which confirms the findings of two companion papers that use a simplified theoretical model (it is noted that its simplified structure limits the model's generality to near-surface flows). The papers predict the occurrence of stable boundary layer regimes in terms of external forcing parameters such as the (effective) pressure gradient and radiative forcing. The classification in the present work supports these predictions and shows that they are robust in a qualitative sense. For example, intermittent turbulence is shown to be most likely in clear-sky conditions with a moderately weak effective pressure gradient. The quantitative features of the theoretical classification are, however, rather sensitive to (often uncertain) local parameter estimates, such as the bulk heat conductance of the vegetation layer. This sensitivity limits the current applicability of the theoretical classification in a strict quantitative sense, apart from its conceptual value.
Theoretical Analysis of an Iron Mineral-Based Magnetoreceptor Model in Birds
Solov'yov, Ilia A.; Greiner, Walter
2007-01-01
Sensing the magnetic field has been established as an essential part of navigation and orientation of various animals for many years. Only recently has the first detailed receptor concept for magnetoreception been published based on histological and physical results. The considered mechanism involves two types of iron minerals (magnetite and maghemite) that were found in subcellular compartments within sensory dendrites of the upper beak of several bird species. But so far a quantitative evaluation of the proposed receptor is missing. In this article, we develop a theoretical model to quantitatively and qualitatively describe the magnetic field effects among particles containing iron minerals. The analysis of forces acting between these subcellular compartments shows a particular dependence on the orientation of the external magnetic field. The iron minerals in the beak are found in the form of crystalline maghemite platelets and assemblies of magnetite nanoparticles. We demonstrate that the pull or push to the magnetite assemblies, which are connected to the cell membrane, may reach a value of 0.2 pN—sufficient to excite specific mechanoreceptive membrane channels in the nerve cell. The theoretical analysis of the assumed magnetoreceptor system in the avian beak skin clearly shows that it might indeed be a sensitive biological magnetometer providing an essential part of the magnetic map for navigation. PMID:17496012
Han, Jinxiang; Huang, Jinzhao
2012-03-01
In this study, based on the resonator model and exciplex model of electromagnetic radiation within the human body, a mathematical model of the biological order state, also referred to as syndrome in traditional Chinese medicine, was established and expressed as Sy = v / ln(6I + 1). This model provides a theoretical foundation for experimental research addressing the order state of living systems, especially quantitative research on syndrome in traditional Chinese medicine.
An Examination of Achievement Goals in Learning: A Quasi-Quantitative Approach
ERIC Educational Resources Information Center
Phan, Huy P.
2012-01-01
Introduction: The achievement goals framework has been researched and used to explain and account for individuals' learning and academic achievements. Over the past three decades, progress has been made in the conceptualizations and research development of different possible theoretical models of achievement goals. Notably, in this study, we…
USDA-ARS?s Scientific Manuscript database
Random mating (i.e., panmixis) is a fundamental assumption in quantitative genetics. In outcrossing bee-pollinated perennial forage legume polycrosses, mating is assumed by default to follow theoretical random mating. This assumption informs breeders of expected inbreeding estimates based on polycro...
2004-03-01
predicting future events (Heizer and Render, 1999). Forecasting techniques fall into two major categories, qualitative and quantitative methods... Globemaster III." Excerpt from website. www.globalsecurity.org/military/systems/aircraft/c-17-history.htm. 2003. Heizer, Jay, and Barry Render... of the past data used to make the forecast (Heizer, et al., 1999). Explanatory forecasting models assume that the variable being forecasted
Size Dependent Mechanical Properties of Monolayer Densely Arranged Polystyrene Nanospheres.
Huang, Peng; Zhang, Lijing; Yan, Qingfeng; Guo, Dan; Xie, Guoxin
2016-12-13
In contrast to macroscopic materials, polymer nanospheres show mechanical properties of considerable scientific and applied interest. However, experimental measurements on individual nanospheres, and the quantitative analysis of the underlying theoretical mechanisms, remain poorly developed and understood. We provide a highly efficient and accurate method, based on atomic force microscopy (AFM) nanoindentation of monolayer, densely arranged honeycomb polystyrene (PS) nanospheres, for the quantitative mechanical characterization of individual nanospheres. The efficiency is improved by 1-2 orders of magnitude, and the accuracy is enhanced by almost half an order of magnitude. The elastic modulus measured in the experiments increases with decreasing radius down to the smallest nanospheres (25-35 nm in radius). A core-shell model is introduced to predict the size-dependent elasticity of PS nanospheres; the theoretical prediction agrees reasonably well with the experimental results and also shows a peak modulus value.
Balzer, Christian; Waag, Anna M; Gehret, Stefan; Reichenauer, Gudrun; Putz, Florian; Hüsing, Nicola; Paris, Oskar; Bernstein, Noam; Gor, Gennady Y; Neimark, Alexander V
2017-06-06
The goal of this work is to understand adsorption-induced deformation of hierarchically structured porous silica exhibiting well-defined cylindrical mesopores. For this purpose, we performed an in situ dilatometry measurement on a calcined and sintered monolithic silica sample during the adsorption of N2 at 77 K. To analyze the experimental data, we extended the adsorption stress model to account for the anisotropy of cylindrical mesopores, i.e., we explicitly derived the adsorption stress tensor components in the axial and radial directions of the pore. For quantitative predictions of stresses and strains, we applied the theoretical framework of Derjaguin, Broekhoff, and de Boer for adsorption in mesopores, together with two mechanical models of silica rods with axially aligned pore channels: an idealized cylindrical tube model, which can be described analytically, and an ordered hexagonal array of cylindrical mesopores, whose mechanical response to adsorption stress was evaluated by 3D finite element calculations. The adsorption-induced strains predicted by both mechanical models are in good quantitative agreement, making the cylindrical tube the preferable model for adsorption-induced strains owing to its simple analytical nature. The theoretical results are compared with the in situ dilatometry data on a hierarchically structured silica monolith composed of a network of mesoporous struts of MCM-41 type morphology. Analyzing the experimental adsorption and strain data with the proposed theoretical framework, we find that the adsorption-induced deformation of the monolithic sample is reasonably described by a superposition of axial and radial strains calculated at the mesopore level. The structural and mechanical parameters obtained from the model are in good agreement with expectations from independent measurements and the literature, respectively.
Engaging Students In Modeling Instruction for Introductory Physics
NASA Astrophysics Data System (ADS)
Brewe, Eric
2016-05-01
Teaching introductory physics is arguably one of the most important things that a physics department does. It is the primary way that students from other science disciplines engage with physics and it is the introduction to physics for majors. Modeling instruction is an active learning strategy for introductory physics built on the premise that science proceeds through the iterative process of model construction, development, deployment, and revision. We describe the role that participating in authentic modeling has in learning and then explore how students engage in this process in the classroom. In this presentation, we provide a theoretical background on models and modeling and describe how these theoretical elements are enacted in the introductory university physics classroom. We provide both quantitative and video data to link the development of a conceptual model to the design of the learning environment and to student outcomes. This work is supported in part by DUE #1140706.
Lin, Chih-Tin; Meyhofer, Edgar; Kurabayashi, Katsuo
2010-01-01
Directional control of microtubule shuttles via microfabricated tracks is key to the development of controlled nanoscale mass transport by kinesin motor molecules. Here we develop and test a model to quantitatively predict the stochastic guiding behavior of microtubules when they mechanically collide with the sidewalls of lithographically patterned tracks. By taking into account appropriate probability distributions over the microscopic states of the microtubule system, the model allows us to theoretically analyze the roles of collision conditions and kinesin surface densities in determining how the motion of microtubule shuttles is controlled. In addition, we experimentally observe the statistics of microtubule collision events and compare our theoretical predictions with the experimental data to validate the model. The model will direct the design of future hybrid nanotechnology devices that integrate nanoscale transport systems powered by kinesin-driven molecular shuttles.
Temporal maps and informativeness in associative learning.
Balsam, Peter D; Gallistel, C Randy
2009-02-01
Neurobiological research on learning assumes that temporal contiguity is essential for association formation, but what constitutes temporal contiguity has never been specified. We review evidence that learning depends, instead, on learning a temporal map. Temporal relations between events are encoded even from single experiences. The speed with which an anticipatory response emerges is proportional to the informativeness of the encoded relation between a predictive stimulus or event and the event it predicts. This principle yields a quantitative account of the heretofore undefined, but theoretically crucial, concept of temporal pairing, an account in quantitative accord with surprising experimental findings. The same principle explains the basic results in the cue competition literature, which motivated the Rescorla-Wagner model and most other contemporary models of associative learning. The essential feature of a memory mechanism in this account is its ability to encode quantitative information.
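The informativeness principle above lends itself to a one-line computation: in this account, informativeness is the ratio C/T of the average interval between outcomes (the cycle) to the cue-outcome interval (the trial), and the speed with which the anticipatory response emerges is proportional to it. A minimal sketch, with the proportionality constant left as an arbitrary illustrative parameter since the abstract gives none:

```python
def informativeness(cycle_duration, trial_duration):
    """C/T ratio: how much the cue shortens the expected wait to the outcome."""
    return cycle_duration / trial_duration

def relative_acquisition_speed(cycle_duration, trial_duration, k=1.0):
    """Speed of emergence of the anticipatory response, taken as proportional
    to informativeness (k is an arbitrary illustrative constant)."""
    return k * informativeness(cycle_duration, trial_duration)

# Doubling the cycle (intertrial interval) doubles informativeness,
# so conditioning is predicted to emerge twice as fast.
print(relative_acquisition_speed(240.0, 10.0))  # 24.0
```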
Quantitative Study on Corrosion of Steel Strands Based on Self-Magnetic Flux Leakage.
Xia, Runchuan; Zhou, Jianting; Zhang, Hong; Liao, Leng; Zhao, Ruiqiang; Zhang, Zeyu
2018-05-02
This paper proposes a new computing method to quantitatively and non-destructively determine the corrosion of steel strands by analyzing the self-magnetic flux leakage (SMFL) signals from them. The magnetic dipole model and three growth models (Logistic, Exponential, and Linear) were used to theoretically analyze the characteristic value of the SMFL. An experimental study of corrosion detection with a magnetic sensor was then carried out, and the setup of the magnetic scanning device and the signal collection method are also introduced. The results show that the Logistic growth model is the optimal model for calculating the magnetic field, with good fitting effects. Combined with the experimental data analysis, the amplitudes of the calculated values (the B_xL(x, z) curves) agree with the measured values in general. This method offers significant application prospects for evaluating the corrosion and residual bearing capacity of steel strands.
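Of the three candidate growth laws named above, only the logistic one saturates, which is one intuitive reason it can fit a corrosion-driven signal best. A minimal sketch of the three functional forms; all parameter names and values are illustrative, not taken from the paper:

```python
import math

def logistic(x, k=1.0, x0=0.5, cap=1.0):
    """Logistic growth: saturates at `cap` for large x."""
    return cap / (1.0 + math.exp(-k * (x - x0)))

def exponential(x, a=0.1, r=1.0):
    """Exponential growth: unbounded."""
    return a * math.exp(r * x)

def linear(x, a=0.0, b=1.0):
    """Linear growth: unbounded."""
    return a + b * x

# The logistic curve levels off while the other two keep growing.
print(round(logistic(50.0), 6))  # 1.0
```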
NASA Astrophysics Data System (ADS)
Suzuki, Makoto; Kameda, Toshimasa; Doi, Ayumi; Borisov, Sergey; Babin, Sergey
2018-03-01
The interpretation of scanning electron microscopy (SEM) images of the latest semiconductor devices is not intuitive and requires comparison with computed images based on theoretical modeling and simulation. For quantitative image prediction and geometrical reconstruction of the specimen structure, the accuracy of the physical model is essential. In this paper, we review current models of electron-solid interaction and discuss their accuracy. We compare simulated results with our experiments on SEM overlay of under-layers, grain imaging of copper interconnects, and hole-bottom visualization by angular-selective detectors, and show that our model reproduces the experimental results well. Remaining issues for quantitative simulation are also discussed, including the accuracy of the charge dynamics, the treatment of beam skirt, and the explosive growth in computing time.
NASA Astrophysics Data System (ADS)
Wang, Ruzhuan; Li, Xiaobo; Wang, Jing; Jia, Bi; Li, Weiguo
2018-06-01
This work presents a new theoretical model for quantitatively predicting the fracture strength and critical flaw size of ZrB2-ZrC composites at different temperatures, based on a newly proposed temperature-dependent fracture surface energy model and the Griffith criterion. The fracture model accounts for the combined effects of temperature and damage terms (surface flaws and internal flaws) without any fitting parameters. Its predictions of fracture strength and critical flaw size for ZrB2-ZrC composites at high temperatures agree well with experimental data. Improvements and design guidelines for the materials are then proposed using this theoretical method. The model can be used to predict fracture strength, identify the critical flaw, and study the effects of microstructure on the fracture mechanism of ZrB2-ZrC composites at high temperatures, and thus could become a convenient, practical, and economical means of predicting fracture properties and guiding material design.
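The Griffith criterion named above ties strength to flaw size through the fracture surface energy: in its classic plane-stress form, sigma_f = sqrt(2*E*gamma / (pi*a)). A sketch under the assumption of that textbook form; the material numbers are hypothetical placeholders, and the paper's actual temperature-dependent gamma(T) model is not reproduced here.

```python
import math

def griffith_strength(E, gamma, a):
    """Plane-stress Griffith fracture strength.
    E: Young's modulus (Pa), gamma: fracture surface energy (J/m^2),
    a: half-length of the critical flaw (m)."""
    return math.sqrt(2.0 * E * gamma / (math.pi * a))

def critical_flaw_size(E, gamma, sigma):
    """Invert the criterion: flaw half-length that fails at stress sigma."""
    return 2.0 * E * gamma / (math.pi * sigma ** 2)

# Illustrative numbers only (not material data from the paper):
E, gamma = 450e9, 25.0   # hypothetical modulus and surface energy
a = 20e-6                # 20 micron flaw half-length
sigma = griffith_strength(E, gamma, a)
# The inversion recovers the flaw size from the predicted strength.
print(abs(critical_flaw_size(E, gamma, sigma) - a) < 1e-12)  # True
```

A temperature dependence would enter by making E and gamma functions of T, which is the role the paper's surface energy model plays.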
Analysis of nonlinear internal waves observed by Landsat thematic mapper
NASA Astrophysics Data System (ADS)
Artale, V.; Levi, D.; Marullo, S.; Santoleri, R.
1990-09-01
In this work we test the compatibility between the theoretical parameters of a nonlinear wave model and the quantitative information that can be deduced from satellite-derived data. The theoretical parameters are obtained by applying an inverse problem to the solution of the Cauchy problem for the Korteweg-de Vries equation. Our results are applied to internal wave patterns derived from two different satellite sensors: the thematic mapper south of Messina and the synthetic aperture radar north of Messina.
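The forward model behind this inverse problem is the KdV equation u_t + 6*u*u_x + u_xxx = 0, whose one-soliton solution links wave amplitude to propagation speed, the kind of relation one compares against satellite-derived wave parameters. A sketch that checks the textbook soliton against the equation by finite differences; the normalization is the standard one, and the paper's scaled variables may differ.

```python
import math

def soliton(x, t, c=1.0):
    """One-soliton solution u = (c/2) sech^2(sqrt(c)/2 * (x - c t))
    of u_t + 6 u u_x + u_xxx = 0; amplitude c/2 travels at speed c."""
    k = math.sqrt(c) / 2.0
    sech = 1.0 / math.cosh(k * (x - c * t))
    return (c / 2.0) * sech * sech

def kdv_residual(x, t, c=1.0, h=1e-2):
    """Finite-difference evaluation of u_t + 6 u u_x + u_xxx at (x, t)."""
    u = soliton
    u_t = (u(x, t + h, c) - u(x, t - h, c)) / (2 * h)
    u_x = (u(x + h, t, c) - u(x - h, t, c)) / (2 * h)
    u_xxx = (u(x + 2 * h, t, c) - 2 * u(x + h, t, c)
             + 2 * u(x - h, t, c) - u(x - 2 * h, t, c)) / (2 * h ** 3)
    return u_t + 6 * u(x, t, c) * u_x + u_xxx

# The residual stays at the level of the O(h^2) discretization error.
print(max(abs(kdv_residual(x / 10.0, 0.3)) for x in range(-20, 21)))
```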
A new theoretical approach to analyze complex processes in cytoskeleton proteins.
Li, Xin; Kolomeisky, Anatoly B
2014-03-20
Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions driven by hydrolysis reactions in their monomers. Current theoretical methods provide a broad picture of the biochemical and biophysical processes in cytoskeleton proteins, but the description is only qualitative under biologically relevant conditions because the mean-field models they employ neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on the analysis of the probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, we show that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... observations cannot be less than six months. Historical data sets must be updated at least every three months... quantitative aspects of the model which at a minimum must adhere to the criteria set forth in paragraph (e) of..., a description of how its own theoretical pricing model contains the minimum pricing factors set...
A new method for qualitative simulation of water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Camara, A. S.; Pinheiro, M.; Antunes, M. P.; Seixas, M. J.
1987-11-01
A new dynamic modeling methodology, SLIN (Simulação Linguistica), which allows the analysis of systems defined by linguistic variables, is presented. SLIN applies a set of logical rules while avoiding fuzzy-theoretic concepts; logical rules are likewise used to make the transition from qualitative to quantitative modes. Extensions of the methodology to simulation-optimization applications and multi-expert system modeling are also discussed.
AI/OR computational model for integrating qualitative and quantitative design methods
NASA Technical Reports Server (NTRS)
Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor
1990-01-01
A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.
ERIC Educational Resources Information Center
Arendasy, Martin E.; Sommer, Markus
2013-01-01
Allowing respondents to retake a cognitive ability test has shown to increase their test scores. Several theoretical models have been proposed to explain this effect, which make distinct assumptions regarding the measurement invariance of psychometric tests across test administration sessions with regard to narrower cognitive abilities and general…
ERIC Educational Resources Information Center
Bonney, Lewis Alfred
This study is concerned with the manner in which experience with concrete, quantitative, interpersonal, and verbal content influences the development of ability patterns in first grade children. The literature related to theoretical models of intellectual development indicates that abilities develop in response to experiential variables, such as…
Bridging the Gap between Theory and Practice in Educational Research: Methods at the Margins
ERIC Educational Resources Information Center
Winkle-Wagner, Rachelle, Ed.; Hunter, Cheryl A., Ed.; Ortloff, Debora Hinderliter, Ed.
2009-01-01
This book provides new ways of thinking about educational processes, using quantitative and qualitative methodologies. Concrete examples of research techniques are provided for those conducting research with marginalized populations or about marginalized ideas. This volume asserts theoretical models related to research methods and the study of…
An Investigation of Taiwan's Public Attitudes toward Science and Technology
ERIC Educational Resources Information Center
Wu, Kun-Chang; Shein, Paichi Pat; Tsai, Chun-Yen; Chou, Ching-Yang; Wu, Yuh-Yih; Liu, Chia-Ju; Chiu, Houn-Lin; Hung, Jeng-Fung; Chao, David; Huang, Tai-Chu
2012-01-01
The purpose of this quantitative study is to understand the attitudes of Taiwanese adult citizens over 18 years of age toward science and technology. A theoretical model is constructed and evaluated to identify factors that affect public attitudes. Differences in citizens' gender, age, and educational level are also examined to determine whether…
Quantitative model of diffuse speckle contrast analysis for flow measurement.
Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua
2017-07-01
Diffuse speckle contrast analysis (DSCA) is a noninvasive optical technique capable of monitoring deep tissue blood flow. However, a detailed study of the speckle contrast model for DSCA has yet to be presented. We deduced the theoretical relationship between speckle contrast and exposure time and further simplified it to a linear approximation model. The feasibility of this linear model was validated with liquid phantoms, which demonstrated that the slope of the linear approximation can rapidly determine the Brownian diffusion coefficient of the turbid media at multiple distances using multi-exposure speckle imaging. Furthermore, we theoretically quantified the influence of the optical properties on measurements of the Brownian diffusion coefficient, a consequence of the fact that the slope of the linear approximation equals the inverse of the speckle correlation time.
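The structure of such a speckle-contrast model can be illustrated with the standard single-exposure form used in laser speckle contrast imaging, K^2(T) = beta*(exp(-2*T/tau_c) - 1 + 2*T/tau_c) / (2*(T/tau_c)^2). For short exposures this is linear in T with slope -2*beta/(3*tau_c), which is how a linear fit can pull out the correlation time. This is a generic textbook form used for illustration; the paper's DSCA model for deep-tissue flow differs in detail.

```python
import math

def contrast_sq(T, tau_c, beta=1.0):
    """Squared speckle contrast vs exposure time T for an exponentially
    decorrelating field with correlation time tau_c (textbook form)."""
    x = T / tau_c
    return beta * (math.exp(-2.0 * x) - 1.0 + 2.0 * x) / (2.0 * x * x)

# Short-exposure linearization: K^2(T) ~ beta * (1 - 2 T / (3 tau_c)),
# so the fitted slope directly encodes 1/tau_c.
tau_c, beta = 1e-3, 1.0
T1, T2 = 1.0e-5, 1.1e-5
slope = (contrast_sq(T2, tau_c, beta) - contrast_sq(T1, tau_c, beta)) / (T2 - T1)
tau_c_est = -2.0 * beta / (3.0 * slope)
print(tau_c_est)  # close to 1e-3
```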
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
Mason, Deanna M
2014-01-01
This study employed a grounded theory research design to develop a theoretical model of the maturation of spirituality and its influence on behavior during late adolescence. Quantitative research has linked spirituality with decreased health-risk behaviors and increased health-promotion behaviors during late adolescence; qualitative, theoretical data are needed to uncover the underlying reasons this relationship exists and to make this knowledge easier to apply in practice. Twenty-one adolescents, aged 16-21 years, were interviewed by e-mail, and the transcripts were analyzed through the conceptual lens of Blumer's symbolic interactionism. From this analysis, a theoretical model emerged with the core concept "finding myself," which encompasses five core process concepts. The findings illustrate that late adolescents are aware of their personal spiritual maturation as well as its influence on behavior. In addition, a distinction emerged between the generic concept of spirituality, personal spirituality, and religion.
Quantitative aspects of vibratory mobilization and break-up of non-wetting fluids in porous media
NASA Astrophysics Data System (ADS)
Deng, Wen
Seismic stimulation is a promising technology aimed at mobilizing entrapped non-wetting fluids in the subsurface. Applications include enhanced oil recovery or, alternatively, facilitating the movement of immiscible or partly miscible gases deep into porous media, for example for CO2 sequestration. This work is devoted to detailed quantitative studies of the two basic pore-scale mechanisms behind seismic stimulation: the mobilization of bubbles or drops entrapped in pore constrictions by capillary forces, and the break-up of continuous long bubbles or drops. In typical oil-production operations, oil is produced by the natural reservoir-pressure drive during the primary stage and by artificial water flooding at the secondary stage. Capillary forces act to retain a substantial residual fraction of reservoir oil even after water flooding. Seismic stimulation is an unconventional technology that serves to overcome capillary barriers in individual pores and liberate the entrapped oil by adding an oscillatory inertial forcing to the external pressure gradient. According to our study, the effect of seismic stimulation on oil mobilization depends strongly on the frequencies and amplitudes of the seismic waves: generally, the lower the frequency and the larger the amplitude, the more effective the mobilization. To describe the mobilization process, we developed two theoretical hydrodynamics-based models and justified both using computational fluid dynamics (CFD). Our theoretical models have a significant advantage over CFD in that they reduce the computational time significantly, while providing correct practical guidance regarding the required field parameters of vibroseismic stimulation, such as the amplitude and frequency of the seismic field. The models also provide important insights into the basic mechanisms governing vibration-driven two-phase flow in constricted capillaries.
In a waterflooded reservoir, oil can be recovered most efficiently by forming continuous streams from isolated droplets. The longer the continuous oil phase under a given pressure gradient, the more easily it overcomes its capillary barrier. However, surface tension between water and oil causes the typically non-wetting oil, constituting the core phase in the channels, to break up at pore constrictions into isolated beads, which inhibits further motion. The break-up thus counteracts the mobilization. We developed a theoretical model that provides an exact quantitative description of the dynamics of the oil snap-off process. It also formulates a purely geometric criterion that controls, based on pore geometry alone, whether the oil core phase stays continuous or disintegrates into droplets. Both the theoretical model and the break-up criterion have been validated against CFD simulations. The work completed elucidates the basic physical mechanisms behind enhanced oil recovery by seismic waves and vibrations, creating a theoretical foundation for the further development of corresponding field technologies.
Assessing Psychodynamic Conflict.
Simmonds, Joshua; Constantinides, Prometheas; Perry, J Christopher; Drapeau, Martin; Sheptycki, Amanda R
2015-09-01
Psychodynamic psychotherapies suggest that symptomatic relief is provided, in part, by the resolution of psychic conflicts, and clinical researchers have used innovative methods to investigate such phenomena. This article reviews the literature on quantitative psychodynamic conflict rating scales. An electronic search of the literature was conducted to retrieve quantitative observer-rated scales used to assess conflict, noting each measure's theoretical model, information source, and required training and clinical experience. The scales were also examined for levels of reliability and validity. Five quantitative observer-rated conflict scales were identified. Reliability varied from poor to excellent, and each measure demonstrated good validity. However, the small number of studies and limited links to current conflict theory suggest that further clinical research is needed.
NASA Astrophysics Data System (ADS)
Sun, Qiming; Melnikov, Alexander; Wang, Jing; Mandelis, Andreas
2018-04-01
A rigorous treatment of the nonlinear behavior of photocarrier radiometric (PCR) signals is presented theoretically and experimentally for the quantitative characterization of semiconductor photocarrier recombination and transport properties. A frequency-domain model based on the carrier rate equation and the classical carrier radiative recombination theory was developed. The derived concise expression reveals different functionalities of the PCR amplitude and phase channels: the phase bears direct quantitative correlation with the carrier effective lifetime, while the amplitude versus the estimated photocarrier density dependence can be used to extract the equilibrium majority carrier density and thus, resistivity. An experimental ‘ripple’ optical excitation mode (small modulation depth compared to the dc level) was introduced to bypass the complicated ‘modulated lifetime’ problem so as to simplify theoretical interpretation and guarantee measurement self-consistency and reliability. Two Si wafers with known resistivity values were tested to validate the method.
South, Susan C.; Hamdi, Nayla; Krueger, Robert F.
2015-01-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative gene × environment interaction (G×E) models not only have the potential to elucidate when genetic and environmental influences on a phenotype might differ, but why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology—diathesis-stress, bioecological, differential susceptibility, and social control. In the current manuscript, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically-informative plausible accounts of how phenotypes related to social inequality—physical health and cognition—might relate to these theoretical models. PMID:26426103
South, Susan C; Hamdi, Nayla R; Krueger, Robert F
2017-02-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative Gene × Environment interaction models have the potential to elucidate not only when genetic and environmental influences on a phenotype might differ, but also why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology-diathesis-stress, bioecological, differential susceptibility, and social control. In the current article, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically informative plausible accounts of how phenotypes related to social inequality-physical health and cognition-might relate to these theoretical models. © 2015 Wiley Periodicals, Inc.
The Matching Relation and Situation-Specific Bias Modulation in Professional Football Play Selection
Stilling, Stephanie T; Critchfield, Thomas S
2010-01-01
The utility of a quantitative model depends on the extent to which its fitted parameters vary systematically with environmental events of interest. Professional football statistics were analyzed to determine whether play selection (passing versus rushing plays) could be accounted for with the generalized matching equation, and in particular whether variations in play selection across game situations would manifest as changes in the equation's fitted parameters. Statistically significant changes in bias were found for each of five types of game situations; no systematic changes in sensitivity were observed. Further analyses suggested relationships between play selection bias and both turnover probability (which can be described in terms of punishment) and yards-gained variance (which can be described in terms of variable-magnitude reinforcement schedules). The present investigation provides a useful demonstration of association between face-valid, situation-specific effects in a domain of everyday interest, and a theoretically important term of a quantitative model of behavior. Such associations, we argue, are an essential focus in translational extensions of quantitative models. PMID:21119855
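The generalized matching equation referred to above is commonly written log(B1/B2) = a*log(R1/R2) + log b, with sensitivity a and bias log b as the fitted parameters. A minimal least-squares sketch on synthetic data; the numbers are illustrative, not the football statistics from the study.

```python
import math

def fit_generalized_matching(behavior_ratios, reinforcer_ratios):
    """Ordinary least squares for log(B1/B2) = a*log(R1/R2) + log b.
    Returns (a, log_b): sensitivity and bias."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic data generated with a = 0.8 and log b = 0.2 (a bias toward B1):
reinforcer_ratios = [0.25, 0.5, 1.0, 2.0, 4.0]
behavior_ratios = [10 ** (0.8 * math.log10(r) + 0.2) for r in reinforcer_ratios]
a, log_b = fit_generalized_matching(behavior_ratios, reinforcer_ratios)
print(round(a, 6), round(log_b, 6))  # 0.8 0.2
```

A situation-specific change in play selection of the kind the study reports would show up as a shift in log b across game situations with a roughly unchanged a.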
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
Neural electrical activity and neural network growth.
Gafarov, F M
2018-05-01
The development of the central and peripheral nervous systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, owing to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Murtonen, Mari
2015-01-01
University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand the central methodological concepts "empirical," "theoretical," "qualitative," and "quantitative." The main hypothesis was that some…
Probing lipid membrane electrostatics
NASA Astrophysics Data System (ADS)
Yang, Yi
The electrostatic properties of lipid bilayer membranes play a significant role in many biological processes. Atomic force microscopy (AFM) is highly sensitive to membrane surface potential in electrolyte solutions. With fully characterized probe tips, AFM can perform quantitative electrostatic analysis of lipid membranes. Electrostatic interactions between silicon nitride probes and a supported zwitterionic dioleoylphosphatidylcholine (DOPC) bilayer with a variable fraction of anionic dioleoylphosphatidylserine (DOPS) were measured by AFM. Classical Gouy-Chapman theory was used to model the membrane electrostatics. The nonlinear Poisson-Boltzmann equation was solved numerically with the finite element method to provide the potential distribution around the AFM tips. Theoretical tip-sample electrostatic interactions were calculated as the surface integral of both the Maxwell and osmotic stress tensors over the tip surface. The measured forces were interpreted with the theoretical forces, and the resulting surface charge densities of the membrane surfaces were in quantitative agreement with the Gouy-Chapman-Stern model of membrane charge regulation. It was demonstrated that the AFM can quantitatively detect membrane surface potential at a separation of several screening lengths, and that the AFM probe only perturbs the membrane surface potential by <2%. One important application of this technique is to estimate the dipole density of a lipid membrane. Electrostatic analysis of DOPC lipid bilayers with the AFM reveals a repulsive force between the negatively charged probe tips and the zwitterionic lipid bilayers. This unexpected interaction has been analyzed quantitatively to reveal that the repulsion is due to a weak external field created by the internal membrane dipole moment. The analysis yields a dipole moment of 1.5 Debye per lipid with a dipole potential of +275 mV for supported DOPC membranes.
This new ability to quantitatively measure the membrane dipole density in a noninvasive manner will be useful in identifying the biological effects of the dipole potential. Finally, heterogeneous model membranes were studied with fluid electric force microscopy (FEFM). Electrostatic mapping was demonstrated with 50 nm resolution. The capabilities of quantitative electrostatic measurement and lateral charge density mapping make AFM a unique and powerful probe of membrane electrostatics.
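The basic Gouy-Chapman quantities underlying this kind of analysis can be computed directly: the Debye screening length sets the separation scale, and the Grahame equation links surface charge density to surface potential for a 1:1 electrolyte. A rough sketch with illustrative values (the salt concentration and charge density below are assumptions, not the thesis's fitted membrane parameters):

```python
import math

# Physical constants (SI)
e = 1.602176634e-19        # elementary charge, C
kB = 1.380649e-23          # Boltzmann constant, J/K
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
eps_r = 78.5               # relative permittivity of water
NA = 6.02214076e23         # Avogadro constant, 1/mol

T = 298.15                 # temperature, K
c_molar = 0.1              # bulk 1:1 salt concentration, mol/L (assumed)
n0 = c_molar * 1000 * NA   # bulk ion number density, 1/m^3

# Debye screening length for a 1:1 electrolyte
kappa_inv = math.sqrt(eps_r * eps0 * kB * T / (2 * n0 * e**2))

# Grahame equation inverted for the surface potential,
# given an assumed surface charge density
sigma = -0.05              # C/m^2 (illustrative)
psi0 = (2 * kB * T / e) * math.asinh(
    sigma / math.sqrt(8 * eps_r * eps0 * kB * T * n0))

print(f"Debye length: {kappa_inv * 1e9:.2f} nm")
print(f"Surface potential: {psi0 * 1000:.1f} mV")
```

At 0.1 M the screening length is just under 1 nm, which is why detecting surface potential at several screening lengths, as reported above, is a stringent test of tip characterization.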
2014-01-01
We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
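The cross-validated QSPR workflow described above can be sketched with a simple regularized linear model; the descriptors and log S values here are synthetic stand-ins, not the DLS-100 data or CDK descriptors:

```python
import numpy as np

# 10-fold cross-validated ridge regression on descriptor vectors,
# in the spirit of a QSPR solubility model. All data are synthetic.
rng = np.random.default_rng(1)
n_mols, n_desc = 100, 8
X = rng.normal(size=(n_mols, n_desc))          # stand-in descriptors
w_true = rng.normal(size=n_desc)
y = X @ w_true + rng.normal(0.0, 0.5, n_mols)  # stand-in log S values

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge solution: (X^T X + lam*I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

folds = np.array_split(rng.permutation(n_mols), 10)
sq_err = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n_mols), test_idx)
    w = ridge_fit(X[train_idx], y[train_idx])
    sq_err.extend((X[test_idx] @ w - y[test_idx]) ** 2)

rmse = float(np.sqrt(np.mean(sq_err)))
print(f"10-fold CV RMSE: {rmse:.2f} log S units")
```

Here the CV RMSE approaches the injected noise level, illustrating the general point that cross-validated error bounds what any descriptor set can deliver on noisy solubility labels.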
Empirical analysis of storm-time energetic electron enhancements
NASA Astrophysics Data System (ADS)
O'Brien, Thomas Paul, III
This Ph.D. thesis documents a program for studying the appearance of energetic electrons in the Earth's outer radiation belts that is associated with many geomagnetic storms. The dynamic evolution of the electron radiation belts is an outstanding empirical problem in both theoretical space physics and its applied sibling, space weather. The project emphasizes the development of empirical tools and their use in testing several theoretical models of the energization of the electron belts. First, I develop the Statistical Asynchronous Regression technique to provide proxy electron fluxes throughout the parts of the radiation belts explored by geosynchronous and GPS spacecraft. Next, I show that a theoretical adiabatic model can relate the local time asymmetry of the proxy geosynchronous fluxes to the asymmetry of the geomagnetic field. Then, I perform a superposed epoch analysis on the proxy fluxes at local noon to identify magnetospheric and interplanetary precursors of relativistic electron enhancements. Finally, I use statistical and neural network phase space analyses to determine the hourly evolution of flux at a virtual stationary monitor. The dynamic equation quantitatively identifies the importance of different drivers of the electron belts. This project provides empirical constraints on theoretical models of electron acceleration.
Danov, Krassimir D; Georgiev, Mihail T; Kralchevsky, Peter A; Radulova, Gergana M; Gurkov, Theodor D; Stoyanov, Simeon D; Pelan, Eddie G
2018-01-01
Suspensions of colloid particles possess the remarkable property of solidifying upon the addition of a minimal amount of a second liquid that preferentially wets the particles. The hardening is due to the formation of capillary bridges (pendular rings), which connect the particles. Here, we review works on the mechanical properties of such suspensions and related works on the capillary-bridge force, and present new rheological data for the weakly studied concentration range 30-55 vol% particles. The mechanical strength of the solidified capillary suspensions, characterized by the yield stress Y, is measured at the elastic limit for various volume fractions of the particles and the preferentially wetting liquid. A quantitative theoretical model is developed, which relates Y to the maximum of the capillary-bridge force, projected on the shear plane. A semi-empirical expression for the mean number of capillary bridges per particle is proposed. The model agrees very well with the experimental data and gives a quantitative description of the yield stress, which increases with the rise of interfacial tension and with the volume fractions of particles and capillary bridges, but decreases with the rise of particle radius and contact angle. The quantitative description of the capillary force is based on the exact theory and numerical calculation of the capillary bridge profile at various bridge volumes and contact angles. An analytical formula for Y is also derived. The comparison of the theoretical and experimental strain at the elastic limit reveals that the fluidization of the capillary suspension takes place only in a deformation zone of thickness up to several hundred particle diameters, which is adjacent to the rheometer's mobile plate. The reported experimental results refer to a water-continuous suspension with hydrophobic particles and oily capillary bridges.
The comparison of data for bridges from soybean oil and hexadecane surprisingly indicates that the yield strength is greater for the suspension with soybean oil, despite its lower interfacial tension against water. The result can be explained by the different contact angles of the two oils, in agreement with the theoretical predictions. The results could contribute to a better understanding, quantitative prediction, and control of the mechanical properties of three-phase (solid/liquid/liquid) capillary suspensions. Copyright © 2017 Elsevier B.V. All rights reserved.
Ranking and validation of spallation models for isotopic production cross sections of heavy residua
NASA Astrophysics Data System (ADS)
Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef
2017-07-01
The production cross sections of isotopically identified residual nuclei from spallation reactions induced by 136Xe projectiles at 500 A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from qualitative inspection of the data reproduction. The disagreement was caused by the sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
Modeling the Effect of Nail Corrosion on the Lateral Strength of Joints
Samuel L. Zelinka; Douglas R. Rammer
2012-01-01
This article describes a theoretical method of linking fastener corrosion in wood connections to potential reduction in lateral shear strength. It builds upon published quantitative data of corrosion rates of metals in contact with treated wood for several different wood preservatives. These corrosion rates are then combined with yield theory equations to calculate a...
ERIC Educational Resources Information Center
Rubenson, Kjell; Desjardins, Richard
2009-01-01
Quantitative and qualitative findings on barriers to participation in adult education are reviewed and some of the defining parameters that may explain observed national differences are considered. A theoretical perspective based on bounded agency is put forth to take account of the interaction between structurally and individually based barriers…
ERIC Educational Resources Information Center
Lint, Anna H.
2013-01-01
This quantitative study evaluated and investigated the theoretical underpinnings of Kember's (1995) student progress model, which examines the direct or indirect effects of student persistence in online education by identifying the relationships between variables. The primary method of data collection in this study was a survey by exploring the…
Weaknesses of South African Education in the Mirror Image of International Educational Development
ERIC Educational Resources Information Center
Wolhuter, C. C.
2014-01-01
The aim of this article is to present a systematic, holistic evaluation of the South African education system, using international benchmarks as the yardstick. A theoretical model for the evaluation of a national education project is constructed. This consists of three dimensions, namely: a quantitative dimension, a qualitative dimension, and an…
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
Speech motor development: Integrating muscles, movements, and linguistic units.
Smith, Anne
2006-01-01
A fundamental problem for those interested in human communication is to determine how ideas and the various units of language structure are communicated through speaking. The physiological concepts involved in the control of muscle contraction and movement are theoretically distant from the processing levels and units postulated to exist in language production models. A review of the literature on adult speakers suggests that they engage complex, parallel processes involving many units, including sentence, phrase, syllable, and phoneme levels. Infants must develop multilayered interactions among language and motor systems. This discussion describes recent studies of speech motor performance relative to varying linguistic goals during the childhood, teenage, and young adult years. Studies of the developing interactions between speech motor and language systems reveal both qualitative and quantitative differences between the developing and the mature systems. These studies provide an experimental basis for a more comprehensive theoretical account of how mappings between units of language and units of action are formed and how they function. Readers will be able to: (1) understand the theoretical differences between models of speech motor control and models of language processing, as well as the nature of the concepts used in the two different kinds of models, (2) explain the concept of coarticulation and state why this phenomenon has confounded attempts to determine the role of linguistic units, such as syllables and phonemes, in speech production, (3) describe the development of speech motor performance skills and specify quantitative and qualitative differences between speech motor performance in children and adults, and (4) describe experimental methods that allow scientists to study speech and limb motor control, as well as compare units of action used to study non-speech and speech movements.
A theoretical physicist's journey into biology: from quarks and strings to cells and whales.
West, Geoffrey B
2014-10-08
Biology will almost certainly be the predominant science of the twenty-first century but, for it to become successfully so, it will need to embrace some of the quantitative, analytic, predictive culture that has made physics so successful. This includes the search for underlying principles, systemic thinking at all scales, the development of coarse-grained models, and closer ongoing collaboration between theorists and experimentalists. This article presents a personal, slightly provocative, perspective of a theoretical physicist working in close collaboration with biologists at the interface between the physical and biological sciences.
Critical Quantitative Inquiry in Context
ERIC Educational Resources Information Center
Stage, Frances K.; Wells, Ryan S.
2014-01-01
This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…
Rheological properties of simulated debris flows in the laboratory environment
Ling, Chi-Hai; Chen, Cheng-lung; Jan, Chyan-Deng; ,
1990-01-01
Steady debris flows with or without a snout are simulated in a 'conveyor-belt' flume using dry glass spheres of a uniform size, 5 or 14 mm in diameter, and their rheological properties are described quantitatively by constants in a generalized viscoplastic fluid (GVF) model. Close agreement of the measured velocity profiles with the theoretical ones obtained from the GVF model strongly supports the validity of a GVF model based on the continuum-mechanics approach. Further comparisons of the measured and theoretical velocity profiles, along with empirical relations among the shear stress, the normal stress, and the shear rate developed from the 'ring-shear' apparatus, determine the values of the rheological parameters in the GVF model, namely the flow-behavior index, the consistency index, and the cross-consistency index. Critical issues in the evaluation of such rheological parameters using the conveyor-belt flume and the ring-shear apparatus are thus addressed in this study.
Ryan, Gillian L; Watanabe, Naoki; Vavylonis, Dimitrios
2012-04-01
A characteristic feature of motile cells as they undergo a change in motile behavior is the development of fluctuating exploratory motions of the leading edge, driven by actin polymerization. We review quantitative models of these protrusion and retraction phenomena. Theoretical studies have been motivated by advances in experimental and computational methods that allow controlled perturbations, single molecule imaging, and analysis of spatiotemporal correlations in microscopic images. To explain oscillations and waves of the leading edge, most theoretical models propose nonlinear interactions and feedback mechanisms among different components of the actin cytoskeleton system. These mechanisms include curvature-sensing membrane proteins, myosin contraction, and autocatalytic biochemical reaction kinetics. We discuss how the combination of experimental studies with modeling promises to quantify the relative importance of these biochemical and biophysical processes at the leading edge and to evaluate their generality across cell types and extracellular environments. Copyright © 2012 Wiley Periodicals, Inc.
Li, Lin; Xu, Shuo; An, Xin; Zhang, Lu-Da
2011-10-01
In near-infrared spectral quantitative analysis, the precision of the samples' measured chemical values sets the theoretical limit on the precision of quantitative analysis with mathematical models. However, the number of samples whose chemical values can be determined accurately is small. Many models exclude samples without chemical values, and consider only those with chemical values when modeling the contents of sample compositions. To address this problem, a semi-supervised LS-SVR (S2 LS-SVR) model is proposed on the basis of LS-SVR, which can utilize samples without chemical values as well as those with chemical values. As with LS-SVR, training this model is equivalent to solving a linear system. Finally, samples of flue-cured tobacco were taken as the experimental material, and corresponding quantitative analysis models were constructed for the contents of four sample compositions (total sugar, reducing sugar, total nitrogen and nicotine) with PLS regression, LS-SVR and S2 LS-SVR. For the S2 LS-SVR model, the average relative errors between actual values and predicted ones for the four contents are 6.62%, 7.56%, 6.11% and 8.20%, respectively, and the correlation coefficients are 0.9741, 0.9733, 0.9230 and 0.9486, respectively. Experimental results show that the S2 LS-SVR model outperforms the other two, which verifies the feasibility and efficiency of the S2 LS-SVR model.
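The claim that training reduces to a linear system can be illustrated with plain (fully supervised) LS-SVR, whose dual solution satisfies [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]. A minimal numpy sketch on synthetic 1-D data (the kernel width and regularization value are illustrative choices, and the semi-supervised extension of the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(A, B, width=0.5):
    # Gaussian (RBF) kernel matrix between row-vector sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

# Synthetic regression data stand in for NIR spectra/contents
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.05, 60)

# Assemble and solve the LS-SVR dual linear system
gamma = 100.0
K = rbf_kernel(X, X)
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

# Predict at a new point: f(x) = sum_i alpha_i K(x_i, x) + b
y_pred = float(rbf_kernel(np.array([[0.5]]), X) @ alpha + b)
print(y_pred)  # close to sin(0.5)
```

The semi-supervised variant augments this system with unlabeled samples via a graph or manifold term, but the labeled core is exactly this one linear solve.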
Bayesian accounts of covert selective attention: A tutorial review.
Vincent, Benjamin T
2015-05-01
Decision making and optimal observer models offer an important theoretical approach to the study of covert selective attention. While their probabilistic formulation allows quantitative comparison to human performance, the models can be complex and their insights are not always immediately apparent. Part 1 establishes the theoretical appeal of the Bayesian approach, and introduces the way in which probabilistic approaches can be applied to covert search paradigms. Part 2 presents novel formulations of Bayesian models of 4 important covert attention paradigms, illustrating optimal observer predictions over a range of experimental manipulations. Graphical model notation is used to present models in an accessible way and Supplementary Code is provided to help bridge the gap between model theory and practical implementation. Part 3 reviews a large body of empirical and modelling evidence showing that many experimental phenomena in the domain of covert selective attention are a set of by-products. These effects emerge as the result of observers conducting Bayesian inference with noisy sensory observations, prior expectations, and knowledge of the generative structure of the stimulus environment.
Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.
2008-08-01
A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
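The dispersion fit described above is commonly done with the Voigt-model phase velocity, c(w) = sqrt(2*(mu^2 + w^2*eta^2) / (rho*(mu + sqrt(mu^2 + w^2*eta^2)))), where mu is shear modulus and eta viscosity. A hedged sketch, using a coarse grid search in place of the paper's nonlinear least-squares routine and illustrative parameter values rather than measured muscle data:

```python
import numpy as np

rho = 1000.0  # assumed tissue density, kg/m^3

def voigt_speed(omega, mu, eta):
    # Voigt-model shear wave phase velocity
    m = np.sqrt(mu**2 + (omega * eta)**2)
    return np.sqrt(2.0 * (mu**2 + (omega * eta)**2) / (rho * (mu + m)))

# Synthetic dispersive shear-wave-speed "measurements"
# generated with mu = 4 kPa, eta = 2 Pa*s
freqs = np.linspace(100, 400, 20)   # Hz
omega = 2 * np.pi * freqs
c_meas = voigt_speed(omega, 4000.0, 2.0)

# Brute-force fit for frequency-independent mu and eta
mus = np.linspace(1000, 8000, 71)   # Pa
etas = np.linspace(0.5, 5.0, 46)    # Pa*s
errs = [(np.sum((voigt_speed(omega, m, e) - c_meas) ** 2), m, e)
        for m in mus for e in etas]
_, mu_fit, eta_fit = min(errs)
print(f"mu ~ {mu_fit:.0f} Pa, eta ~ {eta_fit:.1f} Pa*s")
```

With real crawling-wave data, the measured speeds carry noise and the grid search would be replaced by a proper nonlinear least-squares fit, but the parameterization is the same.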
Hocking, Matthew C.; Hobbie, Wendy L.; Deatrick, Janet A.; Lucas, Matthew S.; Szabo, Margo M.; Volpe, Ellen M.; Barakat, Lamia P.
2012-01-01
Many childhood brain tumor survivors experience significant neurocognitive late effects across multiple domains that negatively affect quality of life. A theoretical model of survivorship suggests that family functioning and survivor neurocognitive functioning interact to affect survivor and family outcomes. This paper reviews the types of neurocognitive late effects experienced by survivors of pediatric brain tumors. Quantitative and qualitative data from three case reports of young adult survivors and their mothers are analyzed according to the theoretical model and presented in this paper to illustrate the importance of key factors presented in the model. The influence of age at brain tumor diagnosis, family functioning, and family adaptation to illness on survivor quality of life and family outcomes are highlighted. Future directions for research and clinical care for this vulnerable group of survivors are discussed. PMID:21722062
Gucciardi, Daniel F; Jackson, Ben
2015-01-01
Fostering individuals' long-term participation in activities that promote positive development such as organised sport is an important agenda for research and practice. We integrated the theories of planned behaviour (TPB) and basic psychological needs (BPN) to identify factors associated with young adults' continuation in organised sport over a 12-month period. Prospective study, including an online psycho-social assessment at Time 1 and an assessment of continuation in sport approximately 12 months later. Participants (N=292) aged between 17 and 21 years (M=18.03; SD=1.29) completed an online survey assessing the theories of planned behaviour and basic psychological needs constructs. Bayesian structural equation modelling (BSEM) was employed to test the hypothesised theoretical sequence, using informative priors for structural relations based on empirical and theoretical expectations. The analyses revealed support for the robustness of the hypothesised theoretical model in terms of the pattern of relations as well as the direction and strength of associations among the constructs derived from quantitative summaries of existing research and theoretical expectations. The satisfaction of basic psychological needs was associated with more positive attitudes, higher levels of perceived behavioural control, and more favourable subjective norms; positive attitudes and perceived behavioural control were associated with higher behavioural intentions; and both intentions and perceived behavioural control predicted sport continuation. This study demonstrated the utility of Bayesian structural equation modelling for testing the robustness of an integrated theoretical model, which is informed by empirical evidence from meta-analyses and theoretical expectations, for understanding sport continuation. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Optical Basicity and Nepheline Crystallization in High Alumina Glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Carmen P.; McCloy, John S.; Schweiger, M. J.
2011-02-25
The purpose of this study was to find compositions that increase waste loading of high-alumina wastes beyond what is currently acceptable while avoiding crystallization of nepheline (NaAlSiO4) on slow cooling. Nepheline crystallization has been shown to have a large impact on the chemical durability of high-level waste glasses. It was hypothesized that there would be some composition regions where high-alumina would not result in nepheline crystal production, compositions not currently allowed by the nepheline discriminator. Optical basicity (OB) and the nepheline discriminator (ND) are two ways of describing a given complex glass composition. This report presents the theoretical and experimental basis for these models. They are being studied together in a quadrant system as metrics to explore nepheline crystallization and chemical durability as a function of waste glass composition. These metrics were calculated for glasses with existing data and also for theoretical glasses to explore nepheline formation in Quadrant IV (passes OB metric but fails ND metric), where glasses are presumed to have good chemical durability. Several of these compositions were chosen, and glasses were made to fill poorly represented regions in Quadrant IV. To evaluate nepheline formation and chemical durability of these glasses, quantitative X-ray diffraction (XRD) analysis and the Product Consistency Test were conducted. A large amount of quantitative XRD data is collected here, both from new glasses and from glasses of previous studies that had not previously performed quantitative XRD on the phase assemblage. Appendix A critically discusses a large dataset to be considered for future quantitative studies on nepheline formation in glass. Appendix B provides a theoretical justification for choice of the oxide coefficients used to compute the OB criterion for nepheline formation.
Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...
2015-10-06
Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.
NASA Astrophysics Data System (ADS)
Moussaid, A.; Schosseler, F.; Munch, J. P.; Candau, S. J.
1993-04-01
The intensity scattered from polyacrylic acid and polymethacrylic acid solutions has been measured by small-angle neutron scattering experiments. The influence of polymer concentration, ionization degree, temperature, and salt content has been investigated. Results are in qualitative agreement with a model which predicts the existence of microphases in the unstable region of the phase diagram. Quantitative comparison with the theory is performed by fitting the theoretical structure factor to the experimental data. For a narrow range of ionization degrees, nearly quantitative agreement with the theory is found for the polyacrylic acid system.
Quantitating Antibody Uptake In Vivo: Conditional Dependence on Antigen Expression Levels
Thurber, Greg M.; Weissleder, Ralph
2010-01-01
Purpose: Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake, with some reports indicating a relationship between uptake and expression and others showing no correlation. Procedures: Using a cell line with high EpCAM expression and moderate EGFR expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. Results: As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than that of larger vascularized tumors. Conclusions: These results are consistent with the prediction that high-affinity antibody uptake is dependent on antigen expression levels for saturating doses and on delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or of transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes. PMID:20809210
Quantitating antibody uptake in vivo: conditional dependence on antigen expression levels.
Thurber, Greg M; Weissleder, Ralph
2011-08-01
Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake, with some reports indicating a relationship between uptake and expression and others showing no correlation. Using a cell line with high epithelial cell adhesion molecule expression and moderate epidermal growth factor receptor expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than that of larger vascularized tumors. These results are consistent with the prediction that high-affinity antibody uptake is dependent on antigen expression levels for saturating doses and on delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or of transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes.
The neural optimal control hierarchy for motor control
NASA Astrophysics Data System (ADS)
DeWolf, T.; Eliasmith, C.
2011-10-01
Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.
A Method for Quantifying, Visualising, and Analysing Gastropod Shell Form
Liew, Thor-Seng; Schilthuizen, Menno
2016-01-01
Quantitative analysis of organismal form is an important component for almost every branch of biology. Although generally considered an easily-measurable structure, the quantification of gastropod shell form is still a challenge because many shells lack homologous structures and have a spiral form that is difficult to capture with linear measurements. In view of this, we adopt the idea of theoretical modelling of shell form, in which the shell form is the product of aperture ontogeny profiles in terms of aperture growth trajectory that is quantified as curvature and torsion, and of aperture form that is represented by size and shape. We develop a workflow for the analysis of shell forms based on the aperture ontogeny profile, starting from the procedure of data preparation (retopologising the shell model), via data acquisition (calculation of aperture growth trajectory, aperture form and ontogeny axis), and data presentation (qualitative comparison between shell forms) and ending with data analysis (quantitative comparison between shell forms). We evaluate our methods on representative shells of the genera Opisthostoma and Plectostoma, which exhibit great variability in shell form. The outcome suggests that our method is a robust, reproducible, and versatile approach for the analysis of shell form. Finally, we propose several potential applications of our methods in functional morphology, theoretical modelling, taxonomy, and evolutionary biology. PMID:27280463
Quantitative dual-probe microdialysis: mathematical model and analysis.
Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles
2002-04-01
Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
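The fitting procedure described in this abstract, adjusting model parameters until theory matches the measured outlet concentrations, can be illustrated with a minimal sketch. The point-source diffusion solution, probe distance, and parameter values below are all hypothetical stand-ins (the paper's actual dual-probe model and its simplex search are more elaborate), and a plain grid search replaces the simplex algorithm:

```python
import math

def point_source_conc(D, r, t, M=1.0, phi=0.2):
    # Concentration at distance r, time t, from an instantaneous point
    # source of mass M diffusing with effective diffusivity D; phi is a
    # stand-in for the extracellular volume fraction (all illustrative).
    return (M / phi) * math.exp(-r * r / (4.0 * D * t)) / (4.0 * math.pi * D * t) ** 1.5

# Synthetic "outlet" data generated with a known diffusivity D_true.
D_true, r_probe = 5e-6, 0.1                  # cm^2/s and cm, hypothetical
times = [50.0, 100.0, 200.0, 400.0, 800.0]   # sampling times, s
data = [point_source_conc(D_true, r_probe, t) for t in times]

def sse(D):
    # Sum of squared deviations between theory and "experiment".
    return sum((point_source_conc(D, r_probe, t) - y) ** 2
               for t, y in zip(times, data))

# Grid search standing in for the simplex (Nelder-Mead) minimization.
candidates = [1e-6 + i * 1e-7 for i in range(100)]
D_fit = min(candidates, key=sse)
print(D_fit)
```

In practice a derivative-free optimizer such as Nelder-Mead would replace the grid, but the inversion idea is the same: match a theoretical concentration curve to data to recover the diffusion parameter.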
Structural relaxation in supercooled orthoterphenyl.
Chong, S-H; Sciortino, F
2004-05-01
We report molecular-dynamics simulation results performed for a model of molecular liquid orthoterphenyl in supercooled states, which we then compare with both experimental data and mode-coupling-theory (MCT) predictions, aiming at a better understanding of structural relaxation in orthoterphenyl. We pay special attention to the wave number dependence of the collective dynamics. It is shown that the simulation results for the model share many features with experimental data for the real system, and that MCT captures the simulation results at the semiquantitative level except for intermediate wave numbers connected to the overall size of the molecule. Theoretical results in the intermediate wave number region are found to be improved by taking into account the spatial correlation of the molecule's geometrical center. This supports the idea that unusual dynamical properties at the intermediate wave numbers, reported previously in simulation studies for the model and discernible in coherent neutron-scattering experimental data, are basically due to the coupling of the rotational motion to the geometrical-center dynamics. However, there still remain qualitative as well as quantitative discrepancies between theoretical predictions and corresponding simulation results at the intermediate wave numbers, which call for further theoretical investigation.
ERIC Educational Resources Information Center
Graham, Karen
2012-01-01
This study attempted development and validation of a measure of "intention to stay in academia" for physician assistant (PA) faculty in order to determine if the construct could be measured in way that had both quantitative and qualitative meaning. Adopting both the methodologic framework of the Rasch model and the theoretical framework…
Theoretical Framework for Interaction Game Design
2016-05-19
modeling. We take a data-driven quantitative approach to understanding conversational behaviors by measuring them using advanced sensing… current state of the art, human computing is considered to be a reasonable approach to break through the current limitation. To solicit high quality and… proper resources in conversation to enable smooth and effective interaction. The last technique is about conversation measurement, analysis, and
Oxidative dissolution of silver nanoparticles: A new theoretical approach.
Adamczyk, Zbigniew; Oćwieja, Magdalena; Mrowiec, Halina; Walas, Stanisław; Lupa, Dawid
2016-05-01
A general model of the oxidative dissolution of silver particle suspensions was developed that rigorously considers the bulk and surface solute transport. A two-step surface reaction scheme was proposed that comprises the formation of the silver oxide phase by direct oxidation and the acidic dissolution of this phase leading to silver ion release. By considering this, a complete set of equations is formulated describing oxygen and silver ion transport to and from particles' surfaces. These equations are solved in some limiting cases of nanoparticle dissolution in dilute suspensions. The obtained kinetic equations were used for the interpretation of experimental data pertinent to the dissolution kinetics of citrate-stabilized silver nanoparticles. In these kinetic measurements the role of pH and bulk suspension concentration was quantitatively evaluated by using atomic absorption spectrometry (AAS). It was shown that the theoretical model adequately reflects the main features of the experimental results, especially the significant increase in the dissolution rate at lower pH. Also, the presence of two kinetic regimes was quantitatively explained in terms of the decrease in the coverage of the fast-dissolving oxide layer. The overall silver dissolution rate constants characterizing these two regimes were determined. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.
1988-01-01
A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta based, Nb based, and Pb-alloy based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made of the mixing results with the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow for the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, which is in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.
Wang, Xu; Zhang, Xuejun
2009-02-10
This paper is based on a microinteraction principle for fabricating an RB-SiC material with a fixed abrasive. The influence of the depth formed on an RB-SiC workpiece by a diamond abrasive on the material removal rate and the surface roughness of an optical component is quantitatively discussed. A mathematical model of the material removal rate and simulation results for the surface roughness are obtained. Despite some small, predictable differences between the experimental results and the theoretical anticipation, the actual removal rate matches the theoretical prediction very well. The fixed-abrasive technology's characteristic of easy prediction is of great significance in the optical fabrication industry, so this brand-new fixed-abrasive technology has wide application possibilities.
NASA Astrophysics Data System (ADS)
Samadi, R.; Belkacem, K.; Ludwig, H.-G.; Caffau, E.; Campante, T. L.; Davies, G. R.; Kallinger, T.; Lund, M. N.; Mosser, B.; Baglin, A.; Mathur, S.; Garcia, R. A.
2013-11-01
Context. A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (τeff) and the root-mean-square (rms) brightness fluctuations (σ) associated with the granulation scale as a function of the peak frequency (νmax) of the solar-like oscillations. Aims: We aim at providing a theoretical background to the observed scaling relations based on a model developed in Paper I. Methods: We computed for each 3D model the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity on the basis of the theoretical model published in Paper I. For each PDS we derived the associated characteristic time (τeff) and the rms brightness fluctuations (σ) and compared these theoretical values with the theoretical scaling relations derived from the theoretical model and the measurements made on a large set of Kepler targets. Results: We derive theoretical scaling relations for τeff and σ, which show the same dependence on νmax as the observed scaling relations. In addition, we show that these quantities also scale as a function of the turbulent Mach number (ℳa) estimated at the photosphere. The theoretical scaling relations for τeff and σ match the observations well on a global scale. Quantitatively, the remaining discrepancies with the observations are found to be much smaller than previous theoretical calculations made for red giants. Conclusions: Our modelling provides additional theoretical support for the observed variations of σ and τeff with νmax. It also highlights the important role of ℳa in controlling the properties of the stellar granulation. However, the observations made with Kepler on a wide variety of stars cannot confirm the dependence of our scaling relations on ℳa. 
Measurements of the granulation background and detections of solar-like oscillations in a statistically sufficient number of cool dwarf stars will be required for confirming the dependence of the theoretical scaling relations with ℳa. Appendices are available in electronic form at http://www.aanda.org
Limit of a nonpreferential attachment multitype network model
NASA Astrophysics Data System (ADS)
Shang, Yilun
2017-02-01
Here, we deal with a model of a multitype network with nonpreferential attachment growth. The connection between two nodes depends asymmetrically on their types, reflecting the implication of time order in temporal networks. Based upon graph limit theory, we analytically determine the limit of the network model characterized by a kernel, in the sense that the number of copies of any fixed subgraph converges when the network size tends to infinity. The results are confirmed by extensive simulations. Our work thus provides a theoretical framework for quantitatively understanding grown temporal complex networks as a whole.
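A minimal sketch of nonpreferential attachment growth may help fix ideas. The two-type uniform-attachment rule and all parameters below are illustrative assumptions, not the kernel-based model of the paper:

```python
import random

def grow_network(n, p_type=0.5, seed=42):
    """Grow a two-type network by nonpreferential attachment: each new
    node links to one existing node chosen uniformly at random (no
    degree bias). Node types are assigned independently at birth."""
    rng = random.Random(seed)
    types = [0 if rng.random() < p_type else 1]
    edges = []
    for new in range(1, n):
        target = rng.randrange(new)   # uniform choice: nonpreferential
        types.append(0 if rng.random() < p_type else 1)
        edges.append((new, target))
    return types, edges

types, edges = grow_network(2000)
# Uniform attachment yields a random recursive tree, so the edge count
# is exactly n - 1: a trivial instance of a subgraph count stabilizing
# in the graph-limit sense as the network grows.
print(len(edges))
```

Because each new node attaches uniformly, simple subgraph counts here are deterministic; the paper's result extends this kind of convergence to general type-dependent kernels.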
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel
2014-01-15
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
Additive Genetic Variability and the Bayesian Alphabet
Gianola, Daniel; de los Campos, Gustavo; Hill, William G.; Manfredi, Eduardo; Fernando, Rohan
2009-01-01
The use of all available molecular markers in statistical models for prediction of quantitative traits has led to what could be termed a genomic-assisted selection paradigm in animal and plant breeding. This article provides a critical review of some theoretical and statistical concepts in the context of genomic-assisted genetic evaluation of animals and crops. First, relationships between the (Bayesian) variance of marker effects in some regression models and additive genetic variance are examined under standard assumptions. Second, the connection between marker genotypes and resemblance between relatives is explored, and linkages between a marker-based model and the infinitesimal model are reviewed. Third, issues associated with the use of Bayesian models for marker-assisted selection, with a focus on the role of the priors, are examined from a theoretical angle. The sensitivity of a Bayesian specification that has been proposed (called “Bayes A”) with respect to priors is illustrated with a simulation. Methods that can solve potential shortcomings of some of these Bayesian regression procedures are discussed briefly. PMID:19620397
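The first relationship reviewed above, between the variance of marker effects and the additive genetic variance, can be sketched numerically. Under the standard assumptions mentioned (Hardy-Weinberg and linkage equilibrium, marker effects independent of allele frequencies), the implied additive variance is sigma_a^2 = sigma_beta^2 * sum_j 2 p_j (1 - p_j); the marker count and parameter values below are arbitrary:

```python
import random
import statistics

random.seed(1)
m = 500                                   # number of markers (illustrative)
freqs = [random.uniform(0.05, 0.95) for _ in range(m)]
sigma2_beta = 0.01                        # common variance of marker effects

# Additive genetic variance implied by the marker model under HWE and
# linkage equilibrium: sigma_a^2 = sigma_beta^2 * sum_j 2 p_j (1 - p_j).
sigma2_a = sigma2_beta * sum(2 * p * (1 - p) for p in freqs)

# Monte Carlo check: draw effects and genotypes, then measure the
# variance of the resulting breeding values directly.
effects = [random.gauss(0, sigma2_beta ** 0.5) for _ in range(m)]

def breeding_value():
    # Genotype coded as 0/1/2 copies of the reference allele per locus.
    return sum(b * sum(random.random() < p for _ in range(2))
               for b, p in zip(effects, freqs))

bvs = [breeding_value() for _ in range(2000)]
print(sigma2_a, statistics.variance(bvs))
```

The two printed numbers agree up to Monte Carlo noise, illustrating why the Bayesian variance assigned to marker effects maps directly onto additive genetic variance under these assumptions.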
Kanazawa, Kiyoshi; Sueshige, Takumi; Takayasu, Hideki; Takayasu, Misako
2018-03-30
A microscopic model is established for financial Brownian motion from the direct observation of the dynamics of high-frequency traders (HFTs) in a foreign exchange market. Furthermore, a theoretical framework parallel to molecular kinetic theory is developed for the systematic description of the financial market from microscopic dynamics of HFTs. We report first on a microscopic empirical law of traders' trend-following behavior by tracking the trajectories of all individuals, which quantifies the collective motion of HFTs but has not been captured in conventional order-book models. We next introduce the corresponding microscopic model of HFTs and present its theoretical solution paralleling molecular kinetic theory: Boltzmann-like and Langevin-like equations are derived from the microscopic dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy. Our model is the first microscopic model that has been directly validated through data analysis of the microscopic dynamics, exhibiting quantitative agreements with mesoscopic and macroscopic empirical results.
NASA Astrophysics Data System (ADS)
Kanazawa, Kiyoshi; Sueshige, Takumi; Takayasu, Hideki; Takayasu, Misako
2018-03-01
A microscopic model is established for financial Brownian motion from the direct observation of the dynamics of high-frequency traders (HFTs) in a foreign exchange market. Furthermore, a theoretical framework parallel to molecular kinetic theory is developed for the systematic description of the financial market from microscopic dynamics of HFTs. We report first on a microscopic empirical law of traders' trend-following behavior by tracking the trajectories of all individuals, which quantifies the collective motion of HFTs but has not been captured in conventional order-book models. We next introduce the corresponding microscopic model of HFTs and present its theoretical solution paralleling molecular kinetic theory: Boltzmann-like and Langevin-like equations are derived from the microscopic dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy. Our model is the first microscopic model that has been directly validated through data analysis of the microscopic dynamics, exhibiting quantitative agreements with mesoscopic and macroscopic empirical results.
Indirect scaling methods for testing quantitative emotion theories.
Junge, Martin; Reisenzein, Rainer
2013-01-01
Two studies investigated the utility of indirect scaling methods, based on graded pair comparisons, for the testing of quantitative emotion theories. In Study 1, we measured the intensity of relief and disappointment caused by lottery outcomes, and in Study 2, the intensity of disgust evoked by pictures, using both direct intensity ratings and graded pair comparisons. The stimuli were systematically constructed to reflect variables expected to influence the intensity of the emotions according to theoretical models of relief/disappointment and disgust, respectively. Two probabilistic scaling methods were used to estimate scale values from the pair comparison judgements: Additive functional measurement (AFM) and maximum likelihood difference scaling (MLDS). The emotion models were fitted to the direct and indirect intensity measurements using nonlinear regression (Study 1) and analysis of variance (Study 2). Both studies found substantially improved fits of the emotion models for the indirectly determined emotion intensities, with their advantage being evident particularly at the level of individual participants. The results suggest that indirect scaling methods yield more precise measurements of emotion intensity than rating scales and thereby provide stronger tests of emotion theories in general and quantitative emotion theories in particular.
Quantitative genetic models of sexual conflict based on interacting phenotypes.
Moore, Allen J; Pizzari, Tommaso
2005-05-01
Evolutionary conflict arises between reproductive partners when alternative reproductive opportunities are available. Sexual conflict can generate sexually antagonistic selection, which mediates sexual selection and intersexual coevolution. However, despite intense interest, the evolutionary implications of sexual conflict remain unresolved. We propose a novel theoretical approach to study the evolution of sexually antagonistic phenotypes based on quantitative genetics and the measure of social selection arising from male-female interactions. We consider the phenotype of one sex as both a genetically influenced evolving trait as well as the (evolving) social environment in which the phenotype of the opposite sex evolves. Several important points emerge from our analysis, including the relationship between direct selection on one sex and indirect effects through selection on the opposite sex. We suggest that the proposed approach may be a valuable tool to complement other theoretical approaches currently used to study sexual conflict. Most importantly, our approach highlights areas where additional empirical data can help clarify the role of sexual conflict in the evolutionary process.
Stability and Hopf Bifurcation in a HIV-1 System with Multitime Delays
NASA Astrophysics Data System (ADS)
Zhao, Lingyan; Liu, Haihong; Yan, Fang
In this paper, we propose a mathematical model for HIV-1 infection with three time delays. The model examines a viral therapy for controlling infections by using an engineered virus to selectively eliminate infected cells. In our model, the three time delays represent the latent period of the pathogen virus, the pathogen virus production period, and the recombinant (genetically modified) virus production period, respectively. Detailed theoretical analyses have demonstrated that the values of the three delays can affect the stability of equilibrium solutions and can also lead to Hopf bifurcation and oscillating solutions of the system. Moreover, we give the conditions for the existence of a stable positive equilibrium solution and of Hopf bifurcation. Further, the properties of the Hopf bifurcation are discussed. These theoretical results indicate that the delays play an important role in determining the dynamic behavior quantitatively; they are therefore essential and should not be neglected when controlling HIV-1 infections.
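The destabilizing role of a delay can be illustrated with the textbook delayed logistic (Hutchinson) equation, which undergoes a Hopf bifurcation when r*tau exceeds pi/2. This is a one-delay stand-in, not the paper's three-delay HIV-1 model; the Euler scheme and parameter values are illustrative:

```python
def delayed_logistic(r=1.0, tau=2.0, x0=0.1, dt=0.01, t_end=100.0):
    """Fixed-step Euler integration of x'(t) = r*x(t)*(1 - x(t - tau))
    with constant history x0 on [-tau, 0]. For r*tau > pi/2 the
    equilibrium x = 1 loses stability through a Hopf bifurcation and
    sustained oscillations appear."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)                  # history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        x, x_lag = xs[-1], xs[-1 - lag]    # current and delayed values
        xs.append(x + dt * r * x * (1.0 - x_lag))
    tail = xs[-2000:]                      # late-time behaviour only
    return min(tail), max(tail)

lo, hi = delayed_logistic()
print(lo, hi)   # a wide late-time range indicates sustained oscillation
```

With r*tau = 2 > pi/2 the late-time trajectory oscillates with finite amplitude rather than settling at the equilibrium, the same qualitative mechanism the paper analyzes for its three delays.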
Stochastic Simulation of Actin Dynamics Reveals the Role of Annealing and Fragmentation
Fass, Joseph; Pak, Chi; Bamburg, James; Mogilner, Alex
2008-01-01
Recent observations of F-actin dynamics call for theoretical models to interpret and understand the quantitative data. A number of existing models rely on simplifications and do not take into account F-actin fragmentation and annealing. We use Gillespie’s algorithm for stochastic simulations of the F-actin dynamics including fragmentation and annealing. The simulations vividly illustrate that fragmentation and annealing have little influence on the shape of the polymerization curve and on nucleotide profiles within filaments but drastically affect the F-actin length distribution, making it exponential. We find that recent surprising measurements of high length diffusivity at the critical concentration cannot be explained by fragmentation and annealing events unless both fragmentation rates and frequency of undetected fragmentation and annealing events are greater than previously thought. The simulations compare well with experimentally measured actin polymerization data and lend additional support to a number of existing theoretical models. PMID:18279896
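The core of Gillespie's algorithm, drawing an exponential waiting time from the total propensity and then choosing a reaction in proportion to its propensity, can be sketched for a single filament. Fragmentation and annealing, the focus of the paper, are omitted here for brevity, and all rate constants are arbitrary:

```python
import random

def gillespie_polymer(n_monomers=1000, k_on=0.01, k_off=5.0,
                      t_end=10.0, seed=7):
    """Minimal Gillespie simulation of one filament growing from a
    monomer pool: elongation (propensity k_on * free monomers) and
    depolymerization (propensity k_off). Fragmentation and annealing,
    which the paper shows reshape the length distribution, are omitted."""
    rng = random.Random(seed)
    free, length, t = n_monomers, 5, 0.0
    while t < t_end:
        a_on = k_on * free                 # monomer-addition propensity
        a_off = k_off if length > 1 else 0.0
        a_tot = a_on + a_off
        if a_tot == 0:
            break
        t += rng.expovariate(a_tot)        # exponential waiting time
        if rng.random() < a_on / a_tot:    # pick reaction by propensity
            free, length = free - 1, length + 1
        else:
            free, length = free + 1, length - 1
    return free, length

free, length = gillespie_polymer()
print(free + length)  # monomer conservation: always 1005
```

Extending this sketch to many filaments with fragmentation and annealing channels is exactly what turns the simple two-reaction loop into the full simulation the paper describes.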
NASA Astrophysics Data System (ADS)
Li, Kun-Dar; Huang, Po-Yu
2017-12-01
In this study, a numerical approach was applied to simulate a process of directional vapor deposition and to model the growth and evolution of surface morphologies of crystallographic thin-film structures. The critical factors affecting the surface morphologies in a deposition process, such as the crystallographic symmetry, anisotropic interfacial energy, shadowing effect, and deposition rate, were all included in the theoretical model. By altering the parameters of crystallographic symmetry in the structures, faceted nano-columns with rectangular and hexagonal shapes were established in the simulation results. Furthermore, various parameters in the numerical calculations were adjusted to reveal the influences of the anisotropic strength and the deposition rate on the formation of crystallographic structures. Not only the morphologies but also the surface roughnesses for different processing conditions were distinctly demonstrated through quantitative analysis of the simulations.
NASA Astrophysics Data System (ADS)
Hudson, Richard
2017-07-01
This paper [4] - referred to below as 'LXL' - is an excellent example of cross-disciplinary work which brings together three very different disciplines, each with its different methods: quantitative computational linguistics (exploring big data), psycholinguistics (using experiments with human subjects) and theoretical linguistics (building models based on language descriptions). The measured unit is the dependency between two words, as defined by theoretical linguistics, and the question is how the length of this dependency affects the choices made by writers, as revealed in big data from a wide range of languages.
Rearrangement of valence neutrons in the neutrinoless double-β decay of 136Xe
NASA Astrophysics Data System (ADS)
Szwec, S. V.; Kay, B. P.; Cocolios, T. E.; Entwisle, J. P.; Freeman, S. J.; Gaffney, L. P.; Guimarães, V.; Hammache, F.; McKee, P. P.; Parr, E.; Portail, C.; Schiffer, J. P.; de Séréville, N.; Sharp, D. K.; Smith, J. F.; Stefan, I.
2016-11-01
A quantitative description of the change in ground-state neutron occupancies between 136Xe and 136Ba, the initial and final state in the neutrinoless double-β decay of 136Xe, has been extracted from precision measurements of the cross sections of single-neutron-adding and -removing reactions. Comparisons are made to recent theoretical calculations of the same properties using various nuclear-structure models. These are the same calculations used to determine the magnitude of the nuclear matrix elements for the process, which at present disagree with each other by factors of 2 or 3. The experimental neutron occupancies show some disagreement with the theoretical calculations.
Neutron multiplicity counting: Confidence intervals for reconstruction parameters
Verbeke, Jerome M.
2016-03-09
From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of the masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile-material mass and multiplication space. These uncertainties will enable us to quantitatively evaluate proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.
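The sequential-Bayesian idea can be sketched in a heavily simplified form: a gridded posterior over a single Poisson count rate, updated one observation at a time, from which a credible interval is read off. The Poisson likelihood, rate grid, and counts below are illustrative assumptions, not the paper's fission-chain model:

```python
import math

def posterior_over_rates(counts, rate_grid, prior):
    """Sequentially update a gridded posterior over a Poisson count rate.
    After each observed count the posterior is renormalized, so the
    uncertainty can be reported in real time as data arrive."""
    post = list(prior)
    for k in counts:
        post = [p * math.exp(-r) * r ** k / math.factorial(k)
                for p, r in zip(post, rate_grid)]
        norm = sum(post)
        post = [p / norm for p in post]
    return post

def central_credible_interval(rate_grid, post, level=0.95):
    """Central credible interval containing `level` posterior mass."""
    tail = (1.0 - level) / 2.0
    cum, lo, hi = 0.0, rate_grid[0], rate_grid[-1]
    for r, p in zip(rate_grid, post):
        new = cum + p
        if cum < tail <= new:
            lo = r
        if cum < 1.0 - tail <= new:
            hi = r
        cum = new
    return lo, hi
```

With counts drawn around a rate of 4, the posterior mean settles near 4 and the 95% interval brackets it; as more counts arrive the interval narrows, which is what allows a portal monitor to stop once a preset confidence is reached.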
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorooshian, S.; Bales, R.C.; Gupta, V.K.
1992-02-01
In order to better understand the implications of acid deposition in watershed systems in the Sierra Nevada, the California Air Resources Board (CARB) initiated an intensive integrated watershed study at Emerald Lake in Sequoia National Park. The comprehensive nature of the data obtained from these studies provided an opportunity to develop a quantitative description of how watershed characteristics and inputs to the watershed influence within-watershed fluxes, chemical composition of streams and lakes, and, therefore, biotic processes. Two different but closely related modeling approaches were followed. In the first, the emphasis was placed on the development of systems-theoretic models. In the second approach, development of a compartmental model was undertaken. The systems-theoretic effort results in simple time-series models that allow consideration of the stochastic properties of model errors. The compartmental model (the University of Arizona Alpine Hydrochemical Model (AHM)) is a comprehensive and detailed description of the various interacting physical and chemical processes occurring on the watershed.
NASA Astrophysics Data System (ADS)
Hunt, Allen G.; Sahimi, Muhammad
2017-12-01
We describe the most important developments in the application of three theoretical tools to modeling the morphology of porous media and the flow and transport processes within them. One tool is percolation theory. Although the possibility of using percolation theory to describe flow and transport processes in porous media was first raised over 40 years ago, new models and concepts, as well as new variants of the original percolation model, are still being developed for various applications to flow phenomena in porous media. The other two approaches, closely related to percolation theory, are critical-path analysis, which is applicable when porous media are highly heterogeneous, and the effective-medium approximation ("poor man's percolation"), which provides a simple and, under certain conditions, quantitatively correct description of transport in porous media in which percolation-type disorder is relevant. Applications to topics in the geosciences include predictions of the hydraulic conductivity and air permeability, solute and gas diffusion, which are particularly important in ecohydrological applications and land-surface interactions, and multiphase flow in porous media, as well as non-Gaussian solute transport and the flow morphologies associated with imbibition into unsaturated fractures. We describe new applications of percolation theory of solute transport to chemical weathering and soil formation, geomorphology, and elemental cycling through the terrestrial Earth surface. Wherever quantitatively accurate predictions of such quantities are relevant, so are the techniques presented here. Whenever possible, the theoretical predictions are compared with the relevant experimental data. In practically all cases, the agreement between the theoretical predictions and the data is excellent. Also discussed are possible future directions in the application of such concepts to many other phenomena in the geosciences.
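The central percolation quantity, the probability that a connected cluster spans a finite sample, can be estimated with a few lines of Monte Carlo. The sketch below uses site percolation on a square lattice; the lattice size, trial count, and seed are illustrative choices, not parameters from the review:

```python
import random

def spans(L, p, rng):
    """One realization of site percolation on an L x L square lattice.
    Returns True if occupied sites connect the top row to the bottom row."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    stack = [(0, j) for j in range(L) if occ[0][j]]  # start from top row
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == L - 1:
            return True  # reached the bottom row: spanning cluster exists
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and occ[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def spanning_probability(L, p, trials=200, seed=1):
    """Monte Carlo estimate of the spanning probability at occupation p."""
    rng = random.Random(seed)
    return sum(spans(L, p, rng) for _ in range(trials)) / trials
```

Sweeping `p` shows the sharp transition near the known site-percolation threshold p_c ≈ 0.593 for the square lattice: well below it spanning is essentially impossible, well above it spanning is almost certain.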
Phenotypic switching in bacteria
NASA Astrophysics Data System (ADS)
Merrin, Jack
Living matter is a non-equilibrium system in which many components work in parallel to perpetuate themselves through a fluctuating environment. Physiological states or functionalities revealed by a particular environment are called phenotypes. Transitions between phenotypes may occur either spontaneously or via interaction with the environment. Even in the same environment, genetically identical bacteria can exhibit different phenotypes of a continuous or discrete nature. In this thesis, we pursued three lines of investigation into discrete phenotypic heterogeneity in bacterial populations: the quantitative characterization of the so-called bacterial persistence, a theoretical model of phenotypic switching based on those measurements, and the design of artificial genetic networks which implement this model. Persistence is the phenotype of a subpopulation of bacteria with a reduced sensitivity to antibiotics. We developed a microfluidic apparatus, which allowed us to monitor the growth rates of individual cells while applying repeated cycles of antibiotic treatments. We were able to identify distinct phenotypes (normal and persistent) and characterize the stochastic transitions between them. We also found that phenotypic heterogeneity was present prior to any environmental cue such as antibiotic exposure. Motivated by the experiments with persisters, we formulated a theoretical model describing the dynamic behavior of several discrete phenotypes in a periodically varying environment. This theoretical framework allowed us to quantitatively predict the fitness of dynamic populations and to compare survival strategies according to environmental time-symmetries. These calculations suggested that persistence is a strategy used by bacterial populations to adapt to fluctuating environments. Knowledge of the phenotypic transition rates for persistence may provide statistical information about the typical environments of bacteria. 
We also describe a design of artificial genetic networks that would implement a more general theoretical model of phenotypic switching. We will use a new cloning strategy in order to systematically assemble a large number of genetic features, such as site-specific recombination components from the R64 plasmid, which invert several coexisting DNA segments. The inversion of these segments would lead to discrete phenotypic transitions inside a living cell. These artificial phenotypic switches can be controlled precisely in experiments and may serve as a benchmark for their natural counterparts.
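The dynamics of two phenotypes with stochastic switching in a changing environment, as studied in this thesis, can be sketched as a pair of coupled linear growth equations. All rates below are invented for illustration, not fitted values from the experiments:

```python
def evolve(pop, growth, switch, dt=1e-3, steps=1000):
    """One environmental epoch: exponential growth/death of each phenotype
    plus linear switching between them, integrated with forward Euler.
    pop = (normal, persister); growth = per-capita rates in this environment;
    switch = (normal->persister, persister->normal) switching rates."""
    n, p = pop
    gn, gp = growth
    k_np, k_pn = switch
    for _ in range(steps):
        dn = (gn * n - k_np * n + k_pn * p) * dt
        dp = (gp * p + k_np * n - k_pn * p) * dt
        n, p = n + dn, p + dp
    return n, p

def cycle_total(switch):
    """Total population after one growth epoch followed by one antibiotic
    epoch (illustrative rates: normal cells grow then die fast; persisters
    neither grow nor die much)."""
    pop = (1.0, 0.0)
    pop = evolve(pop, growth=(2.0, 0.0), switch=switch)    # rich medium
    pop = evolve(pop, growth=(-5.0, -0.1), switch=switch)  # antibiotic
    return sum(pop)
```

Comparing a population that maintains a small persister fraction with one that cannot switch shows the fitness advantage of persistence in a fluctuating environment, in the spirit of the thesis's theoretical framework.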
NASA Technical Reports Server (NTRS)
Dragan, O.; Galan, N.; Sirbu, A.; Ghita, C.
1974-01-01
The design and construction of inductive transducers for measuring the vibrations in metal bars at ultrasonic frequencies are discussed. Illustrations of the inductive transducers are provided. The quantitative relations that are useful in designing the transducers are analyzed. Mathematical models are developed to substantiate the theoretical considerations. Results obtained with laboratory equipment in testing specified metal samples are included.
Development and Use of Numerical and Factual Data Bases
1983-10-01
the quantitative description of what has been accomplished by their scientific and technical endeavors. 1-3 overhead charge to the national treasury... Molecular properties calculated with the aid of quantum mechanics or the prediction of solar eclipses using celestial mechanics are examples of theoretical...system under study. Examples include phase diagrams, molecular models, geological maps, metabolic pathways. Symbolic data (F3) are data presented in
2018-02-15
models and approaches are also valid using other invasive and non - invasive technologies. Finally, we illustrate and experimentally evaluate this...2017 Project Outline q Pattern formation diversity in wild microbial societies q Experimental and mathematical analysis methodology q Skeleton...chemotaxis, nutrient degradation, and the exchange of amino acids between cells. Using both quantitative experimental methods and several theoretical
ERIC Educational Resources Information Center
van Niekerk, Eldridge; Muller, Hélène
2017-01-01
This article reports on the perceptions of school staff of professional development and empowerment as part of the long-term leadership task of principals. The long-term leadership model was used as a theoretical framework to quantitatively determine the perceptions of 118 teachers and education managers in approximately 100 schools throughout…
Cultural evolutionary theory: How culture evolves and why it matters
Creanza, Nicole; Kolodny, Oren; Feldman, Marcus W.
2017-01-01
Human cultural traits—behaviors, ideas, and technologies that can be learned from other individuals—can exhibit complex patterns of transmission and evolution, and researchers have developed theoretical models, both verbal and mathematical, to facilitate our understanding of these patterns. Many of the first quantitative models of cultural evolution were modified from existing concepts in theoretical population genetics because cultural evolution has many parallels with, as well as clear differences from, genetic evolution. Furthermore, cultural and genetic evolution can interact with one another and influence both transmission and selection. This interaction requires theoretical treatments of gene–culture coevolution and dual inheritance, in addition to purely cultural evolution. In addition, cultural evolutionary theory is a natural component of studies in demography, human ecology, and many other disciplines. Here, we review the core concepts in cultural evolutionary theory as they pertain to the extension of biology through culture, focusing on cultural evolutionary applications in population genetics, ecology, and demography. For each of these disciplines, we review the theoretical literature and highlight relevant empirical studies. We also discuss the societal implications of the study of cultural evolution and of the interactions of humans with one another and with their environment. PMID:28739941
Hanbury, Andria; Thompson, Carl; Mannion, Russell
2011-07-01
Tailored implementation strategies targeting health professionals' adoption of evidence-based recommendations are currently being developed. Research has focused on how to select an appropriate theoretical base, how to use that theoretical base to explore the local context, and how to translate theoretical constructs associated with the key factors found to influence innovation adoption into feasible and tailored implementation strategies. The reasons why an intervention is thought not to have worked are often cited as being: inappropriate choice of theoretical base; unsystematic development of the implementation strategies; and a poor evidence base to guide the process. One area of implementation research that is commonly overlooked is how to synthesize the data collected in a local context in order to identify what factors to target with the implementation strategies. This is suggested to be a critical process in the development of a theory-based intervention. The potential of multilevel modelling techniques to synthesize data collected at different hierarchical levels, for example, individual attitudes and team level variables, is discussed. Future research is needed to explore further the potential of multilevel modelling for synthesizing contextual data in implementation studies, as well as techniques for synthesizing qualitative and quantitative data.
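One common preliminary statistic when deciding whether team-level variables matter enough to warrant a multilevel model is the intraclass correlation. The one-way ANOVA estimator below (balanced groups) is a generic sketch, not a procedure prescribed by the article:

```python
def icc_oneway(groups):
    """One-way ANOVA estimate of the intraclass correlation for balanced
    groups: the share of total variance attributable to group (e.g. team)
    membership. Values near 0 suggest individual-level variation dominates;
    values near 1 suggest strong clustering that a multilevel model
    should account for."""
    k = len(groups)          # number of groups
    n = len(groups[0])       # members per group (balanced design assumed)
    grand = sum(x for g in groups for x in g) / (k * n)
    means = [sum(g) / n for g in groups]
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
```

A high ICC on, say, attitude scores clustered by team is the kind of evidence that data synthesis should happen at more than one hierarchical level.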
Quantitative non-destructive testing
NASA Technical Reports Server (NTRS)
Welch, C. S.
1985-01-01
The work undertaken during this period included two primary efforts. The first is a continuation of the previous year's theoretical development of models and data analyses for NDE using the Optical Thermal Infra-Red Measurement System (OPTITHIRMS), which involves heat injection with a laser and observation of the resulting thermal pattern with an infrared imaging system. The second is an investigation into the use of the thermoelastic effect as an effective tool for NDE. As in the past, the effort is aimed towards NDE techniques applicable to composite materials in structural applications. The theoretical development described produced several models of temperature patterns over several geometries and material types. Agreement between model data and temperature observations was obtained. A model study with one of these models investigated some fundamental difficulties with the proposed method (the primitive equation method) for obtaining diffusivity values in plates of finite thickness and supplied guidelines for avoiding these difficulties. A wide range of computing speeds was found among the various models, with a one-dimensional model based on Laplace's integral solution being both very fast and very accurate.
Schaper, Louise; Pervan, Graham
2007-01-01
The research reported in this paper describes the development, empirical validation and analysis of a model of technology acceptance by Australian occupational therapists. The study described involved the collection of quantitative data through a national survey. The theoretical significance of this work is that it uses a thoroughly constructed research model, with one of the largest sample sizes ever tested (n=1605), to extend technology acceptance research into the health sector. Results provide strong support for the model. This work reveals the complexity of the constructs and relationships that influence technology acceptance and highlights the need to include sociotechnical and system issues in studies of technology acceptance in healthcare to improve information system implementation success in this arena. The results of this study have practical and theoretical implications for health informaticians and researchers in the field of health informatics and information systems, tertiary educators, Commonwealth and State Governments and the allied health professions.
Convection driven zonal flows and vortices in the major planets.
Busse, F. H.
1994-06-01
The dynamical properties of convection in rotating cylindrical annuli and spherical shells are reviewed. Simple theoretical models and laboratory simulations of planetary convection that use the centrifugal force are emphasized. The model of columnar convection in a cylindrical annulus not only serves as a guide to the dynamical properties of convection in rotating spheres; it is also of interest as a basic physical system that exhibits several dynamical properties in their simplest form. The generation of zonal mean flows is discussed in some detail, and examples of recent numerical computations are presented. The exploration of the parameter space for the annulus model is not yet complete, and the theoretical exploration of convection in rotating spheres is still in its beginning phase. Quantitative comparisons with observations of the dynamics of planetary atmospheres will have to await the inclusion in the models of the effects of magnetic fields and of deviations from the Boussinesq approximation.
The quantitative modelling of human spatial habitability
NASA Technical Reports Server (NTRS)
Wise, James A.
1988-01-01
A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social-logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole down to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life-support concerns.
Chirumbolo, Antonio; Urbini, Flavio; Callea, Antonino; Lo Presti, Alessandro; Talamo, Alessandra
2017-01-01
One of the more visible effects of societal change is an increased feeling of uncertainty in the workforce. In fact, job insecurity represents a crucial occupational risk factor and a major job stressor that has negative consequences for both organizational well-being and individual health. Many studies have focused on the consequences of the fear and perception of losing one's job as a whole (called quantitative job insecurity), while more recently research has begun to examine more extensively the worries and perceptions of losing valued job features (called qualitative job insecurity). The vast majority of studies, however, have investigated the effects of quantitative and qualitative job insecurity separately. In this paper, we propose the Job Insecurity Integrated Model, aimed at examining the effects of quantitative and qualitative job insecurity on their short-term and long-term outcomes. This model was empirically tested in two independent studies, hypothesizing that qualitative job insecurity mediated the effects of quantitative job insecurity on different outcomes, such as work engagement and organizational identification (Study 1), and job satisfaction, commitment, psychological stress and turnover intention (Study 2). Study 1 was conducted on 329 employees in private firms, while Study 2 was conducted on 278 employees in both the public sector and private firms. Results robustly showed that qualitative job insecurity totally mediated the effects of quantitative job insecurity on all the considered outcomes. By showing that the effects of quantitative job insecurity on its outcomes pass through qualitative job insecurity, the Job Insecurity Integrated Model contributes to clarifying previous findings in job insecurity research and puts forward a framework that could profitably generate new investigations with important theoretical and practical implications. PMID:29250013
Basto, Isabel; Pinheiro, Patrícia; Stiles, William B; Rijo, Daniel; Salgado, João
2017-07-01
The assimilation model describes the change process in psychotherapy. In this study we analyzed the relation of assimilation with changes in symptom intensity, measured session by session, and changes in emotional valence, measured for each emotional episode, in the case of a 33-year-old woman treated for depression with cognitive-behavioral therapy. Results showed the theoretically expected negative relation between assimilation of the client's main concerns and symptom intensity, and the relation between assimilation levels and emotional valence corresponded closely to the assimilation model's theoretical feelings curve. The results show how emotions work as markers of the client's current assimilation level, which could help the therapist adjust the intervention, moment by moment, to the client's needs.
NASA Astrophysics Data System (ADS)
Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.
2010-11-01
AC losses in a superconductor strip are numerically evaluated by means of a finite element method formulated with a current vector potential. The expressions of AC losses in an infinite slab that corresponds to a simple model of infinitely stacked strips are also derived theoretically. It is assumed that the voltage-current characteristics of the superconductors are represented by Bean's critical state model. The typical operation pattern of a Superconducting Magnetic Energy Storage (SMES) coil with direct and alternating transport currents in an external AC magnetic field is taken into account as the electromagnetic environment for both the single strip and the infinite slab. By using the obtained results of AC losses, the influences of the transport currents on the total losses are discussed quantitatively.
Lehnert, Teresa; Figge, Marc Thilo
2017-01-01
Mathematical modeling and computer simulations have become an integral part of modern biological research. The strength of theoretical approaches is in the simplification of complex biological systems. We here consider the general problem of receptor-ligand binding in the context of antibody-antigen binding. On the one hand, we establish a quantitative mapping between macroscopic binding rates of a deterministic differential equation model and their microscopic equivalents as obtained from simulating the spatiotemporal binding kinetics by stochastic agent-based models. On the other hand, we investigate the impact of various properties of B cell-derived receptors, such as their dimensionality of motion, morphology, and binding valency, on the receptor-ligand binding kinetics. To this end, we implemented an algorithm that simulates antigen binding by B cell-derived receptors with a Y-shaped morphology that can move in different dimensionalities, i.e., either as membrane-anchored receptors or as soluble receptors. The mapping of the macroscopic and microscopic binding rates allowed us to quantitatively compare different agent-based model variants for the different types of B cell-derived receptors. Our results indicate that the dimensionality of motion governs the binding kinetics and that this predominant impact is quantitatively compensated by the bivalency of these receptors.
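The macroscopic side of such a mapping, a deterministic differential equation for receptor-ligand binding, can be sketched for the simplest monovalent case. The rate constants and totals below are illustrative, not values from the study:

```python
def bound_complex(r_tot, l_tot, k_on, k_off, dt=1e-4, steps=200_000):
    """Deterministic mass-action kinetics for monovalent binding R + L <-> C:
        dC/dt = k_on*(r_tot - C)*(l_tot - C) - k_off*C,
    integrated with forward Euler until (approximate) equilibrium. This is
    the macroscopic half of the mapping; the microscopic half would be a
    stochastic agent-based simulation whose rates are calibrated to match."""
    c = 0.0
    for _ in range(steps):
        c += dt * (k_on * (r_tot - c) * (l_tot - c) - k_off * c)
    return c
```

At equilibrium k_on(R_tot − C)(L_tot − C) = k_off·C; with all quantities set to 1 this quadratic gives C = (3 − √5)/2 ≈ 0.382, which the integration recovers and which an agent-based variant with the mapped microscopic rates should reproduce.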
Multi-scale Multi-mechanism Toughening of Hydrogels
NASA Astrophysics Data System (ADS)
Zhao, Xuanhe
Hydrogels are widely used as scaffolds for tissue engineering, vehicles for drug delivery, actuators for optics and fluidics, and model extracellular matrices for biological studies. The scope of hydrogel applications, however, is often severely limited by their mechanical properties. Inspired by the mechanics and hierarchical structures of tough biological tissues, we propose that a general principle for the design of tough hydrogels is to implement two mechanisms: dissipating mechanical energy and maintaining high elasticity. A particularly promising design strategy is to integrate multiple pairs of such mechanisms across multiple length scales into a hydrogel. We develop a multiscale theoretical framework to quantitatively guide the design of tough hydrogels. On the network level, we have developed micro-physical models to characterize the evolution of polymer networks under deformation. On the continuum level, we have implemented constitutive laws formulated from the network-level models into a coupled cohesive-zone and Mullins-effect model to quantitatively predict crack propagation and the fracture toughness of hydrogels. Guided by the design principle and the quantitative model, we will demonstrate a set of new hydrogels that are based on diverse types of polymers yet achieve extremely high toughness, superior to natural counterparts such as cartilage. The work was supported by NSF (No. CMMI-1253495) and ONR (No. N00014-14-1-0528).
Quantitative interpretations of Visible-NIR reflectance spectra of blood.
Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H
2008-10-27
This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The new model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of a thorough solution of the scattering and absorption problem in order to accurately resolve the optically relevant parameters of blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, the wavelength-dependent refractive indices of the basic chemical constituents of the blood culture components are needed. Second, multi-wavelength measurements are required, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom, i.e., the number of optically relevant parameters, of the blood culture system. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples spanning an extensive range of the relevant parameters.
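The "measurements equal to the degrees of freedom" requirement can be illustrated with a deliberately simplified linear stand-in: two components measured at two characteristic wavelengths, inverted directly. The paper's actual model inverts diffuse-reflectance spectra through photon diffusion and Mie theory, which is nonlinear; the coefficients below are invented:

```python
def two_component_concentrations(absorbance, eps):
    """Invert A_i = sum_j eps[i][j] * c_j for two unknown concentrations
    from measurements at two wavelengths (measurements = degrees of
    freedom), by direct 2x2 matrix inversion. A Beer-Lambert-like linear
    sketch, not the paper's diffuse-reflectance model."""
    (e11, e12), (e21, e22) = eps  # rows: wavelengths; columns: components
    a1, a2 = absorbance
    det = e11 * e22 - e12 * e21   # must be nonzero: wavelengths must be
    return ((a1 * e22 - a2 * e12) / det,   # spectrally distinguishing
            (e11 * a2 - e21 * a1) / det)
```

With fewer measurements than unknowns the system is underdetermined, which is the sense in which the characteristic wavelengths must match the number of optically relevant parameters.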
Towards Extending Forward Kinematic Models on Hyper-Redundant Manipulator to Cooperative Bionic Arms
NASA Astrophysics Data System (ADS)
Singh, Inderjeet; Lakhal, Othman; Merzouki, Rochdi
2017-01-01
Forward kinematics is a stepping stone towards finding an inverse solution and, subsequently, a dynamic model of a robot. A study and comparison of various Forward Kinematic Models (FKMs) is therefore necessary for robot design. This paper compares three FKMs on the same hyper-redundant Compact Bionic Handling Assistant (CBHA) manipulator under the same conditions, with the aim of informing the modeling of cooperative bionic manipulators. Two of the methods are quantitative, the Arc Geometry HTM (Homogeneous Transformation Matrix) method and the Dual Quaternion method, while the third is a hybrid method that combines quantitative and qualitative approaches. The methods are compared theoretically, and experimental results are discussed to add further insight. HTM, the most widely used and accepted technique, is taken as the reference, and the trajectory deviations of the other techniques are measured against it to determine which method most accurately captures the kinematic behavior of the CBHA under real-time control.
Quantitative analysis of intra-Golgi transport shows intercisternal exchange for all cargo
Dmitrieff, Serge; Rao, Madan; Sens, Pierre
2013-01-01
The mechanisms controlling the transport of proteins through the Golgi stack of mammalian and plant cells are the subject of intense debate, with two models, cisternal progression and intercisternal exchange, emerging as major contenders. A variety of transport experiments have claimed support for each of these models. We reevaluate these experiments using a single quantitative coarse-grained framework of intra-Golgi transport that accounts for both transport models and their many variants. Our analysis makes a definitive case for the existence of intercisternal exchange both for small membrane proteins and for large protein complexes; this implies that membrane structures larger than the typical protein-coated vesicles must be involved in transport. Notwithstanding, we find that current observations on protein transport cannot rule out cisternal progression as contributing significantly to the transport process. To discriminate between the different models of intra-Golgi transport, we suggest experiments and an analysis based on our extended theoretical framework that compare the dynamics of transiting and resident proteins. PMID:24019488
NASA Astrophysics Data System (ADS)
Anderson, O. Roger
The rate of information processing during science learning and the efficiency of the learner in mobilizing relevant information in long-term memory as an aid in transmitting newly acquired information to stable storage in long-term memory are fundamental aspects of science content acquisition. These cognitive processes, moreover, may be substantially related in tempo and quality of organization to the efficiency of higher thought processes such as divergent thinking and problem-solving ability that characterize scientific thought. As a contribution to our quantitative understanding of these fundamental information processes, a mathematical model of information acquisition is presented and empirically evaluated in comparison to evidence obtained from experimental studies of science content acquisition. Computer-based models are used to simulate variations in learning parameters and to generate the theoretical predictions to be empirically tested. The initial tests of the predictive accuracy of the model show close agreement between predicted and actual mean recall scores in short-term learning tasks. Implications of the model for human information acquisition and possible future research are discussed in the context of the unique theoretical framework of the model.
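Anderson's specific model is not reproduced in the abstract; as a hedged stand-in, the simplest first-order acquisition law (each item independently fixed in long-term storage at a constant rate, an assumption of this sketch) already yields testable mean recall predictions of the kind compared above:

```python
import math

def predicted_recall(n_items, rate, study_time):
    """Mean recall under a first-order acquisition law: each item is
    transferred to stable long-term storage with probability
    1 - exp(-rate * study_time)."""
    return n_items * (1.0 - math.exp(-rate * study_time))
```

Fitting `rate` to observed recall at one study duration and predicting recall at other durations mirrors the predicted-versus-actual comparison described in the abstract.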
The fluid mechanics of thrombus formation
NASA Technical Reports Server (NTRS)
1972-01-01
Experimental data are presented for the growth of thrombi (blood clots) in a stagnation point flow of fresh blood. Thrombus shape, size and structure are shown to depend on local flow conditions. The evolution of a thrombus is described in terms of a physical model that includes platelet diffusion, a platelet aggregation mechanism, and diffusion and convection of the chemical species responsible for aggregation. Diffusion-controlled and convection-controlled regimes are defined by flow parameters and thrombus location, and the characteristic growth pattern in each regime is explained. Quantitative comparisons with an approximate theoretical model are presented, and a more general model is formulated.
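The diffusion- versus convection-controlled distinction can be sketched with a Péclet number comparing the two transport rates; the threshold value of 1 and the parameter values in the test are illustrative, not taken from the experiments:

```python
def peclet(shear_rate, length, diffusivity):
    """Peclet number comparing convective to diffusive platelet
    transport over a region of size `length` (units: 1/s, cm, cm^2/s)."""
    return shear_rate * length ** 2 / diffusivity

def growth_regime(shear_rate, length, diffusivity):
    """Crude classifier for the two growth regimes; the threshold of 1
    is illustrative."""
    if peclet(shear_rate, length, diffusivity) > 1.0:
        return "convection-controlled"
    return "diffusion-controlled"
```

Because the Péclet number grows with thrombus size, a thrombus can cross from diffusion-controlled to convection-controlled growth as it develops, consistent with the regime dependence on flow parameters and thrombus location noted above.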
QSAR modeling of cumulative environmental end-points for the prioritization of hazardous chemicals.
Gramatica, Paola; Papa, Ester; Sangion, Alessandro
2018-01-24
The hazard of chemicals in the environment is inherently related to the molecular structure and derives simultaneously from various chemical properties/activities/reactivities. Models based on Quantitative Structure Activity Relationships (QSARs) are useful to screen, rank and prioritize chemicals that may have an adverse impact on humans and the environment. This paper reviews a selection of QSAR models (based on theoretical molecular descriptors) developed for cumulative multivariate endpoints, which were derived by mathematical combination of multiple effects and properties. The cumulative end-points provide an integrated holistic point of view to address environmentally relevant properties of chemicals.
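One common way to build such a cumulative multivariate endpoint (a generic sketch; the reviewed models combine theoretical molecular descriptors in more sophisticated ways) is to min-max scale each predicted endpoint and take a weighted sum as a ranking index:

```python
def hazard_index(endpoints, weights=None):
    """Cumulative hazard score per chemical: rows are chemicals, columns
    are predicted endpoint scores (higher = more hazardous). Each column
    is min-max scaled, then combined as a weighted sum for ranking."""
    n_end = len(endpoints[0])
    if weights is None:
        weights = [1.0 / n_end] * n_end
    scaled_cols = []
    for col in zip(*endpoints):
        lo, hi = min(col), max(col)
        scaled_cols.append([(x - lo) / (hi - lo) if hi > lo else 0.0
                            for x in col])
    return [sum(w * c[i] for w, c in zip(weights, scaled_cols))
            for i in range(len(endpoints))]
```

Sorting chemicals by this index gives the screening and prioritization ranking the review describes.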
Crellin, Nadia E.; Orrell, Martin; McDermott, Orii; Charlesworth, Georgina
2014-01-01
Objectives: This review aims to explore the role of self-efficacy (SE) in the health-related quality of life (QoL) of family carers of people with dementia. Methods: A systematic review of literature identified a range of qualitative and quantitative studies. Search terms related to caring, SE, and dementia. Narrative synthesis was adopted to synthesise the findings. Results: Twenty-two studies met the full inclusion criteria; these included 17 quantitative, four qualitative, and one mixed-method study. A model describing the role of task/domain-specific SE beliefs in family carer health-related QoL was constructed. This model was informed by review findings and discussed in the context of existing conceptual models of carer adaptation and empirical research. Review findings offer support for the application of SE theory to caring and for the two-factor view of carer appraisals and well-being. Findings do not support the independence of the negative and positive pathways. The review was valuable in highlighting methodological challenges confronting this area of research, particularly the conceptualisation and measurement issues surrounding both SE and health-related QoL. Conclusions: The model might have theoretical implications in guiding future research and advancing theoretical models of caring. It might also have clinical implications in facilitating the development of carer support services aimed at improving SE. The review highlights the need for future research, particularly longitudinal research, and further exploration of domain/task-specific SE beliefs, the influence of carer characteristics, and other mediating/moderating variables. PMID:24943873
ERIC Educational Resources Information Center
Detering, Brad
2017-01-01
This research study, grounded in the theoretical framework of education change, used the Concerns-Based Adoption Model of change to examine the concerns of Illinois high school teachers and administrators regarding the implementation of 1:1 computing programs. A quantitative study of educators investigated the stages of concern and the mathematics…
NASA Astrophysics Data System (ADS)
Cottin, Hervé; Gazeau, Marie-Claire; Chaquin, Patrick; Raulin, François; Bénilan, Yves
2001-12-01
The ubiquity of molecular material in the universe, from hydrogen to complex organic matter, is the result of intermixed physicochemical processes that have occurred throughout history. In particular, the gas/solid/gas phase transformation cycle plays a key role in chemical evolution of organic matter from the interstellar medium to planetary systems. This paper focuses on two examples that are representative of the diversity of environments where such transformations occur in the Solar System: (1) the photolytic evolution from gaseous to solid material in methane containing planetary atmospheres and (2) the degradation of high molecular weight compounds into gas phase molecules in comets. We are currently developing two programs which couple experimental and theoretical studies. The aim of this research is to provide data necessary to build models in order to better understand (1) the photochemical evolution of Titan's atmosphere, through a laboratory program to determine quantitative spectroscopic data on long carbon chain molecules (polyynes) obtained in the SCOOP program (French acronym for Spectroscopy of Organic Compounds Oriented for Planetology), and (2) the extended sources in comets, through a laboratory program of quantitative studies of photochemical and thermal degradation processes on relevant polymers (e.g., Polyoxymethylene) by the SEMAPhOrE Cometaire program (French acronym for Experimental Simulation and Modeling Applied to Organic Chemistry in Cometary Environment).
Lerner, Eitan; Ploetz, Evelyn; Hohlbein, Johannes; Cordes, Thorben; Weiss, Shimon
2016-07-07
Single-molecule, protein-induced fluorescence enhancement (PIFE) serves as a molecular ruler at molecular distances inaccessible to other spectroscopic rulers such as Förster-type resonance energy transfer (FRET) or photoinduced electron transfer. In order to provide two simultaneous measurements of two distances on different molecular length scales for the analysis of macromolecular complexes, we and others recently combined measurements of PIFE and FRET (PIFE-FRET) on the single molecule level. PIFE relies on steric hindrance of the fluorophore Cy3, which is covalently attached to a biomolecule of interest, to rotate out of an excited-state trans isomer to the cis isomer through a 90° intermediate. In this work, we provide a theoretical framework that accounts for relevant photophysical and kinetic parameters of PIFE-FRET, show how this framework allows the extraction of the fold-decrease in isomerization mobility from experimental data, and show how these results provide information on changes in the accessible volume of Cy3. The utility of this model is then demonstrated for experimental results on PIFE-FRET measurement of different protein-DNA interactions. The proposed model and extracted parameters could serve as a benchmark to allow quantitative comparison of PIFE effects in different biological systems.
Theoretical study of solvent effects on the coil-globule transition
NASA Astrophysics Data System (ADS)
Polson, James M.; Opps, Sheldon B.; Abou Risk, Nicholas
2009-06-01
The coil-globule transition of a polymer in a solvent has been studied using Monte Carlo simulations of a single chain subject to intramolecular interactions as well as a solvent-mediated effective potential. This solvation potential was calculated using several different theoretical approaches for two simple polymer/solvent models, each employing hard-sphere chains and hard-sphere solvent particles as well as attractive square-well potentials between some interaction sites. For each model, collapse is driven by variation in a parameter which changes the energy mismatch between monomers and solvent particles. The solvation potentials were calculated using two fundamentally different methodologies, each designed to predict the conformational behavior of polymers in solution: (1) the polymer reference interaction site model (PRISM) theory and (2) a many-body solvation potential (MBSP) based on scaled particle theory introduced by Grayce [J. Chem. Phys. 106, 5171 (1997)]. For the PRISM calculations, two well-studied solvation monomer-monomer pair potentials were employed, each distinguished by the closure relation used in its derivation: (i) a hypernetted-chain (HNC)-type potential and (ii) a Percus-Yevick (PY)-type potential. The theoretical predictions were each compared to results obtained from explicit-solvent discontinuous molecular dynamics simulations on the same polymer/solvent model systems [J. Chem. Phys. 125, 194904 (2006)]. In each case, the variation in the coil-globule transition properties with solvent density is mostly qualitatively correct, though the quantitative agreement between theory and simulation is typically poor. The HNC-type potential yields results that are more qualitatively consistent with simulation. The conformational behavior of the polymer upon collapse predicted by the MBSP approach is quantitatively correct for low and moderate solvent densities but is increasingly less accurate for higher densities. At high solvent densities, the PRISM-HNC and MBSP approaches tend to overestimate, while the PRISM-PY approach underestimates, the tendency of the solvent to drive polymer collapse.
Succurro, Antonella; Moejes, Fiona Wanjiku; Ebenhöh, Oliver
2017-08-01
The last few years have seen the advancement of high-throughput experimental techniques that have produced an extraordinary amount of data. Bioinformatics and statistical analyses have become instrumental to interpreting the information coming from, e.g., sequencing data and often motivate further targeted experiments. The broad discipline of "computational biology" extends far beyond the well-established field of bioinformatics, but it is our impression that more theoretical methods such as the use of mathematical models are not yet as well integrated into the research studying microbial interactions. The empirical complexity of microbial communities presents challenges that are difficult to address with in vivo / in vitro approaches alone, and with microbiology developing from a qualitative to a quantitative science, we see stronger opportunities arising for interdisciplinary projects integrating theoretical approaches with experiments. Indeed, the addition of in silico experiments, i.e., computational simulations, has a discovery potential that is, unfortunately, still largely underutilized and unrecognized by the scientific community. This minireview provides an overview of mathematical models of natural ecosystems and emphasizes that one critical point in the development of a theoretical description of a microbial community is the choice of problem scale. Since this choice is mostly dictated by the biological question to be addressed, in order to employ theoretical models fully and successfully it is vital to implement an interdisciplinary view at the conceptual stages of the experimental design. Copyright © 2017 Succurro et al.
Qualitative research and the epidemiological imagination: a vital relationship.
Popay, J
2003-01-01
This paper takes as its starting point the assumption that the 'Epidemiological Imagination' has a central role to play in the future development of policies and practice to improve population health and reduce health inequalities within and between states, but suggests that by neglecting the contribution that qualitative research can make, epidemiology is failing to deliver this potential. The paper briefly considers what qualitative research is, touching on epistemological questions (what type of "knowledge" is generated) and questions of method (what approaches to data collection, analysis and interpretation are involved). Following this, the paper presents two different models of the relationship between qualitative and quantitative research. The Enhancement Model (which assumes that qualitative research findings add something extra to the findings of quantitative research) suggests three related "roles" for qualitative research: generating hypotheses to be tested by quantitative research, helping to construct more sophisticated measures of social phenomena, and explaining unexpected findings from quantitative research. In contrast, the Epistemological Model suggests that qualitative research is equal to, but different from, quantitative research, making a unique contribution by: researching parts other research approaches cannot reach, increasing understanding by adding conceptual and theoretical depth to knowledge, shifting the balance of power between researchers and researched, and challenging traditional epidemiological ways of "knowing" the social world. The paper illustrates these different types of contribution with examples of qualitative research and finally discusses ways in which the "trustworthiness" of qualitative research can be assessed.
Rearrangement of valence neutrons in the neutrinoless double-β decay of 136Xe
Szwec, S. V.; Kay, B. P.; Cocolios, T. E.; ...
2016-11-15
Here, a quantitative description of the change in ground-state neutron occupancies between 136Xe and 136Ba, the initial and final states in the neutrinoless double-β decay of 136Xe, has been extracted from precision measurements of the cross sections of single-neutron-adding and -removing reactions. Comparisons are made to recent theoretical calculations of the same properties using various nuclear-structure models. These are the same calculations used to determine the magnitude of the nuclear matrix elements for the process, which at present disagree with each other by factors of 2 or 3. The experimental neutron occupancies show some disagreement with the theoretical calculations.
Borgese, L; Salmistraro, M; Gianoncelli, A; Zacco, A; Lucchini, R; Zimmerman, N; Pisani, L; Siviero, G; Depero, L E; Bontempi, E
2012-01-30
This work is presented as an improvement of a recently introduced method for airborne particulate matter (PM) filter analysis [1]. X-ray standing wave (XSW) and total reflection X-ray fluorescence (TXRF) measurements were performed with new dedicated laboratory instrumentation. The main advantage of performing both XSW and TXRF is the possibility to distinguish the nature of the sample: whether it is a small-droplet dry residue, a thin-film-like sample, or a bulk sample. Another advantage is the possibility to select the angle of total reflection for the TXRF measurements. Finally, the possibility to switch the X-ray source (for example, changing the anode from Mo to Cu) allows lighter and heavier elements to be measured with greater accuracy. The aim of the present study is to lay the theoretical foundation of the newly proposed method for quantitative analysis of airborne PM filters, improving the accuracy and efficiency of quantification by means of an external standard. The theoretical model presented and discussed demonstrates that airborne PM filters can be considered as thin layers. A set of reference samples was prepared in the laboratory and used to obtain a calibration curve. Our results demonstrate that the proposed method for quantitative analysis of airborne PM filters is affordable and reliable, without the need to digest filters to obtain quantitative chemical analysis, and that the use of XSW improves the accuracy of TXRF analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
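The external-standard step reduces to an ordinary least-squares calibration line fitted to the reference samples; the sketch below is generic (intensity and concentration values are hypothetical, and the line is fitted directly as concentration versus intensity so it can be applied to unknowns):

```python
def fit_calibration(intensities, concentrations):
    """Ordinary least-squares line conc = a * intensity + b fitted to
    reference samples of known concentration (external standards)."""
    n = len(intensities)
    mx = sum(intensities) / n
    my = sum(concentrations) / n
    sxx = sum((x - mx) ** 2 for x in intensities)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(intensities, concentrations))
    a = sxy / sxx
    return a, my - a * mx
```

Once `a` and `b` are known, a measured fluorescence intensity from a PM filter is converted to a concentration as `a * intensity + b`.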
On the velocity distribution of ion jets during substorm recovery
NASA Technical Reports Server (NTRS)
Birn, J.; Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Paschmann, G.
1981-01-01
The velocity distribution of earthward jetting ions that are observed principally during substorm recovery by satellites at approximately 15-35 earth radii in the magnetotail is quantitatively compared with two different theoretical models - the 'adiabatic deformation' of an initially flowing Maxwellian moving into higher magnetic field strength (model A) and the field-aligned electrostatic acceleration of an initially nonflowing isotropic Maxwellian including adiabatic deformation effects (model B). The assumption is made that the ions are protons or, more generally, that they consist of only one species. It is found that both models can explain the often observed concave-convex shape of isodensity contours of the distribution function.
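Model A's ingredients can be sketched under textbook assumptions (units with k_B = m = 1; conservation of the first adiabatic invariant scales the perpendicular temperature with the field-strength ratio). This is a standard simplification, not the paper's full treatment:

```python
import math

def model_a_distribution(vpar, vperp, u, t_par, t_perp0, b_ratio):
    """Bi-Maxwellian after adiabatic transport into a stronger field
    (units with k_B = m = 1). Conservation of the magnetic moment
    scales the perpendicular temperature by the field-strength ratio,
    t_perp = t_perp0 * b_ratio, while the field-aligned bulk flow u is
    retained."""
    t_perp = t_perp0 * b_ratio
    norm = 1.0 / ((2.0 * math.pi) ** 1.5 * math.sqrt(t_par) * t_perp)
    return norm * math.exp(-(vpar - u) ** 2 / (2.0 * t_par)
                           - vperp ** 2 / (2.0 * t_perp))
```

Evaluating f on a (vpar, vperp) grid and tracing its isodensity contours is how model distributions of this kind can be compared with the observed contour shapes.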
Field-theoretic approach to fluctuation effects in neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buice, Michael A.; Cowan, Jack D.; Mathematics Department, University of Chicago, Chicago, Illinois 60637
A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
A model for the characterization of the spatial properties in vestibular neurons
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Bush, G. A.; Perachio, A. A.
1992-01-01
Quantitative study of the static and dynamic response properties of some otolith-sensitive neurons has been difficult in the past partly because their responses to different linear acceleration vectors exhibited no "null" plane and a dependence of phase on stimulus orientation. The theoretical formulation of the response ellipse provides a quantitative way to estimate the spatio-temporal properties of such neurons. Its semi-major axis gives the direction of the polarization vector (i.e., direction of maximal sensitivity) and it estimates the neuronal response for stimulation along that direction. In addition, the semi-minor axis of the ellipse provides an estimate of the neuron's maximal sensitivity in the "null" plane. In this paper, extracellular recordings from otolith-sensitive vestibular nuclei neurons in decerebrate rats were used to demonstrate the practical application of the method. The experimentally observed gain and phase dependence on the orientation angle of the acceleration vector in a head-horizontal plane was described and satisfactorily fit by the response ellipse model. In addition, the model satisfactorily fits neuronal responses in three-dimensions and unequivocally demonstrates that the response ellipse formulation is the general approach to describe quantitatively the spatial properties of vestibular neurons.
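The response-ellipse geometry follows from representing the neuron's gain and phase along two orthogonal stimulus axes as complex numbers A and B: the response to a stimulus at angle theta is z(theta) = A cos(theta) + B sin(theta), and writing z as a sum of counter-rotating phasors gives the semi-axes in closed form. A minimal sketch of that computation:

```python
def response_ellipse(A, B):
    """Semi-axes of the ellipse traced by z(theta) = A*cos(theta) +
    B*sin(theta) as the acceleration direction theta rotates; A and B
    are complex gains (gain and phase) along two orthogonal axes.
    Decomposing z into counter-rotating phasors of amplitudes r1, r2
    gives semi-major r1 + r2 and semi-minor |r1 - r2|."""
    r1 = abs(A - 1j * B) / 2.0  # counter-rotating phasor amplitudes
    r2 = abs(A + 1j * B) / 2.0
    return r1 + r2, abs(r1 - r2)  # (semi-major, semi-minor)
```

A purely real pair (B = 0) gives a degenerate ellipse, i.e., a neuron with a true null plane; equal gains with a 90° phase lag (B = iA) give a circle, i.e., no null plane and a response phase that depends on stimulus orientation, as described above.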
QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.
Nilsen, Vegard; Wyller, John
2016-01-01
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
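The pgf formulation is compact enough to sketch: with single-hit parameter r, the risk is 1 - G(1 - r), where G is the pgf of the dose distribution. Below, the Poisson case and the negative binomial case (a clustered member of the stuttering Poisson family) are compared at identical mean dose; this illustrates the paper's qualitative conclusion, not its exact models:

```python
import math

def poisson_risk(mu, r):
    """Single-hit infection risk with Poisson-distributed doses of mean
    mu: risk = 1 - G(1 - r) with pgf G(s) = exp(mu * (s - 1))."""
    return 1.0 - math.exp(-mu * r)

def negbin_risk(mu, k, r):
    """Single-hit risk with negative-binomial doses (mean mu, shape k),
    a clustered special case of the stuttering Poisson family:
    G(s) = (1 + mu * (1 - s) / k) ** (-k)."""
    return 1.0 - (1.0 + mu * r / k) ** (-k)
```

At equal mean dose the clustered distribution gives the lower risk, and the negative binomial approaches the Poisson as k grows large, matching the formal result stated above.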
Batista Ferrer, Harriet; Audrey, Suzanne; Trotter, Caroline; Hickman, Matthew
2015-01-01
Background Interventions to increase uptake of Human Papillomavirus (HPV) vaccination by young women may be more effective if they are underpinned by an appropriate theoretical model or framework. The aims of this review were: to describe the theoretical models or frameworks used to explain behaviours in relation to HPV vaccination of young women, and to consider the appropriateness of the theoretical models or frameworks used for informing the development of interventions to increase uptake. Methods Primary studies were identified through a comprehensive search of databases from inception to December 2013. Results Thirty-four relevant studies were identified, of which 31 incorporated psychological health behaviour models or frameworks and three used socio-cultural models or theories. The primary studies used a variety of approaches to measure a diverse range of outcomes in relation to the behaviours of professionals, parents, and young women. The majority appeared to use theory appropriately throughout. About half of the quantitative studies presented data in relation to goodness-of-fit tests and the proportion of the variability in the data. Conclusion Due to diverse approaches and inconsistent findings across studies, the current contribution of theory to understanding and promoting HPV vaccination uptake is difficult to assess. Ecological frameworks encourage the integration of individual and social approaches by encouraging exploration of the intrapersonal, interpersonal, organisational, community and policy levels when examining public health issues. Given the small number of studies using such an approach, combined with the importance of these factors in predicting behaviour, more research in this area is warranted. PMID:26314783
Qualitative and Quantitative Distinctions in Personality Disorder
Wright, Aidan G. C.
2011-01-01
The “categorical-dimensional debate” has catalyzed a wealth of empirical advances in the study of personality pathology. However, this debate is merely one articulation of a broader conceptual question regarding whether to define and describe psychopathology as a quantitatively extreme expression of normal functioning or as qualitatively distinct in its process. In this paper I argue that dynamic models of personality (e.g., object-relations, cognitive-affective processing system) offer the conceptual scaffolding to reconcile these seemingly incompatible approaches to characterizing the relationship between normal and pathological personality. I propose that advances in personality assessment that sample behavior and experiences intensively provide the empirical techniques, whereas interpersonal theory offers an integrative theoretical framework, for accomplishing this goal. PMID:22804676
Reducible or irreducible? Mathematical reasoning and the ontological method.
Fisher, William P
2010-01-01
Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging qualitative and quantitative value. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science are explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.
Bridging scales in the evolution of infectious disease life histories: theory.
Day, Troy; Alizon, Samuel; Mideo, Nicole
2011-12-01
A significant goal of recent theoretical research on pathogen evolution has been to develop theory that bridges within- and between-host dynamics. The main approach used to date is one that nests within-host models of pathogen replication in models for the between-host spread of infectious diseases. Although this provides an elegant approach, it nevertheless suffers from some practical difficulties. In particular, the information required to satisfactorily model the mechanistic details of the within-host dynamics is not often available. Here, we present a theoretical approach that circumvents these difficulties by quantifying the relevant within-host factors in an empirically tractable way. The approach is closely related to quantitative genetic models for function-valued traits, and it also allows for the prediction of general characteristics of disease life history, including the timing of virulence, transmission, and host recovery. In a companion paper, we illustrate the approach by applying it to data from a model system of malaria. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
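The bridging idea can be sketched as a between-host fitness functional of an empirically measurable within-host trajectory: the transmission rate beta(t) integrated against the probability that the infection is still ongoing. This is a generic nested-model fitness expression (the constant-rate example in the comments is illustrative), not the authors' quantitative-genetic machinery:

```python
import math

def lifetime_transmission(beta, clearance, dt=0.01, t_end=30.0):
    """R0-style fitness: integral of beta(t) * S(t), where S(t) is the
    probability the infection has neither been cleared nor killed the
    host by time t (clearance(t) lumps recovery and virulence-induced
    mortality). beta and clearance are functions of time since
    infection, e.g. measured within-host profiles."""
    r0, survival, t = 0.0, 1.0, 0.0
    while t < t_end:
        r0 += beta(t) * survival * dt
        survival *= math.exp(-clearance(t) * dt)
        t += dt
    return r0
```

With constant rates beta = 2 and clearance = 1, the integral approaches the familiar ratio beta/clearance = 2; shifting transmission later in the infection (while clearance keeps running) lowers fitness, which is how the timing of virulence and transmission enters such predictions.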
Li, Jia; Lu, Hongzhou; Xu, Zhenming; Zhou, Yaohe
2008-06-15
Waste printed circuit boards (PCBs) are increasing worldwide. Corona electrostatic separation (CES) is an effective and environmentally friendly way to recycle resources from waste PCBs. The aim of this paper is to analyze the main factor (rotational speed) that affects the efficiency of CES from the point of view of electrostatics and mechanics. A quantitative method for analyzing the effect of rotational speed was studied, and a model for separating flat nonmetal particles in waste PCBs was established. The concepts of "charging critical rotational speed" and "detaching critical rotational speed" were introduced. Experiments with waste PCBs verified the theoretical model, and the experimental results were in good agreement with it. The results indicated that the purity and recovery percentage of materials reached a good level when the rotational speed was about 70 rpm, and that the critical rotational speed of small particles was higher than that of big particles. The model can guide the selection of operating parameters and the design of CES equipment, which are needed for the development of any new application of the electrostatic separation method.
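The "detaching critical rotational speed" admits a simple hedged sketch: a particle of charge q resting on the grounded roll is held by its electrostatic image force and is thrown off once the centrifugal force exceeds it. Gravity and adhesion are ignored, and the effective charge-to-surface gap is a hypothetical parameter; this is a force-balance illustration, not the paper's full model:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def detaching_critical_speed(mass, charge, roll_radius, gap):
    """Roll speed (rpm) above which the centrifugal force m*w^2*R on a
    charged particle exceeds the electrostatic image force pinning it
    to the grounded roll, q^2 / (16*pi*eps0*gap^2). `gap` is an
    effective charge-to-surface distance (a hypothetical parameter);
    gravity and adhesion are ignored in this sketch."""
    f_image = charge ** 2 / (16.0 * math.pi * EPS0 * gap ** 2)
    omega = math.sqrt(f_image / (mass * roll_radius))  # rad/s
    return omega * 60.0 / (2.0 * math.pi)
```

At fixed geometry, more charge raises the critical speed and more mass lowers it, which is the basic trade-off the paper's quantitative analysis formalizes.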
Xu, Liyuan; Gao, Haoshi; Li, Liangxing; Li, Yinnong; Wang, Liuyun; Gao, Chongkai; Li, Ning
2016-12-23
The effective permeability coefficient is of theoretical and practical importance in evaluating the bioavailability of drug candidates. However, most methods currently used to measure this coefficient are expensive and time-consuming. In this paper, we address these problems by proposing a new measurement method based on microemulsion liquid chromatography. First, the parallel artificial membrane permeability assay (PAMPA) model was used to determine the effective permeability of drugs so that quantitative retention-activity relationships could be established and used to optimize the microemulsion liquid chromatography. The most effective microemulsion system used a mobile phase of 6.0% (w/w) Brij35, 6.6% (w/w) butanol, 0.8% (w/w) octanol, and 86.6% (w/w) phosphate buffer (pH 7.4). Next, support vector machines and back-propagation neural networks were employed to develop a quantitative retention-activity relationship model associated with the optimal microemulsion system and to improve the prediction ability. Finally, an adequate correlation between experimental and predicted values was computed to verify the performance of the optimal model. The results indicate that microemulsion liquid chromatography can serve as a possible alternative to the PAMPA method for determination of high-throughput permeability and simulation of biological processes. Copyright © 2016. Published by Elsevier B.V.
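A hedged, minimal stand-in for the modelling step (a straight line in place of the study's SVM and back-propagation networks; names and data shapes are illustrative): fit log permeability against log retention factor, then check the experimental-versus-predicted correlation:

```python
def fit_qrar(log_k, log_pe):
    """Linear quantitative retention-activity relationship
    log Pe = a * log k + b, fitted by ordinary least squares; a simple
    stand-in for the SVM and back-propagation models of the study."""
    n = len(log_k)
    mx, my = sum(log_k) / n, sum(log_pe) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(log_k, log_pe))
         / sum((x - mx) ** 2 for x in log_k))
    return a, my - a * mx

def pearson_r(xs, ys):
    """Correlation between experimental and predicted values, used to
    verify model performance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den
```

Once calibrated on PAMPA-measured permeabilities, such a model predicts the permeability of new compounds from a single chromatographic retention measurement, which is the throughput gain claimed above.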
Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong
2012-01-01
Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary apparently and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal and vertical displacements. After detailed examinations and comparative studies between measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensor monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
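The spirit of a numerically integrated equivalent-loop calculation can be sketched with the Neumann double line integral for two coaxial circular loops; this is an idealized textbook geometry, not the II-type sensor's actual coil arrangement, and it requires a non-zero axial separation d:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def mutual_inductance(a, b, d, n=400):
    """Mutual inductance (henries) of two coaxial circular loops of
    radii a and b separated axially by d (d > 0), by direct numerical
    integration of the Neumann double line integral
    M = mu0/(4*pi) * oint oint (dl1 . dl2) / |r12|."""
    dphi = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        p1 = i * dphi
        for j in range(n):
            p2 = j * dphi
            rx = a * math.cos(p1) - b * math.cos(p2)
            ry = a * math.sin(p1) - b * math.sin(p2)
            dist = math.sqrt(rx * rx + ry * ry + d * d)
            total += a * b * math.cos(p1 - p2) / dist  # dl1 . dl2 term
    return MU0 / (4.0 * math.pi) * total * dphi * dphi
```

Mutual inductance falls off monotonically with axial separation, which is what allows a measured mutual-inductance voltage to be inverted into a displacement estimate in sensors of this family.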
The Effect of Molecular Orientation to Solid-Solid and Melting Transitions
NASA Astrophysics Data System (ADS)
Yazici, Mustafa; Özgan, Şükrü
The thermodynamics of solid-solid and solid-liquid transitions are investigated, taking into account the number of molecular orientations. The variations of the positional and orientational orders with the reduced temperature are studied. It is found that the orientational order parameter is very sensitive to the number of allowed orientations. The reduced transition temperatures, volume changes and entropy changes of the phase transitions, as well as theoretical phase diagrams, are obtained. The entropy changes of melting transitions for different numbers of allowed orientations of the present model are compared with theoretical results and some experimental data. The quantitative predictions of the model are compared with experimental results for plastic crystals, and the agreement between the predictions and the experiments is reasonably good. Also, different numbers of allowed orientations D correspond to different experimental results: HI, HBr, H2S for D = 2; HBr, CCl4, HI for D = 4; C2H12 for D = 6; CH4, PH3 for D = 20.
NASA Technical Reports Server (NTRS)
El-Kaddah, N.; Szekely, J.
1982-01-01
A mathematical representation of the electromagnetic force field and the fluid flow field in a coreless induction furnace is presented. The fluid flow field was represented by the axisymmetric turbulent Navier-Stokes equation containing the electromagnetic body force term. The electromagnetic body force field was calculated using a technique of mutual inductances. The kappa-epsilon model was employed to evaluate the turbulent viscosity, and the resultant differential equations were solved numerically. Theoretically predicted velocity fields are in reasonably good agreement with the experimental measurements reported by Hunt and Moore; furthermore, the agreement regarding the turbulent intensities is essentially quantitative. These results indicate that the kappa-epsilon model provides a good engineering representation of the turbulent recirculating flows occurring in induction furnaces. At this stage it is not clear whether the discrepancies between measurements and predictions, which were not very great in any case, are attributable to the model or to the measurement techniques employed.
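For reference, the kappa-epsilon closure mentioned above obtains the eddy viscosity from the turbulence kinetic energy k and its dissipation rate epsilon; a minimal sketch with made-up values:

```python
# Standard k-epsilon eddy-viscosity relation (values below are invented,
# not from the paper): mu_t = rho * C_mu * k^2 / eps, with C_mu = 0.09.
def turbulent_viscosity(rho, k, eps, c_mu=0.09):
    """Eddy viscosity mu_t = rho * C_mu * k**2 / eps."""
    return rho * c_mu * k**2 / eps

# hypothetical molten-metal values: rho in kg/m^3, k in m^2/s^2, eps in m^2/s^3
mu_t = turbulent_viscosity(7000.0, 1e-3, 1e-3)
print(round(mu_t, 4))  # 0.63
```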
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
Coherent beam combination of fiber lasers with a strongly confined waveguide: numerical model.
Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin
2012-08-20
Self-imaging properties of fiber lasers in a strongly confined waveguide (SCW) and their application to coherent beam combination (CBC) are studied theoretically. Analytical formulas are derived for the positions, amplitudes, and phases of the N images at the end of an SCW, which is important for the quantitative analysis of waveguide CBC. The formulas are verified against experimental results and numerical simulation with a finite-difference beam propagation method (BPM). The error of our analytical formulas is less than 6%, which can be reduced to less than 1.5% when the Goos-Hänchen penetration depth is considered. Based on the theoretical model and BPM, we studied the combination of two laser beams in an SCW. The effects of the waveguide refractive index and Gaussian beam waist are studied. We also simulated the CBC of nine and 16 fiber lasers, and a single beam without side lobes was achieved.
[Risk, uncertainty and ignorance in medicine].
Rørtveit, G; Strand, R
2001-04-30
Exploration of healthy patients' risk factors for disease has become a major medical activity. The rationale behind primary prevention through exploration and therapeutic risk reduction is inseparable from the theoretical assumption that every form of uncertainty can be expressed as risk. Distinguishing "risk" (quantitative probabilities in a known sample space), "strict uncertainty" (when the sample space is known, but probabilities of events cannot be quantified) and "ignorance" (when the sample space is not fully known), a typical clinical situation (primary risk of coronary disease) is analysed. It is shown how strict uncertainty and sometimes ignorance can be present, in which case the orthodox decision-theoretical rationale for treatment breaks down. For use in such cases, a different ideal model of rationality is proposed, focusing on the patient's considered reasons. This model has profound implications for the current understanding of medical professionalism as well as for the design of clinical guidelines.
Direct measurement of Kramers turnover with a levitated nanoparticle
NASA Astrophysics Data System (ADS)
Rondin, Loïc; Gieseler, Jan; Ricci, Francesco; Quidant, Romain; Dellago, Christoph; Novotny, Lukas
2017-12-01
Understanding the thermally activated escape from a metastable state is at the heart of important phenomena such as the folding dynamics of proteins, the kinetics of chemical reactions or the stability of mechanical systems. In 1940, Kramers calculated escape rates both in the high damping and low damping regimes, and suggested that the rate must have a maximum for intermediate damping. This phenomenon, today known as the Kramers turnover, has triggered important theoretical and numerical studies. However, as yet, there is no direct and quantitative experimental verification of this turnover. Using a nanoparticle trapped in a bistable optical potential, we experimentally measure the nanoparticle's transition rates for variable damping and directly resolve the Kramers turnover. Our measurements are in agreement with an analytical model that is free of adjustable parameters. The levitated nanoparticle presented here is a versatile experimental platform for studying and simulating a wide range of stochastic processes and testing theoretical models and predictions.
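The turnover can be reproduced qualitatively with a textbook bridging of Kramers' two limiting rates; the parameters below are arbitrary and this is our own crude interpolation, not the paper's parameter-free analytical model:

```python
# Hedged sketch of the Kramers turnover: the escape rate grows linearly with
# damping gamma in the energy-diffusion (weak-damping) limit, falls off as
# 1/gamma in the spatial-diffusion (strong-damping) limit, and is bridged
# here by a simple harmonic-mean formula. All parameters are arbitrary.
import numpy as np

def kramers_rate(gamma, omega0=1.0, omegab=1.0, beta_Eb=5.0):
    prefac = omega0 / (2.0 * np.pi) * np.exp(-beta_Eb)       # TST prefactor
    k_low = gamma * beta_Eb * prefac                         # energy-diffusion limit
    k_high = (np.sqrt(gamma**2 / 4 + omegab**2) - gamma / 2) / omegab * prefac
    return 1.0 / (1.0 / k_low + 1.0 / k_high)                # crude bridging formula

gammas = np.logspace(-3, 3, 601)
rates = kramers_rate(gammas)
g_star = gammas[np.argmax(rates)]
print(f"escape rate peaks near gamma = {g_star:.2f}: the Kramers turnover")
```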
NASA Astrophysics Data System (ADS)
Takakura, T.; Yanagi, I.; Goto, Y.; Ishige, Y.; Kohara, Y.
2016-03-01
We developed a resistive-pulse sensor with a solid-state pore and measured the latex agglutination of submicron particles induced by antigen-antibody interaction for single-molecule detection of proteins. We fabricated the pore based on numerical simulation to clearly distinguish between monomer and dimer latex particles. By measuring single dimers agglutinated in the single-molecule regime, we detected single human alpha-fetoprotein molecules. Adjusting the initial particle concentration improves the limit of detection (LOD) to 95 fmol/l. We established a theoretical model of the LOD by combining the reaction kinetics and the counting statistics to explain the effect of initial particle concentration on the LOD. The theoretical model shows how to improve the LOD quantitatively. The single-molecule detection studied here indicates the feasibility of implementing a highly sensitive immunoassay by a simple measurement method using resistive-pulse sensing.
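The counting-statistics half of such an LOD model can be sketched as a simple 3-sigma criterion on Poisson-distributed dimer counts; the numbers are hypothetical and the reaction-kinetics half of the paper's model is not included:

```python
# Poisson counting statistics for a detection threshold: a signal is
# distinguishable from the blank when its expected count exceeds the blank
# mean by 3 standard deviations (sigma = sqrt(mean) for Poisson counts).
import math

def counting_lod_threshold(blank_mean):
    """Smallest expected count distinguishable from the blank at 3 sigma."""
    return blank_mean + 3.0 * math.sqrt(blank_mean)

# hypothetical: 25 non-specifically agglutinated dimers counted per blank run
threshold = counting_lod_threshold(25.0)
print(threshold)  # 40.0
```

Lowering the blank (non-specific agglutination) count lowers this threshold, which is one way adjusting the initial particle concentration can improve the LOD.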
Lannert, Brittany K
2015-07-01
Vicarious traumatization of nonvictim members of communities targeted by bias crimes has been suggested by previous qualitative studies and often dominates public discussion following bias events, but proximal and distal responses of community members have yet to be comprehensively modeled, and quantitative research on vicarious responses is scarce. This comprehensive review integrates theoretical and empirical literatures in social, clinical, and physiological psychology in the development of a model of affective, cognitive, and physiological responses of lesbian, gay, and bisexual individuals upon exposure to information about bias crimes. Extant qualitative research in vicarious response to bias crimes is reviewed in light of theoretical implications and methodological limitations. Potential pathways to mental health outcomes are outlined, including accumulative effects of anticipatory defensive responding, multiplicative effects of minority stress, and putative traumatogenic physiological and cognitive processes of threat. Methodological considerations, future research directions, and clinical implications are also discussed. © The Author(s) 2014.
Rotational and frictional dynamics of the slamming of a door
NASA Astrophysics Data System (ADS)
Klein, Pascal; Müller, Andreas; Gröber, Sebastian; Molz, Alexander; Kuhn, Jochen
2017-01-01
A theoretical and experimental investigation of the rotational dynamics, including friction, of a slamming door is presented. Based on existing work regarding different damping models for rotational and oscillatory motions, we examine different forms for the (angular) velocity dependence (ωn, n = 0, 1, 2) of the frictional force. An analytic solution is given when all three friction terms are present and several solutions for specific cases known from the literature are reproduced. The motion of a door is investigated experimentally using a smartphone, and the data are compared with the theoretical results. A laboratory experiment under more controlled conditions is conducted to gain a deeper understanding of the movement of a slammed door. Our findings provide quantitative evidence that damping models involving quadratic air drag are most appropriate for the slamming of a door. Examining this everyday example of a physical phenomenon increases student motivation, because they can relate it to their own personal experience.
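The damping models compared above can be written down directly; here is a minimal sketch (invented coefficients) that integrates the equation of motion with all three friction terms present:

```python
# Equation of motion for a slamming door with constant, linear and quadratic
# (air-drag) friction terms, i.e. the omega^n terms with n = 0, 1, 2:
#   I * domega/dt = -(c0 + c1*omega + c2*omega**2)   for omega > 0.
# Coefficients and the initial angular velocity are invented for illustration.
def simulate_door(omega0=3.0, I=5.0, c0=0.5, c1=0.2, c2=0.8, dt=1e-4):
    theta, omega, t = 0.0, omega0, 0.0
    while omega > 0.0:                     # all terms oppose the motion
        omega += -(c0 + c1 * omega + c2 * omega**2) / I * dt
        theta += max(omega, 0.0) * dt
        t += dt
    return theta, t

angle, t_stop = simulate_door()
print(f"door rotates {angle:.2f} rad, stopping after {t_stop:.2f} s")
```

At high angular velocity the quadratic term dominates the deceleration, which is consistent with the paper's finding that quadratic air drag matters for a slammed door.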
Stenberg, Nicola; Furness, Penny J
2017-03-01
The outcomes of self-management interventions are commonly assessed using quantitative measurement tools, and few studies ask people with long-term conditions to explain, in their own words, what aspects of the intervention they valued. In this Grounded Theory study, a Health Trainers service in the north of England was evaluated based on interviews with eight service-users. Open, focused, and theoretical coding led to the development of a preliminary model explaining participants' experiences and perceived impact of the service. The model reflects the findings that living well with a long-term condition encompassed social connectedness, changed identities, acceptance, and self-care. Health trainers performed four related roles that were perceived to contribute to these outcomes: conceptualizer, connector, coach, and champion. The evaluation contributes a grounded theoretical understanding of a personalized self-management intervention that emphasizes the benefits of a holistic approach to enable cognitive, behavioral, emotional, and social adjustments.
Outcome regimes of binary raindrop collisions
NASA Astrophysics Data System (ADS)
Testik, Firat Y.
2009-11-01
This study delineates the physical conditions responsible for the occurrence of the main outcome regimes (i.e., bounce, coalescence, and breakup) of binary drop collisions from a precipitation-microphysics perspective. Physical considerations based on the collision kinetic energy and the surface energies of the colliding drops lead to the development of a theoretical regime diagram for drop/raindrop collision outcomes in the We-p plane (We: Weber number; p: raindrop diameter ratio). This theoretical regime diagram is supported by laboratory observations of drop collisions using high-speed imaging. The results of this fundamental study provide new insights into the quantitative understanding of drop dynamics, with applications extending beyond precipitation microphysics. In particular, the results of this drop collision study are expected to give impetus to physics-based dynamic modeling of drop size distributions, which is essential for various modern engineering applications, including numerical modeling of the evolution of the raindrop size distribution in a rain shaft.
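The dimensionless bookkeeping behind such a regime diagram can be sketched as follows; the regime thresholds used here are illustrative placeholders, not the paper's boundaries:

```python
# The Weber number compares collision kinetic energy to surface energy.
# The thresholds below are placeholders for illustration only; the paper
# derives the actual regime boundaries in the We-p plane.
def weber_number(rho, v_rel, d, sigma):
    """We = rho * v^2 * d / sigma for a colliding drop of diameter d."""
    return rho * v_rel**2 * d / sigma

# water drops: rho = 1000 kg/m^3, sigma = 0.072 N/m, 1 mm drop at 2 m/s
We = weber_number(1000.0, 2.0, 1e-3, 0.072)
regime = "bounce" if We < 1 else ("coalescence" if We < 30 else "breakup")
print(f"We = {We:.1f} -> {regime}")
```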
Suppression of thermal frequency noise in erbium-doped fiber random lasers.
Saxena, Bhavaye; Bao, Xiaoyi; Chen, Liang
2014-02-15
Frequency and intensity noise are characterized for erbium-doped fiber (EDF) random lasers based on a Rayleigh distributed-feedback mechanism. We propose a theoretical model for the frequency noise of such random lasers using the property of random phase modulations from multiple scattering points in ultralong fibers. We find that the Rayleigh feedback suppresses the noise at higher frequencies by introducing a Lorentzian envelope over the thermal frequency noise of a long fiber cavity. The theoretical model and the measured frequency noise agree quantitatively with two fitting parameters. The random laser exhibits a noise level of 6 Hz²/Hz at 2 kHz, which is lower than those of conventional narrow-linewidth EDF fiber lasers and nonplanar ring laser oscillators (NPROs) by factors of 166 and 2, respectively. The frequency noise has a minimum value for an optimum length of the Rayleigh scattering fiber.
A quantum probability perspective on borderline vagueness.
Blutner, Reinhard; Pothos, Emmanuel M; Bruza, Peter
2013-10-01
The term "vagueness" describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier's theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as a quantum interference phenomenon. © 2013 Cognitive Science Society, Inc.
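The key quantum-probability move can be illustrated with a two-dimensional toy model of our own construction (not the authors' Hopfield-based model): sequential projection onto non-orthogonal "tall" and "not tall" subspaces yields a nonzero probability for a borderline contradiction, whereas classically P(A and not-A) = 0.

```python
# Toy quantum-probability model of a borderline contradiction (our own
# illustrative construction): the state of a borderline individual is
# projected onto "tall", then onto a non-orthogonal "not tall" axis.
import numpy as np

psi = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])     # borderline individual

P_tall = np.outer([1.0, 0.0], [1.0, 0.0])                  # projector onto "tall"
u_not = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])   # tilted "not tall" axis
P_not = np.outer(u_not, u_not)

amplitude = P_not @ (P_tall @ psi)                         # judge "tall", then "not tall"
p_contradiction = float(amplitude @ amplitude)
print(f"P(tall and not tall) = {p_contradiction:.3f}")     # nonzero, unlike classically
```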
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or over long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to the quantitative theoretical or numerical predictions available for the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
[Use of theories and models on papers of a Latin-American journal in public health, 2000 to 2004].
Cabrera Arana, Gustavo Alonso
2007-12-01
To characterize the frequency and type of use of theories or models in papers of a Latin-American journal in public health between 2000 and 2004. The Revista de Saúde Pública was chosen because of its history of uninterrupted periodic publication and its current impact on the scientific communication of the area. A standard procedure was applied for reading and classifying articles into an arbitrary four-level typology, according to the depth of the use of models or theoretical references to describe problems or issues, to formulate methods and to discuss results. Of the 482 articles included, 421 (87%) were research studies, 42 (9%) reviews or special contributions and 19 (4%) opinion texts or essays. Of the 421 research studies, 286 (68%) had a quantitative focus, 110 (26%) qualitative and 25 (6%) mixed. Reference to theories or models was uncommon: only 90 (19%) articles mentioned a theory or model. According to the depth of use, 29 (6%) were classified as type I, 9 (2%) as type II, 6 (1.3%) as type III and the 46 remaining texts (9.5%) as type IV. Reference to models was nine-fold more frequent than the use of theoretical references. The ideal use, type IV, occurred in one of every ten articles studied. It is important to make explicit the theoretical and model frameworks used when approaching topics, formulating hypotheses, designing methods and discussing findings in papers.
Hutchings, Maggie; Scammell, Janet; Quinney, Anne
2013-09-01
While there is growing evidence of theoretical perspectives adopted in interprofessional education, learning theories tend to foreground the individual, focusing on psycho-social aspects of individual differences and professional identity to the detriment of considering social-structural factors at work in social practices. Conversely socially situated practice is criticised for being context-specific, making it difficult to draw generalisable conclusions for improving interprofessional education. This article builds on a theoretical framework derived from earlier research, drawing on the dynamics of Dewey's experiential learning theory and Archer's critical realist social theory, to make a case for a meta-theoretical framework enabling social-constructivist and situated learning theories to be interlinked and integrated through praxis and reflexivity. Our current analysis is grounded in an interprofessional curriculum initiative mediated by a virtual community peopled by health and social care users. Student perceptions, captured through quantitative and qualitative data, suggest three major disruptive themes, creating opportunities for congruence and disjuncture and generating a model of zones of interlinked praxis associated with professional differences and identity, pedagogic strategies and technology-mediated approaches. This model contributes to a framework for understanding the complexity of interprofessional learning and offers bridges between individual and structural factors for engaging with the enablements and constraints at work in communities of practice and networks for interprofessional education.
2009-01-01
Army Institute of Research, 503 Robert Grant Avenue, Silver Spring, Maryland 20910, and Center for Advanced Studies and Department of Toxicology, Faculty of Military Health Sciences. Chem. Res. Toxicol., 10.1021/tx900192u, American Chemical Society. [Fragment: a model of GA-inhibited AChE derived from theoretical stereoelectronic and three-dimensional (3D) quantitative structure-activity relationship (QSAR) analysis.]
Influence of field dependent critical current density on flux profiles in high Tc superconductors
NASA Technical Reports Server (NTRS)
Takacs, S.
1990-01-01
The field distribution for superconducting cylinders and slabs with field dependent critical current densities in combined DC and AC magnetic fields and the corresponding magnetic fluxes are calculated. It is shown that all features of experimental magnetic-field profile measurements can be explained in the framework of field dependent critical current density. Even the quantitative agreement between the experimental and theoretical results using Kim's model is very good.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Daniel L., E-mail: dlsilva.physics@gmail.com, E-mail: deboni@ifsc.usp.br; Instituto de Física, Universidade de São Paulo, CP 66318, 05314-970 São Paulo, SP; Fonseca, Ruben D.
2015-02-14
This paper reports on the static and dynamic first-order hyperpolarizabilities of a class of push-pull octupolar triarylamine derivatives dissolved in toluene. We have combined hyper-Rayleigh scattering experiments and the coupled perturbed Hartree-Fock method implemented at the Density Functional Theory (DFT) level of theory to determine the static and dynamic (at 1064 nm) first-order hyperpolarizability (β_HRS) of nine triarylamine derivatives with distinct electron-withdrawing groups. In four of these derivatives, an azoaromatic unit is inserted, and a pronounced increase of the first-order hyperpolarizability is reported. Based on the theoretical results, the dipolar/octupolar character of the derivatives is determined. By using a polarizable continuum model in combination with the DFT calculations, it was found that, although the derivatives are solvated in an aprotic and low-dielectric-constant solvent, the environment substantially affects the first-order hyperpolarizability of all derivatives investigated, due to solvent-induced polarization and the frequency dispersion effect. This is supported by the fact that solvent effects are essential for better agreement between the theoretical results and the experimental data on the dynamic first-order hyperpolarizability of the derivatives. The first-order hyperpolarizability of the derivatives was also modeled using the two- and three-level models, where the relationship between static and dynamic first hyperpolarizabilities is given by a frequency dispersion model. Using this approach, it was verified that the dynamic first hyperpolarizability of the derivatives is satisfactorily reproduced by the two-level model and that, for the derivatives with an azoaromatic unit, a damped few-level model is essential for good quantitative agreement between theoretical results and experimental data, considering also the molecular size of such derivatives.
NASA Technical Reports Server (NTRS)
Holley, W. R.; Chatterjee, A.
1996-01-01
We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber comprised of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between the primary track core, resulting from glancing collisions with 100 eV or less per event, and delta rays due to knock-on collisions involving energy transfers >100 eV. A Monte Carlo simulation incorporates damage due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of OH, H, e_aq, etc.; (2) OH attack on sugar molecules leading to strand breaks; (3) OH attack on bases; (4) direct ionization of the sugar molecules leading to strand breaks; (5) direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 bp, and over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. The shapes of the spectra of DNA fragment lengths depend on the symmetries or approximate symmetries of the chromatin structure. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper (B. Rydberg, Radiat. Res. 145, 200-209, 1996) after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations.
Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the chromatin fibers in mammalian DNA.
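A stripped-down version of the fragment-length bookkeeping in such a Monte Carlo simulation looks like this; breaks are placed uniformly at random, so the chromatin geometry that produces the 85 bp and kilobase-scale peaks is deliberately not modeled:

```python
# Simplified sketch of the fragment-length analysis: scatter double-strand
# breaks along a stretch of DNA and histogram the lengths between
# consecutive breaks. Clustering from chromatin structure is omitted.
import numpy as np

rng = np.random.default_rng(1)
genome_bp = 200_000        # hypothetical stretch of DNA, in base pairs
n_breaks = 400             # hypothetical number of double-strand breaks

breaks = np.sort(rng.integers(0, genome_bp, n_breaks))
edges = np.concatenate(([0], breaks, [genome_bp]))
fragments = np.diff(edges)             # fragment lengths between breaks

print(f"{fragments.size} fragments, mean length {fragments.mean():.0f} bp")
```

In the actual model, break positions are correlated by the solenoid geometry, which is what imprints the structural periodicities on the fragment-length spectrum.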
Pramanick, Abhijit; Shapiro, Steve M.; Glavic, Artur; ...
2015-10-14
In this study, ferromagnetic shape memory alloys (FSMAs) have shown great potential as active components in next-generation smart devices due to their exceptionally large magnetic-field-induced strains and fast response times. During application of magnetic fields in FSMAs, as is common in several magnetoelastic smart materials, magnetic moments rotate and twin variants reorient simultaneously; resolving these two contributions, although critical for the design of new materials and devices, has been difficult to achieve quantitatively with current characterization methods. At the same time, theoretical modeling of these phenomena has faced limitations due to uncertainties in the values of physical properties such as the magnetocrystalline anisotropy energy (MCA), especially for off-stoichiometric FSMA compositions. Here, in situ polarized neutron diffraction is used to directly measure the extents of both magnetic moment rotation and crystallographic twin reorientation in an FSMA single crystal during the application of magnetic fields. Additionally, high-resolution neutron scattering measurements and first-principles calculations based on fully relativistic density functional theory are used to accurately determine the MCA for the compositionally disordered alloy Ni2Mn1.14Ga0.86. The results from these state-of-the-art experiments and calculations are self-consistently described within a phenomenological framework, which provides quantitative insights into the energetics of magnetostructural coupling in FSMAs. Based on the current model, the energy for magnetoelastic twin boundary propagation in the studied alloy is estimated to be ~150 kJ/m³.
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Zhao, Zhigang; Chen, Dongkui; Liu, Yuping
2005-01-01
Although many methods, such as bacterial plate counts, flow cytometry and the impedance method, have been broadly used in the dairy industry around the world to quantitate bacteria numbers, none of them is quick, low-cost and easy. In this study, we proposed applying color difference theory in this field to establish a mathematical model for quantitating the bacteria number in fresh milk. Preliminary testing results not only indicate that applying color difference theory to the new system is practical, but also confirm the theoretical relationship between the number of bacteria, incubation time and color difference. The proof-of-principle study in this article further suggests that the novel method has the potential to replace traditional methods for determining bacteria numbers in the food industry.
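A minimal sketch of the color-difference quantity such a model could build on is the CIE76 distance in CIELAB space; the Lab values below are hypothetical, and calibrating Delta-E against bacteria counts is the empirical step the abstract describes:

```python
# CIE76 color difference between two CIELAB colors: the Euclidean distance
# in (L*, a*, b*) space. The milk Lab values are hypothetical placeholders.
import math

def delta_e76(lab1, lab2):
    """CIE76 Delta-E: Euclidean distance in CIELAB space."""
    return math.dist(lab1, lab2)

milk_fresh = (92.0, -1.5, 8.0)        # hypothetical L*, a*, b* of fresh milk
milk_incubated = (88.0, -0.5, 11.0)   # hypothetical values after incubation
de = delta_e76(milk_fresh, milk_incubated)
print(f"Delta-E = {de:.2f}")          # sqrt(4^2 + 1^2 + 3^2) = sqrt(26)
```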
Stevanović, Nikola R; Perušković, Danica S; Gašić, Uroš M; Antunović, Vesna R; Lolić, Aleksandar Đ; Baošić, Rada M
2017-03-01
The objectives of this study were to gain insights into structure-retention relationships and to propose a model for estimating the retention of the compounds studied. A chromatographic investigation of a series of 36 Schiff bases and their copper(II) and nickel(II) complexes was performed under both normal- and reverse-phase conditions. The chemical structures of the compounds were characterized by molecular descriptors, which were calculated from the structure and related to the chromatographic retention parameters by multiple linear regression analysis. The effects of chelation on the retention parameters of the investigated compounds under normal- and reverse-phase chromatographic conditions were analyzed by principal component analysis. Quantitative structure-retention relationship and quantitative structure-activity relationship models were developed on the basis of theoretical molecular descriptors, calculated exclusively from molecular structure, and parameters of retention and lipophilicity. Copyright © 2016 John Wiley & Sons, Ltd.
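The multiple-linear-regression step of a QSRR model can be sketched as follows; the descriptor matrix and coefficients are synthetic, not the study's data:

```python
# Hedged QSRR sketch: a retention parameter regressed on calculated
# molecular descriptors by ordinary multiple linear regression.
import numpy as np

rng = np.random.default_rng(2)
n, p = 36, 3                                  # 36 compounds, 3 descriptors
X = rng.normal(size=(n, p))                   # e.g. logP, surface area, dipole (synthetic)
true_beta = np.array([0.9, -0.4, 0.2])
rm = X @ true_beta + 1.5 + rng.normal(0.0, 0.05, n)   # synthetic retention parameter

A = np.column_stack([np.ones(n), X])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, rm, rcond=None)
print("intercept, descriptor coefficients:", np.round(coef, 2))
```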
NASA Astrophysics Data System (ADS)
Isono, Hiroshi; Hirata, Shinnosuke; Hachiya, Hiroyuki
2015-07-01
In medical ultrasonic images of liver disease, a texture with a speckle pattern indicates microscopic structure such as nodules surrounded by fibrous tissue in hepatitis or cirrhosis. We have been applying texture analysis based on a co-occurrence matrix to ultrasonic images of fibrotic liver for quantitative tissue characterization. A co-occurrence matrix consists of the probability distribution of the brightness of pixel pairs specified by spatial parameters, and gives new information on liver disease. Ultrasonic images of different types of fibrotic liver were simulated, and the texture-feature contrast was calculated to quantify the co-occurrence matrices generated from the images. The results show that the contrast converges to a value that can be theoretically estimated using a multi-Rayleigh model of the echo signal amplitude distribution. We also found that the contrast value increases as liver fibrosis progresses and fluctuates depending on the size of the fibrotic structure.
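The contrast feature of a co-occurrence matrix can be computed in a few lines; a pure-NumPy sketch on a tiny quantized image:

```python
# Gray-level co-occurrence matrix (GLCM) contrast: count brightness pairs
# at a fixed pixel offset, normalize to probabilities, then weight by the
# squared gray-level difference. Minimal sketch for a small quantized image.
import numpy as np

def glcm_contrast(img, dy=0, dx=1, levels=4):
    """Contrast = sum_{i,j} P(i,j) * (i - j)**2 over the co-occurrence matrix."""
    h, w = img.shape
    a = img[:h - dy, :w - dx]                 # reference pixels
    b = img[dy:, dx:]                         # neighbours at offset (dy, dx)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    p = glcm / glcm.sum()                     # joint probabilities
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(glcm_contrast(img))   # 1/3: one of every three horizontal pairs differs by 1
```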
Project risk management in the construction of high-rise buildings
NASA Astrophysics Data System (ADS)
Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya
2018-03-01
This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be identified more accurately and managed throughout the life cycle of the project. One of the project risk management processes is quantitative risk analysis, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the regression analysis of project data. The suggested algorithms for estimating the parameters of statistical models yield reliable estimates. The theoretical problems in developing robust models built on the methodology of minimax estimation are reviewed, and an algorithm for the situation of asymmetric "contamination" is developed.
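To make the robust-regression idea concrete, here is a minimal sketch of Huber-weighted iteratively reweighted least squares on contaminated data. The tuning constant 1.345, the MAD-based scale, and the synthetic "contaminated" data are standard illustrative choices, not the paper's actual algorithm or project data.

```python
import numpy as np

def huber_fit(x, y, k=1.345, n_iter=50):
    """Robust linear regression via IRLS with Huber weights."""
    X1 = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]      # ordinary LS start
    for _ in range(n_iter):
        r = y - X1 @ beta
        # robust scale estimate from the median absolute deviation
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = r / s
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))  # Huber weights
        W = np.sqrt(w)
        beta = np.linalg.lstsq(X1 * W[:, None], y * W, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 40)
y[::10] += 8.0          # asymmetric "contamination": a few inflated values
beta = huber_fit(x, y)  # intercept and slope, nearly unaffected by outliers
```

An ordinary least-squares fit of the same data would be pulled noticeably upward by the contaminated points; the Huber weights shrink their influence toward zero.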
Extension of nanoconfined DNA: Quantitative comparison between experiment and theory
NASA Astrophysics Data System (ADS)
Iarko, V.; Werner, E.; Nyberg, L. K.; Müller, V.; Fritzsche, J.; Ambjörnsson, T.; Beech, J. P.; Tegenfeldt, J. O.; Mehlig, K.; Westerlund, F.; Mehlig, B.
2015-12-01
The extension of DNA confined to nanochannels has been studied intensively and in detail. However, quantitative comparisons between experiments and model calculations are difficult because most theoretical predictions involve undetermined prefactors, and because the model parameters (contour length, Kuhn length, effective width) are difficult to compute reliably, leading to substantial uncertainties. Here we use a recent asymptotically exact theory for the DNA extension in the "extended de Gennes regime" that allows us to compare experimental results with theory. For this purpose, we performed experiments measuring the mean DNA extension and its standard deviation while varying the channel geometry, dye intercalation ratio, and ionic strength of the buffer. The experimental results agree very well with theory at high ionic strengths, indicating that the model parameters are reliable. At low ionic strengths, the agreement is less good. We discuss possible reasons. In principle, our approach allows us to measure the Kuhn length and the effective width of a single DNA molecule and more generally of semiflexible polymers in solution.
Quantitative Theoretical and Conceptual Framework Use in Agricultural Education Research
ERIC Educational Resources Information Center
Kitchel, Tracy; Ball, Anna L.
2014-01-01
The purpose of this philosophical paper was to articulate the disciplinary tenets for consideration when using theory in agricultural education quantitative research. The paper clarified terminology around the concept of theory in social sciences and introduced inaccuracies of theory use in agricultural education quantitative research. Finally,…
Research of laser echo signal simulator
NASA Astrophysics Data System (ADS)
Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou
2015-11-01
Laser echo signal simulators are among the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time-series model of the laser echo signal simulator are established. Factors that could induce fixed and random errors in the simulated return signals are analyzed, and these system insertion errors are quantified. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m, and that the system can simulate distances from 50 m to 20 km.
Redistribution by insurance market regulation: Analyzing a ban on gender-based retirement annuities.
Finkelstein, Amy; Poterba, James; Rothschild, Casey
2009-01-01
We illustrate how equilibrium screening models can be used to evaluate the economic consequences of insurance market regulation. We calibrate and solve a model of the United Kingdom's compulsory annuity market and examine the impact of gender-based pricing restrictions. We find that the endogenous adjustment of annuity contract menus in response to such restrictions can undo up to half of the redistribution from men to women that would occur with exogenous Social Security-like annuity contracts. Our findings indicate the importance of endogenous contract responses and illustrate the feasibility of employing theoretical insurance market equilibrium models for quantitative policy analysis.
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist. Copyright © 2013 Elsevier Ltd. All rights reserved.
Nargotra, Amit; Sharma, Sujata; Koul, Jawahir Lal; Sangwan, Pyare Lal; Khan, Inshad Ali; Kumar, Ashwani; Taneja, Subhash Chander; Koul, Surrinder
2009-10-01
A quantitative structure-activity relationship (QSAR) analysis of piperine analogs as inhibitors of the efflux pump NorA from Staphylococcus aureus has been performed in order to obtain a highly accurate model for predicting the inhibition of S. aureus NorA by new chemical entities, from natural sources as well as synthetic ones. An algorithm based on the genetic function approximation method of variable selection in Cerius2 was used to generate the model. Among the several types of descriptors considered in generating the QSAR model, viz. topological, spatial, thermodynamic, information-content and E-state indices, three descriptors, namely the partial negative surface area of the compounds, the area of the molecular shadow in the XZ plane, and the heat of formation of the molecules, resulted in a statistically significant model with r(2)=0.962 and cross-validation parameter q(2)=0.917. The QSAR models were validated by cross-validation, leave-25%-out, and external test-set prediction. The theoretical approach indicates that an increase in the exposed partial negative surface area increases the inhibitory activity of a compound against NorA, whereas the area of the molecular shadow in the XZ plane is inversely proportional to the inhibitory activity. The model also explains the relationship between the heat of formation of a compound and its inhibitory activity. The model is not only able to predict the activity of new compounds but also explains, in a quantitative manner, the important regions of the molecules.
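The statistics quoted above (r(2) for the fit, q(2) from cross-validation) can be sketched as follows. This is a generic multiple-linear-regression QSAR skeleton on synthetic data with three stand-in descriptors, not the authors' Cerius2 model or genetic function approximation.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def r2(X, y, beta):
    """Squared correlation of the fit: 1 - RSS / TSS."""
    X1 = np.column_stack([np.ones(len(y)), X])
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2: 1 - PRESS / TSS."""
    press = 0.0
    for i in range(len(y)):
        m = np.ones(len(y), dtype=bool)
        m[i] = False
        beta = fit_mlr(X[m], y[m])
        pred = np.r_[1.0, X[i]] @ beta
        press += (y[i] - pred) ** 2
    return 1 - press / ((y - y.mean()) @ (y - y.mean()))

# synthetic stand-ins for three descriptors (e.g. surface area, shadow
# area, heat of formation) and an activity built from them plus noise
rng = np.random.default_rng(7)
X = rng.normal(size=(30, 3))
y = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 30)
beta = fit_mlr(X, y)
model_r2, model_q2 = r2(X, y, beta), loo_q2(X, y)
```

As in the abstract, q(2) is always somewhat below r(2), because each leave-one-out prediction is made without the corresponding data point.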
Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing
NASA Astrophysics Data System (ADS)
Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin
2017-06-01
This paper presents an analytical model of a resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in the square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity is established as a function of the device structure parameters and encapsulation pressure. The developed analytical model has been verified by comparing calculated results with experimental results; the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.
Shape and fission instabilities of ferrofluids in non-uniform magnetic fields
NASA Astrophysics Data System (ADS)
Vieu, Thibault; Walter, Clément
2018-04-01
We study static distributions of ferrofluid submitted to non-uniform magnetic fields. We show how the normal-field instability is modified in the presence of a weak magnetic field gradient. Then we consider a ferrofluid droplet and show how the gradient affects its shape. A rich phase transitions phenomenology is found. We also investigate the creation of droplets by successive splits when a magnet is vertically approached from below and derive theoretical expressions which are solved numerically to obtain the number of droplets and their aspect ratio as function of the field configuration. A quantitative comparison is performed with previous experimental results, as well as with our own experiments, and yields good agreement with the theoretical modeling.
Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A
2017-02-07
A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R^2 > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate the HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions with a RMSEP value of 5.83%. The QSRR-DoE computed retention time of pharmaceutical test analytes and subsequently calculated separation selectivity was used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.
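The Monte Carlo risk evaluation mentioned above can be sketched in a few lines: draw model coefficients from their fitted uncertainty and look at the spread of predicted retention times. The coefficients, standard errors, factor values, and the assumption of independent normal errors below are all hypothetical placeholders, not the paper's fitted DoE model.

```python
import numpy as np

rng = np.random.default_rng(11)

# hypothetical DoE coefficients for retention time vs. two coded factors
# t_R = b0 + b1*ACN + b2*pH, with standard errors from the regression fit
beta = np.array([6.0, -1.8, 0.7])
se = np.array([0.10, 0.05, 0.04])

def predict_tr(acn, ph, draws=10_000):
    """Monte Carlo draws of the predicted retention time under coefficient
    uncertainty (independent normal errors assumed -- illustrative only)."""
    b = rng.normal(beta, se, size=(draws, 3))
    return b[:, 0] + b[:, 1] * acn + b[:, 2] * ph

samples = predict_tr(acn=0.5, ph=-0.5)
lo, hi = np.percentile(samples, [2.5, 97.5])   # 95% prediction interval
```

In a QbD workflow, the design space would be the set of factor combinations for which such an interval stays within the required selectivity criterion with acceptable risk.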
Information-theoretic approach to interactive learning
NASA Astrophysics Data System (ADS)
Still, S.
2009-01-01
The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
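The "baseline weighting scheme" idea above, down-weighting less reliable calibration references, can be illustrated with a tiny weighted least-squares calibration. The linear parameter-to-reference map and the weights below are hypothetical, chosen only to show how weighting changes the calibrated solution; the study's biodynamic models are of course more complex.

```python
import numpy as np

def calibrate(A, refs, weights):
    """Weighted least-squares calibration: find parameters p minimizing
    sum_i w_i * (A_i . p - ref_i)^2, down-weighting unreliable references."""
    W = np.sqrt(np.asarray(weights, dtype=float))
    return np.linalg.lstsq(A * W[:, None], refs * W, rcond=None)[0]

# hypothetical linear relation between 2 model parameters and 4 references
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
true_p = np.array([2.0, 3.0])
refs = A @ true_p
refs[3] += 1.0                                      # one inconsistent measurement
p_uniform = calibrate(A, refs, [1, 1, 1, 1])        # all references trusted equally
p_weighted = calibrate(A, refs, [1, 1, 1, 0.01])    # distrust the bad reference
```

With more than enough references the solution is overdetermined, exactly as the abstract notes; the weights then arbitrate among the mutually inconsistent measurements.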
NASA Astrophysics Data System (ADS)
Morozov, A. N.
2017-11-01
The article reviews the possibility of describing physical time as a random Poisson process. An equation is proposed that allows the intensity of physical time fluctuations to be calculated as a function of the entropy production density within irreversible natural processes. Based on the standard solar model, the entropy production density inside the Sun is calculated, along with the dependence of the intensity of physical time fluctuations on the distance from the centre of the Sun. A free model parameter has been established, and a method for its evaluation has been suggested. The calculations of the entropy production density inside the Sun showed that it differs by 2-3 orders of magnitude between different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, as a function of the entropy production density during the conversion of sunlight to the Earth's thermal radiation, has been theoretically predicted. A method for evaluating the Kullback measure of voltage fluctuations in small amounts of electrolyte has been proposed. Using a simple model of heat transfer from the Earth's surface to the upper atmosphere, the effective temperature of the Earth's thermal radiation has been determined. A comparison between the theoretical values of the Kullback measure derived from the fluctuating physical time model and the experimentally measured values for two independent electrolytic cells showed good qualitative and quantitative agreement between the theoretical model and the experimental data.
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and the stability analysis of the estimator is given. Theoretical analysis and simulation experimental results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and the marzipan are used to build the calibration model using partial least squares (PLS) modeling. The results show that the PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
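For context on the derivative-estimation step discussed above, here is a numpy-only sketch of the Savitzky-Golay baseline the new estimator is compared against: each point's first derivative comes from a local polynomial least-squares fit. The window length, polynomial order, and the noisy sine test signal are illustrative choices, and this is the reference method, not the proposed singular perturbation spectra estimator.

```python
import numpy as np

def savgol_derivative(y, window=11, polyorder=3, delta=1.0):
    """First-derivative Savitzky-Golay filter built from local polynomial
    least-squares fits over a sliding window of odd length."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, polyorder + 1, increasing=True)   # columns 1, t, t^2, ...
    # row 1 of the pseudo-inverse gives the derivative weights at t = 0
    coeffs = np.linalg.pinv(A)[1] / delta
    ypad = np.pad(y, half, mode="edge")                # crude edge handling
    # reverse the kernel so np.convolve performs a correlation with coeffs
    return np.convolve(ypad, coeffs[::-1], mode="valid")

x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + np.random.default_rng(3).normal(0, 0.002, x.size)
dy = savgol_derivative(noisy, delta=x[1] - x[0])       # approximates cos(x)
```

Direct finite differencing of the same noisy signal amplifies the noise far more strongly; any improved estimator, such as the SPSE proposed here, is judged against this kind of smoothed-derivative baseline.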
2017-01-01
Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which ranged from 1,000 to 10,000 μm2 in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, the issue of whether this mechanism is sufficient for explaining cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing ploidy profile and cell size distribution using a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model to predict the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profiles in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the element that cell size is determined according to ploidy level to examine cell size distribution. This analysis revealed that cell size is exponentially enlarged 1.5 times every endoreduplication round. Because this theoretical simulation successfully recapitulated experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue. More generally, this study contributes to a quantitative understanding whereby stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
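The two ingredients of the model above, a Poisson-distributed number of endoreduplication rounds and a 1.5-fold size boost per round, are easy to simulate. The rate parameter and base cell size below are hypothetical stand-ins; only the Poisson structure and the 1.5x boost come from the abstract.

```python
import numpy as np

def simulate_sizes(n_cells=100_000, lam=1.5, s0=1000.0, boost=1.5, seed=0):
    """Each cell draws its endocycle count from Poisson(lam); every round
    doubles ploidy and multiplies cell size by `boost` (1.5x per the paper)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam, n_cells)      # endoreduplication rounds per cell
    ploidy = 2 ** (1 + n)              # 2C, 4C, 8C, ... ploidy classes
    size = s0 * boost ** n             # exponential size-boosting
    return n, ploidy, size

n, ploidy, size = simulate_sizes()
frac_2c = np.mean(n == 0)              # ploidy profile follows the Poisson pmf
```

The simulated size histogram is strongly right-skewed, spanning roughly an order of magnitude, which is the qualitative behaviour the paper reports for the epidermal cell population.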
Rokob, Tibor András; Srnec, Martin; Rulíšek, Lubomír
2012-05-21
In the last decade, we have witnessed substantial progress in the development of quantum chemical methodologies. Simultaneously, robust solvation models and various combined quantum and molecular mechanical (QM/MM) approaches have become an integral part of quantum chemical programs. Along with the steady growth of computer power and, more importantly, the dramatic increase of the computer performance to price ratio, this has led to a situation where computational chemistry, when exercised with the proper amount of diligence and expertise, reproduces, predicts, and complements the experimental data. In this perspective, we review some of the latest achievements in the field of theoretical (quantum) bioinorganic chemistry, concentrating mostly on accurate calculations of the spectroscopic and physico-chemical properties of open-shell bioinorganic systems by wave-function (ab initio) and DFT methods. In our opinion, the one-to-one mapping between the calculated properties and individual molecular structures represents a major advantage of quantum chemical modelling since this type of information is very difficult to obtain experimentally. Once (and only once) the physico-chemical, thermodynamic and spectroscopic properties of complex bioinorganic systems are quantitatively reproduced by theoretical calculations may we consider the outcome of theoretical modelling, such as reaction profiles and the various decompositions of the calculated parameters into individual spatial or physical contributions, to be reliable. In an ideal situation, agreement between theory and experiment may imply that the practical problem at hand, such as the reaction mechanism of the studied metalloprotein, can be considered as essentially solved.
Direct observation of surface-state thermal oscillations in SmB6 oscillators
NASA Astrophysics Data System (ADS)
Casas, Brian; Stern, Alex; Efimkin, Dmitry K.; Fisk, Zachary; Xia, Jing
2018-01-01
SmB6 is a mixed valence Kondo insulator that exhibits a sharp increase in resistance following an activated behavior that levels off and saturates below 4 K. This behavior can be explained by the proposal of SmB6 representing a new state of matter, a topological Kondo insulator, in which a Kondo gap is developed, and topologically protected surface conduction dominates low-temperature transport. Exploiting its nonlinear dynamics, a tunable SmB6 oscillator device was recently demonstrated, where a small dc current generates large oscillating voltages at frequencies from a few Hz to hundreds of MHz. This behavior was explained by a theoretical model describing the thermal and electronic dynamics of coupled surface and bulk states. However, a crucial aspect of this model, the predicted temperature oscillation in the surface state, has not been experimentally observed to date. This is largely due to the technical difficulty of detecting an oscillating temperature of the very thin surface state. Here we report direct measurements of the time-dependent surface-state temperature in SmB6 with a RuO2 microthermometer. Our results agree quantitatively with the theoretically simulated temperature waveform, and hence support the validity of the oscillator model, which will provide accurate theoretical guidance for developing future SmB6 oscillators at higher frequencies.
Evaluation of DNA Force Fields in Implicit Solvation
Gaillard, Thomas; Case, David A.
2011-01-01
DNA structural deformations and dynamics are crucial to its interactions in the cell. Theoretical simulations are essential tools to explore the structure, dynamics, and thermodynamics of biomolecules in a systematic way. Molecular mechanics force fields for DNA have benefited from constant improvements during the last decades. Several studies have evaluated and compared available force fields when the solvent is modeled by explicit molecules. On the other hand, few systematic studies have assessed the quality of duplex DNA models when implicit solvation is employed. The interest of an implicit modeling of the solvent consists in the important gain in the simulation performance and conformational sampling speed. In this study, respective influences of the force field and the implicit solvation model choice on DNA simulation quality are evaluated. To this end, extensive implicit solvent duplex DNA simulations are performed, attempting to reach both conformational and sequence diversity convergence. Structural parameters are extracted from simulations and statistically compared to available experimental and explicit solvation simulation data. Our results quantitatively expose the respective strengths and weaknesses of the different DNA force fields and implicit solvation models studied. This work can lead to the suggestion of improvements to current DNA theoretical models. PMID:22043178
Analysis of Market Opportunities for Chinese Private Express Delivery Industry
NASA Astrophysics Data System (ADS)
Jiang, Changbing; Bai, Lijun; Tong, Xiaoqing
China's express delivery market has become an arena in which every express enterprise competes, owing to huge potential demand and highly profitable prospects. Qualitative and quantitative forecasts of future changes in China's express delivery market will therefore help enterprises understand market conditions and shifts in social demand, and adjust their business activities in a timely manner to enhance competitiveness. The development of China's express delivery industry is first introduced in this chapter. Then the theoretical basis of the regression model is reviewed. We also predict demand trends in China's express delivery market using Pearson correlation analysis and regression analysis, from qualitative and quantitative perspectives respectively. Finally, we draw conclusions and offer recommendations for China's express delivery industry.
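The Pearson-correlation-plus-regression forecasting described above reduces to a few lines of numpy. All figures below are made-up illustrative numbers, not the chapter's data; they only show the mechanics of correlating volumes with a candidate driver and extrapolating a fitted trend.

```python
import numpy as np

# hypothetical yearly express-delivery volumes (illustrative numbers only)
years = np.arange(2005, 2013)
volume = np.array([2.3, 3.0, 4.1, 5.6, 7.4, 9.9, 13.0, 17.2])

# Pearson correlation of volume with a candidate demand driver (made up)
gdp = np.array([18.5, 21.9, 27.0, 31.9, 34.9, 40.9, 48.4, 53.4])
r = np.corrcoef(volume, gdp)[0, 1]

# least-squares trend on log(volume) -> exponential-growth forecast
slope, intercept = np.polyfit(years, np.log(volume), 1)
forecast_2013 = np.exp(intercept + slope * 2013)
```

A high Pearson r justifies using the driver (or time itself) as the regressor; the log-linear fit then turns a roughly constant growth rate into a next-year demand forecast.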
NASA Astrophysics Data System (ADS)
Baliukin, I. I.; Izmodenov, V. V.; Möbius, E.; Alexashov, D. B.; Katushkina, O. A.; Kucharek, H.
2017-12-01
Quantitative analysis of the interstellar heavy (oxygen and neon) atom fluxes obtained by the Interstellar Boundary Explorer (IBEX) suggests the existence of the secondary interstellar oxygen component. This component is formed near the heliopause due to charge exchange of interstellar oxygen ions with hydrogen atoms, as was predicted theoretically. A detailed quantitative analysis of the fluxes of interstellar heavy atoms is only possible with a model that takes into account both the filtration of primary and the production of secondary interstellar oxygen in the boundary region of the heliosphere as well as a detailed simulation of the motion of interstellar atoms inside the heliosphere. This simulation must take into account photoionization, charge exchange with the protons of the solar wind and solar gravitational attraction. This paper presents the results of modeling interstellar oxygen and neon atoms through the heliospheric interface and inside the heliosphere based on a three-dimensional kinetic-MHD model of the solar wind interaction with the local interstellar medium and a comparison of these results with the data obtained on the IBEX spacecraft.
Growing complex network of citations of scientific papers: Modeling and measurements
NASA Astrophysics Data System (ADS)
Golosovsky, Michael; Solomon, Sorin
2017-01-01
We consider the network of citations of scientific papers and use a combination of the theoretical and experimental tools to uncover microscopic details of this network growth. Namely, we develop a stochastic model of citation dynamics based on the copying-redirection-triadic closure mechanism. In a complementary and coherent way, the model accounts both for statistics of references of scientific papers and for their citation dynamics. Originating in empirical measurements, the model is cast in such a way that it can be verified quantitatively in every aspect. Such validation is performed by measuring citation dynamics of physics papers. The measurements revealed nonlinear citation dynamics, the nonlinearity being intricately related to network topology. The nonlinearity has far-reaching consequences including nonstationary citation distributions, diverging citation trajectories of similar papers, runaways or "immortal papers" with infinite citation lifetime, etc. Thus nonlinearity in complex network growth is our most important finding. In a more specific context, our results can be a basis for quantitative probabilistic prediction of citation dynamics of individual papers and of the journal impact factor.
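The copying-redirection mechanism named above can be demonstrated with a toy growing network: each new paper either cites a uniformly random earlier paper or copies a reference of an already-chosen paper, which is what generates preferential-attachment-like fat tails. The network size, reference count, and copying probability are illustrative, and this sketch omits the triadic-closure and aging ingredients of the authors' full model.

```python
import random

def grow_citations(n_papers=3000, m=3, p_direct=0.5, seed=42):
    """Toy copying/redirection model: each new paper draws references either
    uniformly at random or by copying a reference of an already-drawn paper."""
    rng = random.Random(seed)
    refs = [[] for _ in range(n_papers)]   # refs[i] = papers cited by paper i
    cites = [0] * n_papers                 # citations received by each paper
    for i in range(1, n_papers):
        chosen = set()
        while len(chosen) < min(m, i):
            j = rng.randrange(i)           # pick an earlier paper directly...
            if rng.random() >= p_direct and refs[j]:
                j = rng.choice(refs[j])    # ...or follow (copy) one of its refs
            chosen.add(j)
        for j in chosen:
            refs[i].append(j)
            cites[j] += 1
    return cites

cites = grow_citations()
```

Because a paper's chance of being copied grows with how often it is already cited, the citation counts develop a heavy tail: a handful of early papers accumulate far more citations than the mean of about m.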
1985-11-01
Kappus, 1985; Tyson and Green, in press). When ethane evolution was quantitated, the experimental conditions were modified to maximize sensitivity, as...commonly used and convenient technique for that purpose (Kappus, 1985). Since MDA, the lipid breakdown product that the TBA reaction primarily...cytochrome P-450(c) reductase. Mol. Pharmacol. 20, 669-673 (1981). Kappus, H. (1985). Lipid peroxidation: mechanisms, analysis, enzymology and
Impact of thermal atomic displacements on the Curie temperature of 3d transition metals
NASA Astrophysics Data System (ADS)
Ruban, A. V.; Peil, O. E.
2018-05-01
It is demonstrated that thermally induced atomic displacements from ideal lattice positions can produce considerable effect on magnetic exchange interactions and, consequently, on the Curie temperature of Fe. Thermal lattice distortion should, therefore, be accounted for in quantitatively accurate theoretical modeling of the magnetic phase transition. At the same time, this effect seems to be not very important for magnetic exchange interactions and the Curie temperature of Co and Ni.
Jones, Adam G
2008-04-25
Rapid human-induced changes in the environment at local, regional and global scales appear to be contributing to population declines and extinctions, resulting in an unprecedented biodiversity crisis. Although in the short term populations can respond ecologically to environmental alterations, in the face of persistent change populations must evolve or become extinct. Existing models of evolution and extinction in changing environments focus only on single species, even though the dynamics of extinction almost certainly depend upon the nature of species interactions. Here, I use a model of quantitative trait evolution in a two-species community to show that negative ecological interactions, such as predation and competition, can produce unexpected results regarding time to extinction. Under some circumstances, negative interactions can be expected to hasten the extinction of species declining in numbers. However, under other circumstances, negative interactions can actually increase times to extinction. This effect occurs across a wide range of parameter values and can be substantial, in some cases allowing a population to persist for 40 percent longer than it would in the absence of the species interaction. This theoretical study indicates that negative species interactions can have unexpected positive effects on times to extinction. Consequently, detailed studies of selection and demographics will be necessary to predict the consequences of species interactions in changing environments for any particular ecological community.
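A stripped-down version of a trait-tracking extinction model helps fix ideas: a focal species evolves toward an optimum moving at rate k, and competition from a second species reduces its growth. All parameter values, the fixed competitor density, and the breeder's-equation-style update are illustrative assumptions; this minimal sketch shows only the baseline effect in which competition hastens extinction, whereas the paper's counterintuitive lengthening of extinction times emerges in the full two-species coevolutionary model.

```python
import math

def time_to_extinction(alpha, k=0.1, h2=0.3, Vs=2.0, r0=0.1,
                       K=1000.0, n0=500.0, n2=500.0, t_max=5000):
    """Focal species tracks a moving optimum theta(t) = k*t; `alpha` scales
    competition from a second species held at fixed density n2."""
    z, n = 0.0, n0
    for t in range(1, t_max + 1):
        lag = k * t - z                    # maladaptation behind the optimum
        z += h2 * lag / (1.0 + Vs)         # evolutionary tracking of the optimum
        growth = r0 * (1.0 - (n + alpha * n2) / K) - lag ** 2 / (2.0 * Vs)
        n *= math.exp(growth)              # discrete-time population update
        if n < 1.0:
            return t                       # extinction time in generations
    return t_max                           # persisted for the whole run

t_alone = time_to_extinction(alpha=0.0)    # no interspecific competition
t_comp = time_to_extinction(alpha=0.5)     # with competition
```

With these parameters the evolutionary lag equilibrates at a level whose fitness load exceeds the maximum growth rate, so extinction is certain either way; competition merely brings it forward. Capturing the opposite effect requires letting the competitor's density and trait respond dynamically, as in the paper.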
2015-01-01
An immersion Raman probe was used in emulsion copolymerization reactions to measure monomer concentrations and particle sizes. Quantitative determination of monomer concentrations is feasible in two-monomer copolymerizations, but only the overall conversion could be measured by Raman spectroscopy in a four-monomer copolymerization. The feasibility of measuring monomer conversion and particle size was established using partial least-squares (PLS) calibration models. A simplified theoretical framework for the measurement of particle sizes from photon scattering is presented, drawing on the elastic-sphere-vibration and surface-tension models. PMID:26900256
NASA Technical Reports Server (NTRS)
Sud, V. K.; Srinivasan, R. S.; Charles, J. B.; Bungo, M. W.
1992-01-01
This paper reports on a theoretical investigation into the effects of vasomotion on blood flow through the human cardiovascular system. The finite element method has been used to analyse the model. Vasoconstriction and vasodilation may be effected either through the action of the central nervous system or through autoregulation; exercise is one of the conditions responsible for vasomotion. The proposed model has been solved, and quantitative results have been obtained for the flows and pressures produced by changing the conductances of specific networks of arterioles, capillaries and venules comprising the arms, legs, stomach and their combinations.
Symbolic interactionism as a theoretical perspective for multiple method research.
Benzies, K M; Allen, M N
2001-02-01
Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.
Du, Hongying; Wang, Jie; Yao, Xiaojun; Hu, Zhide
2009-01-01
The heuristic method (HM) and support vector machine (SVM) were used to construct quantitative structure-retention relationship models for a series of compounds in order to predict gradient retention times in reversed-phase high-performance liquid chromatography (HPLC) on three different columns. The aims of this investigation were to predict the retention times of structurally diverse compounds, to identify the main properties of the three columns, and to shed light on the theory of the separation process. In our method, we correlated the retention times of many structurally diverse analytes on three columns (Symmetry C18, Chromolith, and SG-MIX) with representative molecular descriptors calculated from the molecular structures alone. HM was used to select the most important molecular descriptors and build linear regression models. Furthermore, non-linear regression models were built using the SVM method; the performance of the SVM models was better than that of the HM models, and the prediction results were in good agreement with the experimental values. This paper offers some insight into the factors likely to govern the gradient retention process on the three investigated HPLC columns, which could theoretically guide practical experiments.
Proposal for a quantitative index of flood disasters.
Feng, Lihua; Luo, Gaoyuan
2010-07-01
Drawing on the calculation of wind scale and earthquake magnitude, this paper develops a new quantitative method for measuring flood magnitude and disaster intensity. Flood magnitude is the quantitative index describing the scale of a flood; flood disaster intensity is the quantitative index describing the losses it causes. Both indices have clearly defined concepts and are simple to apply, giving them considerable theoretical and practical significance.
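Magnitude scales built by analogy with earthquake magnitude are typically logarithmic, so that each unit step corresponds to a tenfold change in the underlying quantity. The sketch below illustrates that construction only; the base-10 form and the reference discharge of 1 m³/s are assumptions made here, not the paper's actual index definition.

```python
import math

def flood_magnitude(peak_discharge_m3s: float, reference_discharge_m3s: float = 1.0) -> float:
    """Illustrative log-scale flood magnitude index.

    The base-10 logarithmic form and the reference discharge are assumptions
    for illustration; the paper defines its own quantitative index.
    """
    return math.log10(peak_discharge_m3s / reference_discharge_m3s)

# A tenfold increase in peak discharge raises this index by one unit,
# mirroring how earthquake magnitude scales behave.
m_small = flood_magnitude(100.0)
m_large = flood_magnitude(1000.0)
```

On such a scale, comparing floods reduces to comparing small index numbers rather than raw discharges spanning orders of magnitude.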
NASA Astrophysics Data System (ADS)
Saikia, P.; Bhuyan, H.; Escalona, M.; Favre, M.; Wyndham, E.; Maze, J.; Schulze, J.
2018-01-01
The behavior of a dual frequency capacitively coupled plasma (2f CCP) driven by 2.26 and 13.56 MHz radio frequency (rf) sources is investigated using an approach that integrates a theoretical model with experimental data. The basis of the theoretical analysis is a time-dependent dual-frequency analytical sheath model that relates the instantaneous sheath potential to the plasma parameters. The parameters used in the model are obtained by operating the 2f CCP experiment (2.26 MHz + 13.56 MHz) in argon at a working pressure of 50 mTorr. Experimentally measured plasma parameters, such as the electron density, the electron temperature, and the rf current density ratios, are the inputs of the theoretical model. Subsequently, a convenient analytical solution for the sheath potential and sheath thickness is derived. The present numerical results are compared with those obtained in another 2f CCP experiment conducted by Semmler et al (2007 Plasma Sources Sci. Technol. 16 839), and good quantitative correspondence is found. The numerical solution shows the variation of the sheath potential with the low-frequency and high-frequency (HF) rf powers. In low-pressure plasma, the sheath potential is a qualitative measure of the DC self-bias, which in turn determines the ion energy. Thus, using this analytical model, the measured values of the DC self-bias as a function of the low-frequency and HF rf powers are explained in detail.
Abeijon, Paula; Garcia-Mera, Xerardo; Caamano, Olga; Yanez, Matilde; Lopez-Castro, Edgar; Romero-Duran, Francisco J; Gonzalez-Diaz, Humberto
2017-01-01
Hansch's model is a classic approach to Quantitative Structure-Binding Relationship (QSBR) problems in Pharmacology and Medicinal Chemistry. Hansch QSAR equations use electronic-structure and lipophilicity parameters as inputs. In this work, we review Hansch's analysis. We also developed a new type of PT-QSBR Hansch model, based on Perturbation Theory (PT) and the QSBR approach, for a large number of drugs reported in ChEMBL. The targets are proteins expressed in the Hippocampus region of the brain of Alzheimer's Disease (AD) patients. The model correctly predicted 49312 out of 53783 negative perturbations (Specificity = 91.7%) and 16197 out of 21245 positive perturbations (Sensitivity = 76.2%) in the training series. The model also correctly predicted 49312/53783 (91.7%) negative and 16197/21245 (76.2%) positive perturbations in the external validation series. We applied our model in theoretical-experimental studies of organic synthesis, pharmacological assays, and the prediction of unmeasured results for a series of compounds similar to Rasagiline (the reference compound) with potential neuroprotective effects.
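As a quick arithmetic check, the specificity and sensitivity reported above follow directly from the stated prediction counts:

```python
def rate(correct: int, total: int) -> float:
    """Fraction of correctly predicted perturbations."""
    return correct / total

# Counts reported in the abstract (training series).
specificity = rate(49312, 53783)  # correctly predicted negative perturbations
sensitivity = rate(16197, 21245)  # correctly predicted positive perturbations

# Rounded to one decimal, these reproduce the stated 91.7% and 76.2%.
```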
Analytical coupled modeling of a magneto-based acoustic metamaterial harvester
NASA Astrophysics Data System (ADS)
Nguyen, H.; Zhu, R.; Chen, J. K.; Tracy, S. L.; Huang, G. L.
2018-05-01
Membrane-type acoustic metamaterials (MAMs) have demonstrated an unusual capacity for controlling low-frequency sound transmission, reflection, and absorption. In this paper, an analytical vibro-acoustic-electromagnetic coupling model is developed to study the sound absorption, energy conversion, and energy harvesting behavior of a MAM harvester under normal sound incidence. The MAM harvester is composed of a prestressed membrane with an attached rigid mass, a magnet coil, and a permanent magnet coin. To accurately capture the effects of the finite-dimension rigid mass on the membrane deformation under the variable magnetic force, a theoretical model based on the deviating acoustic surface Green's function approach is developed by considering the acoustic near field and the distributed effective shear force along the interfacial boundary between the mass and the membrane. The accuracy and capability of the theoretical model are verified through comparison with the finite element method. In particular, the sound absorption, acoustic-electric energy conversion, and harvesting coefficient are quantitatively investigated by varying the weight and size of the attached mass and the prestress and thickness of the membrane. It is found that the highest achievable conversion and harvesting coefficients can reach up to 48% and 36%, respectively. The developed model can serve as an efficient tool for designing MAM harvesters.
Schaper, Louise K; Pervan, Graham P
2007-06-01
There is evidence to suggest that health professionals are reluctant to accept and utilise information and communication technologies (ICT), and concern is growing within health informatics research that this is contributing to the lag in adoption and utilisation of ICT across the health sector. Technology acceptance research within the field of information systems has been limited in its application to health, and there is a concurrent need to develop and gain empirical support for models of technology acceptance within health, and to examine acceptance and utilisation issues amongst health professionals, in order to improve the success of information system implementation in this arena. This paper outlines a project that examines ICT acceptance and utilisation by Australian occupational therapists. It describes the theoretical basis behind the development of a research model and the methodology being employed to empirically validate the model using substantial quantitative, qualitative and longitudinal data. Preliminary results from Phase II of the study are presented. The theoretical significance of this work is that it uses a thoroughly constructed research model, with potentially the largest sample size ever tested, to extend technology acceptance research into the health sector.
An Empirically Calibrated Model of Cell Fate Decision Following Viral Infection
NASA Astrophysics Data System (ADS)
Coleman, Seth; Igoshin, Oleg; Golding, Ido
The life cycle of the virus (phage) lambda is an established paradigm for the way genetic networks drive cell fate decisions. But despite decades of interrogation, we are still unable to theoretically predict whether the infection of a given cell will result in cell death or viral dormancy. The poor predictive power of current models reflects the absence of quantitative experimental data describing the regulatory interactions between different lambda genes. To address this gap, we are constructing a theoretical model that captures the known interactions in the lambda network. Model assumptions and parameters are calibrated using new single-cell data from our lab, describing the activity of lambda genes at single-molecule resolution. We began with a mean-field model, aimed at exploring the population averaged gene-expression trajectories under different initial conditions. Next, we will develop a stochastic formulation, to capture the differences between individual cells within the population. The eventual goal is to identify how the post-infection decision is driven by the interplay between network topology, initial conditions, and stochastic effects. The insights gained here will inform our understanding of cell fate choices in more complex cellular systems.
Electrodynamical Model of Quasi-Efficient Financial Markets
NASA Astrophysics Data System (ADS)
Ilinski, Kirill N.; Stepanenko, Alexander S.
The modelling of financial markets presents a problem which is both theoretically challenging and practically important. The theoretical aspects concern the issue of market efficiency, which may even have political implications [1], whilst the practical side of the problem has clear relevance to portfolio management [2] and derivative pricing [3]. Until now, all market models have contained "smart money" traders and "noise" traders whose joint activity constitutes the market [4, 5]. On a short time scale this traditional separation does not seem to be realistic, and is hardly acceptable since all high-frequency market participants are professional traders and cannot be separated into "smart" and "noisy." In this paper we present a "microscopic" model with homogeneous quasi-rational behaviour of traders, aiming to describe short-time market behaviour. To construct the model we use an analogy between "screening" in quantum electrodynamics and an equilibration process in a market with temporal mispricing [6, 7]. As a result, we obtain the time-dependent distribution function of the returns, which is in quantitative agreement with real market data and obeys the anomalous scaling relations recently reported for high-frequency exchange rates [8], the S&P 500 [9] and other stock market indices [10, 11].
Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J
2009-01-01
Background: The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of a qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays, however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results: In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODEs). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case, a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time courses for multiple ligand concentrations and binding affinities of different ligands. This shows that the continuous model may yield biological insights not evident from the discrete one. Conclusion: The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
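The core transformation can be sketched concretely: a Boolean update function is interpolated multilinearly over the continuous cube [0,1]^n, and the interpolated value drives an ODE for each component. A minimal sketch for a two-input AND gate follows; the simple relaxation ODE dx/dt = f(inputs) - x and the forward-Euler integration are illustrative simplifications of the paper's full method (which also considers Hill-type normalization).

```python
from itertools import product

def boole_cube(f, x):
    """Multilinear interpolation of Boolean function f at continuous point x in [0,1]^n."""
    total = 0.0
    for corner in product((0, 1), repeat=len(x)):
        weight = 1.0
        for xi, ci in zip(x, corner):
            weight *= xi if ci == 1 else (1.0 - xi)
        total += f(*corner) * weight
    return total

AND = lambda a, b: a and b  # Boolean update rule for a two-input AND gate

def integrate(inputs, x0=0.0, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = boole_cube(AND, inputs) - x."""
    x = x0
    for _ in range(steps):
        x += dt * (boole_cube(AND, inputs) - x)
    return x

# With both inputs fully on, the continuous state relaxes toward 1;
# with one input off, it stays at 0, matching the Boolean AND.
```

At the cube corners the interpolation reproduces the Boolean table exactly, while interior points vary smoothly, which is what makes the continuous model amenable to quantitative fitting.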
Geary, David C; vanMarle, Kristy
2016-12-01
At the beginning of preschool (M = 46 months of age), 197 children (94 boys) were administered tasks that assessed a suite of nonsymbolic and symbolic quantitative competencies, as well as their executive functions, verbal and nonverbal intelligence, preliteracy skills, and their parents' education level. The children's mathematics achievement was assessed at the end of preschool (M = 64 months). We used a series of Bayesian and standard regression analyses to winnow this broad set of competencies down to the core subset of quantitative skills that predict later mathematics achievement, controlling for other factors. This core subset included children's fluency in reciting the counting string, their understanding of the cardinal value of number words, their recognition of Arabic numerals, and their sensitivity to the relative quantity of two collections of objects. The results inform theoretical models of the foundations of children's early quantitative development and have practical implications for the design of early interventions for children at risk of poor long-term mathematics achievement.
Conducting field studies for testing pesticide leaching models
Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.
1990-01-01
A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well-known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitate the development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
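Quantitatively testing a leaching model against field measurements typically reduces to summary statistics on the prediction residuals. A generic sketch follows, assuming root-mean-square error and mean bias as the statistics and entirely hypothetical concentration data; the paper's own protocol may prescribe different tests.

```python
import math

def validation_stats(predicted, observed):
    """RMSE and mean bias between model predictions and field measurements."""
    residuals = [p - o for p, o in zip(predicted, observed)]
    n = len(residuals)
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    bias = sum(residuals) / n  # positive bias means the model over-predicts
    return rmse, bias

# Hypothetical pesticide concentrations (mg/kg) at matched sampling depths.
pred = [0.80, 0.42, 0.15, 0.05]
obs = [0.75, 0.50, 0.12, 0.07]

rmse, bias = validation_stats(pred, obs)
```

Reporting both statistics matters: RMSE captures overall misfit, while bias reveals systematic over- or under-prediction that RMSE alone hides.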
Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome
NASA Astrophysics Data System (ADS)
Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.
2018-03-01
The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA, to which ParB binds specifically, and spread from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws describing the exponential growth phase, in which the number of cells grows exponentially. However, microbial growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws and theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase, which consist of autocatalytic chemical components including ribosomes, can only show exponential growth or decay in a population; thus, phases in which growth halts are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components and inhibit the catalytic process. Depending on the nutrient condition, the model exhibits the typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation increases with the square root of the starvation time and is inversely related to the maximal growth rate. This agrees with experimental observations, in which the length of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail; if the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
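The two scaling laws stated above (lag time growing with the square root of starvation time, and inversely with the maximal growth rate) combine into a single relation. A minimal sketch, in which the proportionality constant c is a placeholder rather than a parameter from the paper:

```python
import math

def lag_time(starvation_time_h: float, max_growth_rate_per_h: float, c: float = 1.0) -> float:
    """Lag time ~ c * sqrt(starvation time) / maximal growth rate.

    Combines the two scaling laws from the abstract; the constant c is an
    illustrative placeholder, not a fitted value from the paper.
    """
    return c * math.sqrt(starvation_time_h) / max_growth_rate_per_h

# Quadrupling starvation time doubles the lag; doubling the maximal
# growth rate halves it.
```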
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework established to estimate the SBS threshold for input spectra with different homogeneities. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities are obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, in good agreement with the theoretical results. When the phase modulation voltage is set to 10 V and the modulation frequency to 80 MHz, the standard deviation of the modulated spectrum is 0.0051, the lowest value in our experiment; accordingly, the highest SBS threshold is achieved at these settings. This standard deviation is thus a useful quantitative index for evaluating the power-scaling potential of a fiber amplifier system, and a design guideline for better suppressing SBS.
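The proposed index is simply the standard deviation of the spectral line intensities: a flat comb of equal lines gives zero, and larger values indicate a less homogeneous spectrum. A minimal sketch, normalizing the lines so the index does not depend on total power; the normalization step and the example line weights are assumptions for illustration.

```python
import statistics

def homogeneity_index(line_intensities):
    """Population standard deviation of normalized spectral line intensities.

    Lower values mean a flatter (more homogeneous) modulated spectrum, which
    the paper links to a higher SBS threshold.
    """
    total = sum(line_intensities)
    normalized = [i / total for i in line_intensities]
    return statistics.pstdev(normalized)

# A perfectly flat comb of lines is maximally homogeneous (index 0);
# concentrating power into one line raises the index (hypothetical weights).
flat = homogeneity_index([1.0, 1.0, 1.0, 1.0])
uneven = homogeneity_index([4.0, 1.0, 1.0, 1.0])
```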
Theory of nucleosome corkscrew sliding in the presence of synthetic DNA ligands.
Mohammad-Rafiee, Farshid; Kulić, Igor M; Schiessel, Helmut
2004-11-12
Histone octamers show a heat-induced mobility along DNA. Recent theoretical studies have established two mechanisms that are qualitatively and quantitatively compatible with in vitro experiments on nucleosome sliding: octamer repositioning through one-base-pair twist defects and through ten-base-pair bulge defects. A recent experiment demonstrated that the repositioning is strongly suppressed in the presence of minor-groove binding DNA ligands. In the present study, we give a quantitative theory for nucleosome repositioning in the presence of such ligands. We show that the experimentally observed octamer mobilities are consistent with the picture of bound ligands blocking the passage of twist defects through the nucleosome. This strongly supports the model of twist defects inducing a corkscrew motion of the nucleosome as the underlying mechanism of nucleosome sliding. We provide a theoretical estimate of the nucleosomal mobility without adjustable parameters, as a function of ligand concentration, binding affinity, binding site orientation, temperature and DNA anisotropy. Having this mobility in hand, we speculate on the interaction between a nucleosome and a transcribing RNA polymerase, and suggest a novel mechanism that might account for polymerase-induced nucleosome repositioning on short DNA templates.
A strategy for understanding noise-induced annoyance
NASA Astrophysics Data System (ADS)
Fidell, S.; Green, D. M.; Schultz, T. J.; Pearsons, K. S.
1988-08-01
This report provides a rationale for development of a systematic approach to understanding noise-induced annoyance. Two quantitative models are developed to explain: (1) the prevalence of annoyance due to residential exposure to community noise sources; and (2) the intrusiveness of individual noise events. Both models deal explicitly with the probabilistic nature of annoyance, and assign clear roles to acoustic and nonacoustic determinants of annoyance. The former model provides a theoretical foundation for empirical dosage-effect relationships between noise exposure and community response, while the latter model differentiates between the direct and immediate annoyance of noise intrusions and response bias factors that influence the reporting of annoyance. The assumptions of both models are identified, and the nature of the experimentation necessary to test hypotheses derived from the models is described.
Perspectives on Computational Organic Chemistry
Streitwieser, Andrew
2009-01-01
The author reviews how his early love for theoretical organic chemistry led to experimental research and the extended search for quantitative correlations between experiment and quantum calculations. The experimental work led to ion pair acidities of alkali-organic compounds and most recently to equilibria and reactions of lithium and cesium enolates in THF. This chemistry is now being modeled by ab initio calculations. An important consideration is the treatment of solvation in which coordination of the alkali cation with the ether solvent plays a major role. PMID:19518150
Millimeter Continuum Observations Of Disk Solids
NASA Astrophysics Data System (ADS)
Andrews, Sean
2016-07-01
I will offer a condensed overview of some key issues in protoplanetary disk research that makes use of interferometric measurements of the millimeter-wavelength continuum emitted by disk solids. Several lines of evidence now qualitatively support theoretical models for the growth and migration of disk solids, but also advertise a quantitative tension with the traditional efficiency of that evolution. New observations of small-scale substructures in disks might both reconcile the conflict and shift our focus in the mechanics of planet formation.
A satellite technique for quantitatively mapping rainfall rates over the oceans
NASA Technical Reports Server (NTRS)
Wilheit, T. T.; Rao, M. S. V.; Chang, T. C.; Rodgers, E. B.; Theon, J. S.
1975-01-01
A theoretical model for calculating microwave radiative transfer in raining atmospheres is developed. These calculations are compared with microwave brightness temperatures at a wavelength of 1.55 cm measured by the Nimbus-5 satellite and with rain rates derived from WSR-57 meteorological radar measurements. A specially designed ground-based verification experiment was also performed, wherein upward-viewing microwave brightness temperature measurements at wavelengths of 1.55 cm and 0.81 cm were compared with directly measured rain rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cantu, David C.; Malhotra, Deepika; Koech, Phillip K.
2016-01-01
CO2 capture from power generation with aqueous solvents remains energy intensive due to the high water content of the current technology, or the high viscosity of non-aqueous alternatives. Quantitative reduced models, connecting molecular structure to bulk properties, are key for developing structure-property relationships that enable molecular design. In this work, we describe such a model that quantitatively predicts viscosities of CO2-binding organic liquids (CO2BOLs) based solely on molecular structure and the amount of bound CO2. The functional form of the model correlates the viscosity with the CO2 loading and an electrostatic term describing the charge distribution between the CO2-bearing functional group and the proton-receiving amine. Molecular simulations identify the proton shuttle between these groups within the same molecule as the critical indicator of low viscosity. The model, developed to allow quick screening of solvent libraries, paves the way towards the rational design of low-viscosity non-aqueous solvent systems for post-combustion CO2 capture. Following these theoretical recommendations, synthesis of promising candidates and viscosity measurements provide experimental validation and verification.
Contact thermal shock test of ceramics
NASA Technical Reports Server (NTRS)
Rogers, W. P.; Emery, A. F.
1992-01-01
A novel quantitative thermal shock test of ceramics is described. The technique employs contact between a metal cooling rod and a hot disk-shaped specimen. In contrast with traditional techniques, the well-defined thermal boundary condition allows for accurate analyses of heat transfer, stress, and fracture. Uniform equibiaxial tensile stresses are induced in the center of the test specimen. Transient specimen temperature and acoustic emission are monitored continuously during the thermal stress cycle. The technique is demonstrated with soda-lime glass specimens. Experimental results are compared with theoretical predictions based on a finite-element thermal stress analysis combined with a statistical model of fracture. Material strength parameters are determined using concentric-ring flexure tests. Good agreement is found between experimental results and theoretical predictions of failure probability as a function of time and initial specimen temperature.
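Failure-probability predictions of the kind described above rest on a statistical strength model; for brittle ceramics and glasses the standard choice is a Weibull distribution, assumed here for illustration together with hypothetical parameter values (the paper determines its own strength parameters from concentric-ring flexure tests).

```python
import math

def weibull_failure_probability(stress_mpa: float, sigma0_mpa: float, m: float) -> float:
    """Two-parameter Weibull failure probability for a brittle specimen.

    P_f = 1 - exp(-(sigma/sigma0)^m). The Weibull form is the conventional
    statistical fracture model; the parameter values below are illustrative
    assumptions, not values from the paper.
    """
    return 1.0 - math.exp(-((stress_mpa / sigma0_mpa) ** m))

# Hypothetical characteristic strength and Weibull modulus for glass.
p_low = weibull_failure_probability(30.0, sigma0_mpa=70.0, m=7.0)
p_high = weibull_failure_probability(60.0, sigma0_mpa=70.0, m=7.0)
```

A high Weibull modulus m makes failure probability rise steeply near the characteristic strength, which is why thermal stress histories must be resolved accurately to predict when a specimen fails.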
Electronic structure and charge transport in nonstoichiometric tantalum oxide
NASA Astrophysics Data System (ADS)
Perevalov, T. V.; Gritsenko, V. A.; Gismatulin, A. A.; Voronkovskii, V. A.; Gerasimova, A. K.; Aliev, V. Sh; Prosvirin, I. A.
2018-06-01
The atomic and electronic structure of nonstoichiometric oxygen-deficient tantalum oxide TaOx (x < 2.5) grown by ion beam sputtering deposition was studied. The TaOx film content was analyzed by x-ray photoelectron spectroscopy and by quantum-chemistry simulation. TaOx is composed of Ta2O5, metallic tantalum clusters and tantalum suboxides. A method for evaluating the stoichiometry parameter of TaOx from the comparison of experimental and theoretical photoelectron valence band spectra is proposed. The charge transport properties of TaOx were experimentally studied and the transport mechanism was quantitatively analyzed with four theoretical dielectric conductivity models. It was found that the charge transport in almost stoichiometric and nonstoichiometric tantalum oxide can be consistently described by phonon-assisted tunneling between traps.
Hötzel, Fabian; Seino, Kaori; Huck, Christian; Skibbe, Olaf; Bechstedt, Friedhelm; Pucci, Annemarie
2015-06-10
The metal-atom chains on the Si(111)-5×2-Au surface represent an exceedingly interesting system for the understanding of one-dimensional electrical interconnects. While other metal-atom chain structures on silicon suffer from metal-to-insulator transitions, Si(111)-5×2-Au stays metallic down to at least 20 K, as we have proven by the anisotropic absorption from localized plasmon polaritons in the infrared. A quantitative analysis of the infrared plasmonic signal, done here for the first time, yields valuable band structure information in agreement with theoretically derived data. The experimental and theoretical results are consistently explained in the framework of the atomic geometry, electronic structure, and IR spectra of the recent Kwon-Kang model.
Bidirectional selection between two classes in complex social networks.
Zhou, Bin; He, Zhe; Jiang, Luo-Luo; Wang, Nian-Xin; Wang, Bing-Hong
2014-12-19
Bidirectional selection between two classes emerges widely in social life, for example in commercial trading and mate choice. Until now, discussion of bidirectional selection in structured human society has been quite limited. We demonstrate theoretically that the rate of successful matching is affected greatly by individuals' neighborhoods in social networks, regardless of the type of network. Furthermore, it is found that a high average degree contributes to an increased rate of successful matches. The matching performance in different types of networks has been quantitatively investigated, revealing that small-world networks reinforce the matching rate more than scale-free networks at a given average degree. In addition, our analysis is consistent with the modeling results, providing a theoretical understanding of the underlying mechanisms of matching in complex networks.
Design rules for biomolecular adhesion: lessons from force measurements.
Leckband, Deborah
2010-01-01
Cell adhesion to matrix, other cells, or pathogens plays a pivotal role in many processes in biomolecular engineering. Early macroscopic methods of quantifying adhesion led to the development of quantitative models of cell adhesion and migration. The more recent use of sensitive probes to quantify the forces that alter or manipulate adhesion proteins has revealed much greater functional diversity than was apparent from population average measurements of cell adhesion. This review highlights theoretical and experimental methods that identified force-dependent molecular properties that are central to the biological activity of adhesion proteins. Experimental and theoretical methods emphasized in this review include the surface force apparatus, atomic force microscopy, and vesicle-based probes. Specific examples given illustrate how these tools have revealed unique properties of adhesion proteins and their structural origins.
Hu, Li; Tian, Xiaorui; Huang, Yingzhou; Fang, Liang; Fang, Yurui
2016-02-14
Plasmonic chirality has drawn much attention because of tunable circular dichroism (CD) and the enhancement of chiral molecule signals. Although various mechanisms have been proposed to explain plasmonic CD, a quantitative explanation, like the ab initio mechanism for chiral molecules, is still unavailable. In this study, a mechanism similar to the mechanisms associated with chiral molecules was analyzed. The giant extrinsic circular dichroism of a plasmonic splitting rectangle ring was quantitatively investigated from a theoretical standpoint. The interplay of the electric and magnetic modes of the meta-structure is proposed to explain the giant CD. We analyzed the interplay using both an analytical coupled electric-magnetic dipole model and a finite element method model. The surface charge distributions showed that the circular current yielded by the splitting rectangle ring causes the ring to behave like a magneton at some resonant modes; this magnetic response then interacts with the electric modes, resulting in a mixing of the two types of modes. The strong interplay of the two mode types is primarily responsible for the giant CD. The analysis of the chiral near-field of the structure shows potential applications for chiral molecule sensing.
Genetic basis of adaptation in Arabidopsis thaliana: local adaptation at the seed dormancy QTL DOG1.
Kronholm, Ilkka; Picó, F Xavier; Alonso-Blanco, Carlos; Goudet, Jérôme; de Meaux, Juliette
2012-07-01
Local adaptation provides an opportunity to study the genetic basis of adaptation and investigate the allelic architecture of adaptive genes. We study delay of germination 1 (DOG1), a gene controlling natural variation in seed dormancy in Arabidopsis thaliana, and investigate the evolution of dormancy in 41 populations distributed in four regions separated by natural barriers. Using FST and QST comparisons, we compare variation at DOG1 with neutral markers and quantitative variation in seed dormancy. Patterns of genetic differentiation among populations suggest that the gene DOG1 contributes to local adaptation. Although QST for seed dormancy is not different from FST for neutral markers, a correlation with variation in summer precipitation supports the idea that seed dormancy is adaptive. We characterize dormancy variation in several F2 populations and show that a series of functionally distinct alleles segregate at the DOG1 locus. Theoretical models have shown that the number and effect of alleles segregating at quantitative trait loci (QTL) have important consequences for adaptation. Our results provide support to models postulating a large number of alleles at quantitative trait loci involved in adaptation. © 2012 The Author(s).
Polymorphism and Elastic Response of Molecular Materials from First Principles: How Hard Can it Be?
NASA Astrophysics Data System (ADS)
Reilly, Anthony; Tkatchenko, Alexandre
2014-03-01
Molecular materials are of great fundamental and applied importance in science and industry, with numerous applications in pharmaceuticals, electronics, sensing, and catalysis. A key challenge for theory has been the prediction of their stability, polymorphism, and response to perturbations. While pairwise models of van der Waals (vdW) interactions have improved the ability of density functional theory (DFT) to model these systems, substantial quantitative and even qualitative failures remain. In this contribution we show how a many-body description of vdW interactions can dramatically improve the accuracy of DFT for molecular materials, yielding a quantitative description of stabilities and polymorphism for these challenging systems. Moreover, the role of many-body vdW interactions extends beyond stabilities to response properties. In particular, we have studied the elastic properties of a series of molecular crystals, finding that many-body vdW interactions can account for up to 30% of the elastic response, leading to quantitative and qualitative changes in elastic behavior. We will illustrate these crucial effects with the challenging case of the polymorphs of aspirin, leading to a better understanding of the conflicting experimental and theoretical studies of this system.
Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm
NASA Technical Reports Server (NTRS)
Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.
1982-01-01
Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland, and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations shows general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents can cancel each other out, despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.
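The core numerical step described above, summing Biot-Savart contributions over a mesh of discrete current-carrying wire segments, can be sketched as follows. The circular-loop test case is purely illustrative (it lets the result be checked against the analytic field at a loop's center), not the substorm current geometry:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart(obs, segments, currents):
    """Sum dB = mu0/(4*pi) * I * dl x r / |r|^3 over short straight wire
    segments, evaluating each contribution at the segment midpoint."""
    B = np.zeros(3)
    for (a, b), I in zip(segments, currents):
        dl = b - a                # segment vector
        mid = 0.5 * (a + b)       # midpoint of the segment
        r = obs - mid             # vector from segment to observation point
        B += MU0 / (4 * np.pi) * I * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# Sanity check: a 1 m radius loop carrying 1 A, discretized into 2000
# segments; the analytic field at the center is mu0*I/(2R) along z.
theta = np.linspace(0, 2 * np.pi, 2001)
pts = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
segs = list(zip(pts[:-1], pts[1:]))
B = biot_savart(np.zeros(3), segs, [1.0] * len(segs))
```

The simulation described in the abstract performs this kind of summation over a few thousand segments per ground station and universal time step.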
Solidification kinetics of a Cu-Zr alloy: ground-based and microgravity experiments
NASA Astrophysics Data System (ADS)
Galenko, P. K.; Hanke, R.; Paul, P.; Koch, S.; Rettenmayr, M.; Gegner, J.; Herlach, D. M.; Dreier, W.; Kharanzhevski, E. V.
2017-04-01
Experimental and theoretical results obtained in the MULTIPHAS project (ESA-European Space Agency and DLR-German Aerospace Center) are critically discussed regarding the solidification kinetics of congruently melting and glass-forming Cu50Zr50 alloy samples. The samples are investigated during solidification using a containerless technique in the Electromagnetic Levitation Facility [1]. Applying elaborated methodologies for ground-based and microgravity experimental investigations [2], the kinetics of primary dendritic solidification is quantitatively evaluated. An Electromagnetic Levitator in microgravity (on parabolic flights and on board the International Space Station) and an Electrostatic Levitator on the ground are employed. The solidification kinetics is determined using a high-speed camera and two evaluation methods: “Frame by Frame” (FFM) and “First Frame - Last Frame” (FLM). In the theoretical interpretation of the solidification experiments, special attention is given to the behavior of the cluster structure in Cu50Zr50 samples with increasing undercooling. Experimental results on solidification kinetics are interpreted using a theoretical model of diffusion-controlled dendrite growth.
One-step model of photoemission from single-crystal surfaces
Karkare, Siddharth; Wan, Weishi; Feng, Jun; ...
2017-02-28
In our paper, we present a three-dimensional one-step photoemission model that can be used to calculate the quantum efficiency and momentum distributions of electrons photoemitted from ordered single-crystal surfaces close to the photoemission threshold. Using Ag(111) as an example, we show that the model not only calculates the quantum efficiency from the surface state accurately, without using any ad hoc parameters, but also provides a quantitative theoretical explanation of the vectorial photoelectric effect. This model, in conjunction with other band structure and wave function calculation techniques, can be effectively used to screen single-crystal photoemitters for use as electron sources for particle accelerator and ultrafast electron diffraction applications.
A mathematical model for CTL effect on a latently infected cell inclusive HIV dynamics and treatment
NASA Astrophysics Data System (ADS)
Tarfulea, N. E.
2017-10-01
This paper investigates theoretically and numerically the effect of immune effectors, such as the cytotoxic lymphocyte (CTL), in modeling HIV pathogenesis (via a newly developed mathematical model); our results suggest the significant impact of the immune response on the control of the virus during primary infection. Qualitative aspects (including positivity, boundedness, stability, uncertainty, and sensitivity analysis) are addressed. Additionally, by introducing drug therapy, we analyze numerically the model to assess the effect of treatment consisting of a combination of several antiretroviral drugs. Our results show that the inclusion of the CTL compartment produces a higher rebound for an individual's healthy helper T-cell compartment than drug therapy alone. Furthermore, we quantitatively characterize successful drugs or drug combination scenarios.
Rejniak, Katarzyna A.; Gerlee, Philip
2013-01-01
In this review we summarize our recent efforts using mathematical modeling and computation to simulate cancer invasion, with a special emphasis on the tumor microenvironment. We consider cancer progression as a complex multiscale process and approach it with three single-cell based mathematical models that examine the interactions between tumor microenvironment and cancer cells at several scales. The models exploit distinct mathematical and computational techniques, yet they share core elements and can be compared and/or related to each other. The overall aim of using mathematical models is to uncover the fundamental mechanisms that lend cancer progression its direction towards invasion and metastasis. The models effectively simulate various modes of cancer cell adaptation to the microenvironment in a growing tumor. All three point to a general mechanism underlying cancer invasion: competition for adaptation between distinct cancer cell phenotypes, driven by a tumor microenvironment with scarce resources. These theoretical predictions pose an intriguing experimental challenge: test the hypothesis that invasion is an emergent property of cancer cell populations adapting to selective microenvironment pressure, rather than the culmination of cancer progression producing cells with the “invasive phenotype”. In broader terms, we propose that fundamental insights into cancer can be achieved by experimentation interacting with theoretical frameworks provided by computational and mathematical modeling. PMID:18524624
NASA Astrophysics Data System (ADS)
Ivanova, Bojidarka; Spiteller, Michael
2017-12-01
The present paper deals with the quantitative kinetics and thermodynamics of collision-induced dissociation (CID) reactions of piperazines under different experimental conditions, together with a systematic description of the effect of counter-ions on common MS fragmentation reactions of piperazines and the intra-molecular effect of quaternary cyclization of substituted piperazines yielding quaternary salts. Quantitative model equations are discussed for the rate constants and Gibbs free energies of a series of m-independent CID fragmentation processes in the gas phase (GP), which have been evidenced experimentally. Both kinetic and thermodynamic parameters are also predicted by computational density functional theory (DFT) and ab initio methods, both static and dynamic. The paper also quantitatively examines the validity of the Maxwell-Boltzmann distribution for non-Boltzmann CID processes. The experiments conducted within the latter framework show excellent correspondence with theoretical quantum chemical modeling. An important property of the presented model equations of reaction kinetics is their applicability to predicting unknown and assigning known mass spectrometric (MS) patterns. The nature of the "GP" continuum in the CID-MS measurement scheme with an electrospray ionization (ESI) source is discussed by performing parallel computations in the gas phase and in a polar continuum at different temperatures and ionic strengths. The effect of pressure is presented. The study contributes significantly to the methodological and phenomenological development of CID-MS and its analytical implementation for quantitative and structural analyses. It also demonstrates the great promise of a complementary application of experimental CID-MS and computational quantum chemistry for studying chemical reactivity, among others. To a considerable extent, this work underlines the place of computational quantum chemistry in experimental analytical chemistry, in particular for structural analysis.
Collective behavior in animal groups: theoretical models and empirical studies
Giardina, Irene
2008-01-01
Collective phenomena in animal groups have attracted much attention in recent years, becoming one of the hottest topics in ethology. There are various reasons for this. On the one hand, animal grouping provides a paradigmatic example of self-organization, where collective behavior emerges in the absence of centralized control. The mechanism of group formation, where local rules for the individuals lead to a coherent global state, is very general and transcends the detailed nature of its components. In this respect, collective animal behavior is a subject of great interdisciplinary interest. On the other hand, there are several important issues related to the biological function of grouping and its evolutionary success. Research in this field boasts a number of theoretical models, but far fewer empirical results for comparison. For this reason, even if the general mechanisms through which self-organization is achieved are qualitatively well understood, a quantitative test of the models' assumptions is still lacking. New analyses of large groups, which require sophisticated technological procedures, can provide the necessary empirical data. PMID:19404431
Johnston, Iain G; Burgstaller, Joerg P; Havlicek, Vitezslav; Kolbe, Thomas; Rülicke, Thomas; Brem, Gottfried; Poulton, Jo; Jones, Nick S
2015-01-01
Dangerous damage to mitochondrial DNA (mtDNA) can be ameliorated during mammalian development through a highly debated mechanism called the mtDNA bottleneck. Uncertainty surrounding this process limits our ability to address inherited mtDNA diseases. We produce a new, physically motivated, generalisable theoretical model for mtDNA populations during development, allowing the first statistical comparison of proposed bottleneck mechanisms. Using approximate Bayesian computation and mouse data, we find most statistical support for a combination of binomial partitioning of mtDNAs at cell divisions and random mtDNA turnover, meaning that the debated exact magnitude of mtDNA copy number depletion is flexible. New experimental measurements from a wild-derived mtDNA pairing in mice confirm the theoretical predictions of this model. We analytically solve a mathematical description of this mechanism, computing probabilities of mtDNA disease onset, efficacy of clinical sampling strategies, and effects of potential dynamic interventions, thus developing a quantitative and experimentally-supported stochastic theory of the bottleneck. DOI: http://dx.doi.org/10.7554/eLife.07464.001 PMID:26035426
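The mechanism favoured by the analysis, binomial partitioning of mtDNAs at cell divisions combined with random turnover, can be illustrated with a toy stochastic simulation. The parameter values and the one-shot replication rule below are arbitrary stand-ins (not the fitted model from the paper); the point is the qualitative behaviour: mean heteroplasmy is preserved while its cell-to-cell variance grows with successive divisions.

```python
import numpy as np

def simulate_bottleneck(n0=200, h0=0.3, divisions=15, cells=5000, seed=1):
    """Toy mtDNA bottleneck: binomial partitioning at each division,
    then random replication restores the copy number to n0."""
    rng = np.random.default_rng(seed)
    m = np.full(cells, int(n0 * h0))       # mutant copies per cell lineage
    w = np.full(cells, n0 - int(n0 * h0))  # wild-type copies per lineage
    for _ in range(divisions):
        # each molecule independently segregates to the tracked daughter
        m = rng.binomial(m, 0.5)
        w = rng.binomial(w, 0.5)
        # random turnover/replication: new copies are drawn in proportion
        # to the cell's current mutant fraction until n0 is restored
        total = m + w
        h = m / np.maximum(total, 1)
        m = m + rng.binomial(n0 - total, h)
        w = n0 - m
    h_final = m / n0
    return h_final.mean(), h_final.var()

mean_h, var_h = simulate_bottleneck()
# mean heteroplasmy stays near h0 while the variance accumulates
```

Fitting a model of this family to measured heteroplasmy variances across developmental stages is, in outline, what the approximate Bayesian computation in the study does.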
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, X; Arbique, G; Guild, J
Purpose: To evaluate the quantitative image quality of spectral reconstructions of phantom data from a spectral CT scanner. Methods: The spectral CT scanner (IQon Spectral CT, Philips Healthcare) is equipped with a dual-layer detector and generates conventional 80-140 kVp images and a variety of spectral reconstructions, e.g., virtual monochromatic (VM) images, virtual non-contrast (VNC) images, iodine maps, and effective atomic number (Z) images. A cylindrical solid water phantom (Gammex 472, 33 cm diameter and 5 cm thick) with iodine (2.0-20.0 mg I/ml) and calcium (50-600 mg/ml) rod inserts was scanned at 120 kVp and 27 mGy CTDIvol. Spectral reconstructions were evaluated by comparing image measurements with theoretical values calculated from nominal rod compositions provided by the phantom manufacturer. The theoretical VNC was calculated using water and iodine basis material decomposition, and the theoretical Z was calculated using two common methods, the chemical formula method (Z1) and the dual-energy ratio method (Z2). Results: Beam-hardening-like artifacts between high-attenuation calcium rods (≥300 mg/ml, >800 HU) influenced quantitative measurements, so the quantitative analysis was only performed on iodine rods using the images from the scan with all the calcium rods removed. The CT numbers of the iodine rods in the VM images (50∼150 keV) were close to theoretical values, with an average difference of 2.4±6.9 HU. Compared with theoretical values, the average differences for iodine concentration, VNC CT number, and effective Z of the iodine rods were −0.10±0.38 mg/ml, −0.1±8.2 HU, 0.25±0.06 (Z1), and −0.23±0.07 (Z2). Conclusion: The results indicate that the spectral CT scanner generates quantitatively accurate spectral reconstructions at clinically relevant iodine concentrations. Beam-hardening-like artifacts still exist when high-attenuation objects are present, and their impact on patient images needs further investigation.
YY is an employee of Philips Healthcare.
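The water-iodine basis material decomposition behind the VNC and iodine-map reconstructions amounts, in its simplest form, to a two-energy linear solve. The attenuation coefficients below are made-up illustrative numbers, not the scanner's calibrated values:

```python
import numpy as np

# Two-material basis decomposition: measured attenuation at two energies is
# modelled as mu(E) = a_w * mu_water(E) + a_i * mu_iodine(E); solving the
# 2x2 system recovers the water and iodine contributions per voxel.
mu_water = np.array([0.25, 0.18])   # 1/cm at low/high energy (assumed values)
mu_iodine = np.array([4.0, 1.2])    # 1/cm (assumed values)
A = np.column_stack([mu_water, mu_iodine])

true_coeffs = np.array([1.0, 0.05])       # mostly water plus a little iodine
measured = A @ true_coeffs                # simulated two-energy measurement
recovered = np.linalg.solve(A, measured)  # basis decomposition
# recovered ≈ [1.0, 0.05]: the water part gives the VNC image,
# the iodine part the iodine map
```

In practice the decomposition is calibrated per scanner and regularized against noise; this sketch only shows the underlying algebra.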
Theoretical Study on Stress Sensitivity of Fractal Porous Media with Irreducible Water
NASA Astrophysics Data System (ADS)
Lei, Gang; Dong, Zhenzhen; Li, Weirong; Wen, Qingzhi; Wang, Cai
The coupled flow-deformation behavior in porous media has drawn tremendous attention in various scientific and engineering fields. However, although the coupled flow-deformation mechanism has been intensively investigated over the last decades, the essential controls on stress sensitivity have not been determined. It is of practical significance to use analytic methods to study the stress sensitivity of porous media. Unfortunately, because of the disordered and extremely complicated microstructures of porous media, theoretical models for stress sensitivity are scarce. The goal of this work is to establish a novel and reasonable quantitative model to determine the essential controls on stress sensitivity. The predictions of the theoretical model, derived from Hertzian contact theory and fractal geometry, agree well with the available experimental data. Compared with previous models, our model takes into account more factors, including the influence of water saturation and the microstructural parameters of the pore space. The proposed model reveals more of the mechanisms that affect the coupled flow-deformation behavior in fractal porous media. The results show that the irreducible water saturation increases with increasing effective stress, and decreases with increasing rock elastic modulus (or increasing power-law index) at a given effective stress. The effect of stress variation on porosity is smaller than that on permeability. Under a given effective stress, the normalized permeability (or normalized porosity) becomes smaller with decreasing rock elastic modulus (or decreasing power-law index). A lower capillary pressure corresponds to an increased rock elastic modulus (or an increased power-law index) at a given water saturation.
Regulating Availability: How Access to Alcohol Affects Drinking and Problems in Youth and Adults
Gruenewald, Paul J.
2011-01-01
Regulations on the availability of alcohol have been used to moderate alcohol problems in communities throughout the world for thousands of years. In the latter half of the 20th century, quantitative studies of the effects of these regulations on drinking and related problems began in earnest as public health practitioners began to recognize the full extent of the harmful consequences related to drinking. This article briefly outlines the history of this work over four areas, focusing on the minimum legal drinking age, the privatization of alcohol control systems, outlet densities, and hours and days of sale. Some historical background is provided to emphasize the theoretical and empirical roots of this work and to highlight the substantial progress that has been made in each area. In general, this assessment suggests that higher minimum legal drinking ages, greater monopoly controls over alcohol sales, lower outlet numbers and reduced outlet densities, and limited hours and days of sale can effectively reduce alcohol sales, use, and problems. There are, however, substantial gaps in the research literature and a near absence of the quantitative theoretical work needed to direct alcohol-control efforts. Local community responses to alcohol policies are complex and heterogeneous, sometimes reinforcing and sometimes mitigating the effects of availability regulations. Quantitative models of policy effects are essential to accelerate progress toward the formulation and testing of optimal control strategies for the reduction of alcohol problems. PMID:22330225
Cycling Empirical Antibiotic Therapy in Hospitals: Meta-Analysis and Models
Abel, Sören; Viechtbauer, Wolfgang; Bonhoeffer, Sebastian
2014-01-01
The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies came to different conclusions regarding the usefulness of rotating first-line therapy (cycling). Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and found that cycling reduced the incidence rate per 1000 patient days of total infections by 4.95 [9.43–0.48] and of resistant infections by 7.2 [14.00–0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate metaregressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing). We make the realistic assumption that therapy is changed when first-line treatment is ineffective, which we call “adjustable cycling/mixing”. In concordance with earlier theoretical studies, we find that in strict regimens, cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that “adjustable cycling” is especially useful to suppress the emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that overly long cycling periods are detrimental.
Our results suggest that “adjustable cycling” suppresses multiple resistance and warrants further investigations that allow comparing various diseases and hospital settings. PMID:24968123
Palaeostress perturbations near the El Castillo de las Guardas fault (SW Iberian Massif)
NASA Astrophysics Data System (ADS)
García-Navarro, Encarnación; Fernández, Carlos
2010-05-01
Use of stress inversion methods on faults measured at 33 sites in the northwestern part of the South Portuguese Zone (Variscan Iberian Massif), together with analysis of basic dyke attitudes in the same region, has revealed a prominent perturbation of the stress trajectories around some large, crustal-scale faults, like the El Castillo de las Guardas fault. The results are compared with the predictions of theoretical models of palaeostress deviations near master faults. According to this comparison, the El Castillo de las Guardas fault, an old structure that probably reversed its slip sense several times, can be considered a sinistral strike-slip fault during the Moscovian. These results also point out the main shortcomings that still hinder a rigorous quantitative use of theoretical models of stress perturbations around major faults: the spatial variation in the parameters governing the brittle behaviour of the continental crust, and the possibility of oblique slip along outcrop-scale faults in regions subjected to general, non-plane strain.
NASA Astrophysics Data System (ADS)
Ivanova, Bojidarka; Spiteller, Michael
2018-04-01
The problem considered in this paper concerns quantitative correlation model equations between experimental kinetic and thermodynamic parameters of electrospray ionization (ESI) or atmospheric pressure chemical ionization (APCI) mass spectrometry (MS) coupled with collision-induced dissociation mass spectrometry, accounting for the fact that the physical phenomena and mechanisms of ESI and APCI ion formation are completely different. Forty-two fragmentation reactions of three analytes are described under independent ESI and APCI measurements. The newly developed quantitative models allow us to study reaction kinetics and thermodynamics correlatively using mass spectrometric methods, whose complementary application with quantum chemical methods provides 3D structural information on the analytes. Both static and dynamic quantum chemical computations are carried out. The objects of analysis are [2,3-dimethyl-4-(4-methyl-benzoyl)-2,3-di-p-tolyl-cyclobutyl]-p-tolyl-methanone (1) and the polycyclic aromatic hydrocarbon derivatives of dibenzoperylene (2) and tetrabenzo[a,c,fg,op]naphthacene (3), respectively. Since (1) is known to be a product of [2π+2π] cycloaddition reactions of chalcone (1,3-di-p-tolyl-propenone), which yield cyclic derivatives with different stereoselectivity, the study provides crucial data on the capability of mass spectrometry to determine the stereoselectivity of the analytes. This work also provides the first quantitative treatment of the relations '3D molecular/electronic structure'-'quantum chemical diffusion coefficient'-'mass spectrometric diffusion coefficient', thus extending the capability of mass spectrometry for determination of the exact 3D structure of analytes using independent measurements and computations of the diffusion coefficients.
The experimental diffusion parameters are determined with the 'current monitoring method', evaluating the translational diffusion of charged analytes, while the theoretical modelling of MS ions and the computation of theoretical diffusion coefficients are based on Arrhenius-type behavior of the charged species under ESI and APCI conditions. Although the study provides sound considerations on the quantitative relations between reaction kinetics/thermodynamics and the 3D structure of the analytes, together with the correlations between 3D molecular/electronic structure, quantum chemical diffusion coefficient, and mass spectrometric diffusion coefficient, which contribute significantly to structural analytical chemistry, the results are also important to other areas such as organic synthesis and catalysis.
NASA Astrophysics Data System (ADS)
Chouinard, Christopher D.; Cruzeiro, Vinícius Wilian D.; Beekman, Christopher R.; Roitberg, Adrian E.; Yost, Richard A.
2017-08-01
Drift tube ion mobility coupled with mass spectrometry was used to investigate the gas-phase structure of 25-hydroxyvitamin D3 (25OHD3) and D2 (25OHD2) epimers, and to evaluate its potential in rapid separation of these compounds. Experimental results revealed two distinct drift species for the 25OHD3 sodiated monomer, whereas only one of these conformations was observed for its epimer (epi25OHD3). The unique species allowed 25OHD3 to be readily distinguished, and the same pattern was observed for 25OHD2 epimers. Theoretical modeling of 25OHD3 epimers identified energetically stable gas-phase structures, indicating that both compounds may adopt a compact "closed" conformation, but that 25OHD3 may also adopt a slightly less energetically favorable "open" conformation that is not accessible to its epimer. Calculated theoretical collision cross-sections for these structures agreed with experimental results to <2%. Experimentation indicated that additional energy in the ESI source (i.e., increased temperature, spray voltage) affected the ratio of 25OHD3 conformations, with the less energetically favorable "open" conformation increasing in relative intensity. Finally, LC-IM-MS results yielded linear quantitation of 25OHD3, in the presence of the epimer interference, at biologically relevant concentrations. This study demonstrates that ion mobility can be used in tandem with theoretical modeling to determine structural differences that contribute to drift separation. These separation capabilities provide potential for rapid (<60 ms) identification of 25OHD3 and 25OHD2 in mixtures with their epimers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1980-09-01
The application of reflective Nomarski differential interference contrast microscopy for the determination of quantitative sample topography data is presented. The discussion includes a review of key theoretical results presented previously, plus the experimental implementation of the concepts using a commercial Nomarski microscope. The experimental work included the modification and characterization of a commercial microscope to allow its use for obtaining quantitative sample topography data. System usage for the measurement of slopes on flat planar samples is also discussed. The discussion has been designed to provide the theoretical basis, physical insight, and a cookbook implementation procedure, so that these results will be of value both to those interested in the microscope theory and to those applying it in the metallography laboratory.
Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A
2012-02-28
A mathematical model of the controlled release of drug from one-layer torus-shaped devices is presented. Analytical solutions based on the Refined Integral Method (RIM) are derived. The validity and utility of the model are ascertained by comparison of the simulation results with experimental release data for matrix-type vaginal rings reported in the literature. For the comparisons, the pair-wise procedure is used to quantitatively measure the fit of the theoretical predictions to the experimental data. Good agreement between the model predictions and the experimental data is observed. A comparison with a previously reported model is also presented. More accurate results are achieved for small A/C(s) ratios. Copyright © 2011 Elsevier B.V. All rights reserved.
Hierarchy of models: From qualitative to quantitative analysis of circadian rhythms in cyanobacteria
NASA Astrophysics Data System (ADS)
Chaves, M.; Preto, M.
2013-06-01
A hierarchy of models, ranging from high to lower levels of abstraction, is proposed to construct "minimal" but predictive and explanatory models of biological systems. Three hierarchical levels are considered: Boolean networks, piecewise affine differential (PWA) equations, and a class of continuous ordinary differential equation models derived from the PWA model. This hierarchy provides different levels of approximation of the biological system and, crucially, allows the use of theoretical tools to more precisely analyze and understand the mechanisms of the system. The KaiABC oscillator, which is at the core of the cyanobacterial circadian rhythm, is analyzed as a case study, showing how several fundamental properties—order of oscillations, synchronization when mixing oscillating samples, structural robustness, and entrainment by external cues—can be obtained from basic mechanisms.
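The most abstract level of such a hierarchy can be illustrated with a toy synchronous Boolean network; the three-node negative-feedback loop below is a hypothetical example chosen for brevity, not the KaiABC model itself.

```python
# Minimal synchronous Boolean network: a three-node negative-feedback
# loop (A activates B, B activates C, C inhibits A). This is a toy
# illustration of the most abstract level of the model hierarchy.
def step(state):
    a, b, c = state
    return (not c, a, b)  # A <- NOT C, B <- A, C <- B

def trajectory(state, n):
    states = [state]
    for _ in range(n):
        state = step(state)
        states.append(state)
    return states

traj = trajectory((False, False, False), 12)

# The loop has no fixed point (a = NOT a has no Boolean solution), so
# every trajectory settles into a cycle: the Boolean analogue of a
# sustained oscillation such as a circadian rhythm.
all_states = [(a, b, c) for a in (False, True)
                        for b in (False, True)
                        for c in (False, True)]
fixed_points = [s for s in all_states if step(s) == s]
```

From a state like (False, False, False) the network returns to its starting state after six steps, the qualitative counterpart of a period in the finer-grained PWA and ODE levels.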
ULTRASONIC STUDIES OF THE FUNDAMENTAL MECHANISMS OF RECRYSTALLIZATION AND SINTERING OF METALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
TURNER, JOSEPH A.
2005-11-30
The purpose of this project was to develop a fundamental understanding of the interaction of an ultrasonic wave with complex media, with specific emphases on recrystallization and sintering of metals. A combined analytical, numerical, and experimental research program was implemented. Theoretical models of elastic wave propagation through these complex materials were developed using stochastic wave field techniques. The numerical simulations focused on finite element wave propagation solutions through complex media. The experimental efforts were focused on corroboration of the models developed and on the development of new experimental techniques. The analytical and numerical research allows the experimental results to be interpreted quantitatively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumilin, V. P.; Shumilin, A. V.; Shumilin, N. V., E-mail: vladimirshumilin@yahoo.com
2015-11-15
The paper is devoted to comparison of experimental data with theoretical predictions concerning the dependence of the current of accelerated ions on the operating voltage of a Hall thruster with an anode layer. The error made in the paper published by the authors in Plasma Phys. Rep. 40, 229 (2014) occurred because of a misprint in the Encyclopedia of Low-Temperature Plasma. In the present paper, this error is corrected. It is shown that the simple model proposed in the above-mentioned paper is in qualitative and quantitative agreement with experimental results.
Cell size control and homeostasis in bacteria
NASA Astrophysics Data System (ADS)
Bradde, Serena; Taheri, Sattar; Sauls, John; Hill, Nobert; Levine, Petra; Paulsson, Johan; Vergassola, Massimo; Jun, Suckjoon
2015-03-01
How cells control their size is a fundamental question in biology. Proposed mechanisms based on sensing size, time, or a combination of the two have so far lacked direct experimental support. By analysing the distributions of size at division, size at birth, and generation time of hundreds of thousands of Gram-negative E. coli and Gram-positive B. subtilis cells under a wide range of tightly controlled steady-state growth conditions, we are now in a position to validate different theoretical models. In this talk I will discuss all possible models in detail and present a general mechanism that quantitatively explains all measurable aspects of growth and cell division at both the population and single-cell levels.
Evaluation of synthetic linear motor-molecule actuation energetics
Brough, Branden; Northrop, Brian H.; Schmidt, Jacob J.; Tseng, Hsian-Rong; Houk, Kendall N.; Stoddart, J. Fraser; Ho, Chih-Ming
2006-01-01
By applying atomic force microscope (AFM)-based force spectroscopy together with computational modeling in the form of molecular force-field simulations, we have determined quantitatively the actuation energetics of a synthetic motor-molecule. This multidisciplinary approach was performed on specifically designed, bistable, redox-controllable [2]rotaxanes to probe the steric and electrostatic interactions that dictate their mechanical switching at the single-molecule level. The fusion of experimental force spectroscopy and theoretical computational modeling has revealed that the repulsive electrostatic interaction, which is responsible for the molecular actuation, is as high as 65 kcal·mol−1, a result that is supported by ab initio calculations. PMID:16735470
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. KEY FEATURES:
* Integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework
* Provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making
* Emphasizes the role of mathematical modeling in the conduct of science and management
* Utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples
Reed, Frances M; Fitzgerald, Les; Rae, Melanie
2016-01-01
To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.
Giraud, Nicolas; Blackledge, Martin; Goldman, Maurice; Böckmann, Anja; Lesage, Anne; Penin, François; Emsley, Lyndon
2005-12-28
A detailed analysis of nitrogen-15 longitudinal relaxation times in microcrystalline proteins is presented. A theoretical model to quantitatively interpret relaxation times is developed in terms of motional amplitude and characteristic time scale. Different averaging schemes are examined in order to propose an analysis of relaxation curves that takes into account the specificity of MAS experiments. In particular, it is shown that magic angle spinning averages the relaxation rate experienced by a single spin over one rotor period, resulting in individual relaxation curves that are dependent on the orientation of their corresponding carousel with respect to the rotor axis. Powder averaging thus leads to a nonexponential behavior in the observed decay curves. We extract dynamic information from experimental decay curves, using a diffusion in a cone model. We apply this study to the analysis of spin-lattice relaxation rates of the microcrystalline protein Crh at two different fields and determine differential dynamic parameters for several residues in the protein.
Rapid climate change and the rate of adaptation: insight from experimental quantitative genetics.
Shaw, Ruth G; Etterson, Julie R
2012-09-01
Evolution proceeds unceasingly in all biological populations. It is clear that climate-driven evolution has molded plants in deep time and within extant populations. However, it is less certain whether adaptive evolution can proceed sufficiently rapidly to maintain the fitness and demographic stability of populations subjected to exceptionally rapid contemporary climate change. Here, we consider this question, drawing on current evidence on the rate of plant range shifts and the potential for an adaptive evolutionary response. We emphasize advances in understanding based on theoretical studies that model interacting evolutionary processes, and we provide an overview of quantitative genetic approaches that can parameterize these models to provide more meaningful predictions of the dynamic interplay between genetics, demography and evolution. We outline further research that can clarify both the adaptive potential of plant populations as climate continues to change and the role played by ongoing adaptation in their persistence. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach for fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related both to auxiliary-component malfunctions and to stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is demonstrated through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves unambiguous fault isolation.
Graph theoretical model of a sensorimotor connectome in zebrafish.
Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan
2012-01-01
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
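The basic graph-theoretic quantities involved can be sketched on a hypothetical scaled-down analogue: n model neurons wired uniformly at random at the connection density reported above (2.45% of all possible directed connections). The uniform wiring rule is an assumption for illustration; the actual model uses literature-based stochastic rules.

```python
import random
random.seed(1)

# Hypothetical scaled-down analogue of the connectome graph: n model
# neurons, directed edges drawn uniformly at random at the reported
# density of 2.45%. Uniform wiring is an illustrative assumption.
n, density = 500, 0.0245
edges = {(i, j) for i in range(n) for j in range(n)
         if i != j and random.random() < density}

possible = n * (n - 1)                  # directed pairs, no self-loops
measured_density = len(edges) / possible
out_degree = [sum(1 for j in range(n) if (i, j) in edges)
              for i in range(n)]
mean_out_degree = sum(out_degree) / n   # equals measured_density*(n-1)
```

Comparing such quantities (density, degree distribution, clustering, path length) between the biological graph and same-size random graphs is exactly the small-world test described in the abstract.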
Measuring Aircraft Capability for Military and Political Analysis
1976-03-01
challenged in 1932 when a panel of distinguished British scientists discussed the feasibility of quantitatively estimating sensory events... Quantitative Analysis of Social Problems, E.R. Tufte (ed.), p. 407, Addison-Wesley, 1970. ... "artificial" boundaries are imposed on the data. ... of arms transfers in various parts of the world as well. Quantitative research (and hence measurement) contributes to theoretical development by
ERIC Educational Resources Information Center
Petty, John T.
1995-01-01
Describes an experiment that uses air to test Charles' law. Reinforces the student's intuitive feel for Charles' law with quantitative numbers they can see, introduces the idea of extrapolating experimental data to obtain a theoretical value, and gives a physical quantitative meaning to the concept of absolute zero. (JRH)
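The extrapolation step described above can be sketched numerically: fit a line to volume-versus-temperature data and find where the fitted volume reaches zero. The data points below are idealized (generated from Charles' law for one mole at 1 atm), not measurements from the classroom experiment.

```python
# Idealized Charles' law data: molar volume of an ideal gas at 1 atm,
# V = 22.414 L * (1 + t/273.15) for temperature t in deg C.
temps_c = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]
vols = [22.414 * (1 + t / 273.15) for t in temps_c]   # litres

# Ordinary least-squares fit of the line V = a + b*t.
n = len(temps_c)
mt = sum(temps_c) / n
mv = sum(vols) / n
b = sum((t - mt) * (v - mv) for t, v in zip(temps_c, vols)) / \
    sum((t - mt) ** 2 for t in temps_c)
a = mv - b * mt

# Extrapolate to V = 0: the intercept on the temperature axis is the
# estimate of absolute zero.
abs_zero_estimate = -a / b   # deg C
```

With noiseless data the fit recovers -273.15 °C exactly; with real student data the extrapolation gives a physically meaningful, if noisier, estimate of absolute zero.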
Zhang, Tisheng; Niu, Xiaoji; Ban, Yalong; Zhang, Hongping; Shi, Chuang; Liu, Jingnan
2015-01-01
A GNSS/INS deeply-coupled system can improve satellite signal tracking performance by INS-aiding the tracking loops under dynamics. However, no literature was available on the complete modeling of the INS branch in the INS-aided tracking loop, leaving no theoretical tool to guide the selection of inertial sensors, parameter optimization, and quantitative analysis of INS-aided PLLs. This paper addresses the modeling of the INS branch and the parameter optimization of phase-locked loops (PLLs) in a scalar-based GNSS/INS deeply-coupled system. It establishes the transfer function between all known error sources and the PLL tracking error, which can be used to quantitatively evaluate how a candidate inertial measurement unit (IMU) affects the carrier-phase tracking error. Based on that, a steady-state error model is proposed to design INS-aided PLLs and to analyze their tracking performance. Building on the modeling and error analysis, an integrated deeply-coupled hardware prototype is developed, with the aiding information optimized. Finally, the performance of the INS-aided PLLs designed with the proposed steady-state error model is evaluated through simulation and road tests of the hardware prototype. PMID:25569751
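The steady-state error idea can be illustrated with the textbook result for an unaided second-order PLL under a frequency ramp (a standard formula, not the paper's complete INS-branch model); the numbers below are illustrative and show the error that INS aiding is meant to remove.

```python
import math

# Textbook second-order PLL: a frequency ramp of R rad/s^2 at the
# input produces a steady-state phase error of R / wn^2 radians,
# where wn is the loop natural frequency. Illustrative only; this is
# not the paper's full INS-aided error model.
def steady_state_phase_error(accel_mps2, carrier_hz, bn_hz, zeta=0.707):
    c = 299792458.0  # speed of light, m/s
    # Line-of-sight acceleration maps to a Doppler frequency ramp.
    ramp = 2 * math.pi * carrier_hz * accel_mps2 / c   # rad/s^2
    # Noise bandwidth Bn = wn*(4*zeta^2 + 1)/(8*zeta) for this loop.
    wn = bn_hz * 8 * zeta / (4 * zeta ** 2 + 1)        # rad/s
    return ramp / wn ** 2                              # radians

# 1 g line-of-sight dynamics on the GPS L1 carrier with a 15 Hz loop:
err = steady_state_phase_error(9.8, 1575.42e6, 15.0)
```

A phase error of this size (a few tenths of a radian) degrades tracking; aiding the loop with INS-predicted Doppler removes most of the dynamics before they reach the PLL.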
Holmes, Tyson H.; Lewis, David B.
2014-01-01
Bayesian estimation techniques offer a systematic and quantitative approach for synthesizing data drawn from the literature to model immunological systems. As detailed here, the practitioner begins with a theoretical model and then sequentially draws information from source data sets and/or published findings to inform estimation of model parameters. Options are available to weigh these various sources of information differentially per objective measures of their corresponding scientific strengths. This approach is illustrated in depth through a carefully worked example for a model of decline in T-cell receptor excision circle content of peripheral T cells during development and aging. Estimates from this model indicate that 21 years of age is plausible for the developmental timing of mean age of onset of decline in T-cell receptor excision circle content of peripheral T cells. PMID:25179832
Humpback whale bioacoustics: From form to function
NASA Astrophysics Data System (ADS)
Mercado, Eduardo, III
This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.
The Causal Effects of Cultural Relevance: Evidence from an Ethnic Studies Curriculum
ERIC Educational Resources Information Center
Dee, Thomas S.; Penner, Emily K.
2017-01-01
An extensive theoretical and qualitative literature stresses the promise of instructional practices and content aligned with minority students' experiences. Ethnic studies courses provide an example of such "culturally relevant pedagogy" (CRP). Despite theoretical support, quantitative evidence on the effectiveness of these courses is…
NASA Astrophysics Data System (ADS)
Krasin, V. P.; Soyustova, S. I.
2018-07-01
Along with other liquid metals, liquid lithium-tin alloys can be considered as an alternative to solid plasma-facing components for a future fusion reactor. Parameters characterizing both the ability to retain hydrogen isotopes and the extraction of tritium from the liquid metal are therefore of particular importance. Theoretical correlations based on the coordination cluster model have been used to obtain Sieverts' constants for solutions of hydrogen in liquid Li-Sn alloys. The results of the theoretical computations are compared with previously published experimental values for two alloys of the Li-Sn system. The Butler equation, in combination with equations describing the thermodynamic potentials of a binary solution, is used to calculate the surface composition and surface tension of liquid Li-Sn alloys.
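Sieverts' law itself, the relation the computed constants feed into, is simple: for a dilute solution of a diatomic gas in a metal, the dissolved atomic concentration scales with the square root of the gas partial pressure. The constant below is an arbitrary illustrative value, not a Li-Sn result from the paper.

```python
import math

# Sieverts' law: c = Ks * sqrt(p) for a diatomic gas (H2 dissociates
# into atoms on dissolution). Ks here is illustrative, not a computed
# Li-Sn value.
def sieverts_concentration(ks, pressure_pa):
    return ks * math.sqrt(pressure_pa)

c1 = sieverts_concentration(1.0e-6, 100.0)   # arbitrary units
c2 = sieverts_concentration(1.0e-6, 400.0)
# Quadrupling the hydrogen pressure doubles the dissolved concentration.
```

A small Sieverts' constant means low hydrogen retention, which is the property of interest when weighing a liquid-metal wall against tritium-inventory limits.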
Xiao, Huapan; Chen, Zhi; Wang, Hairong; Wang, Jiuhong; Zhu, Nan
2018-02-19
Based on micro-indentation mechanics and the kinematics of grinding processes, theoretical formulas are deduced to calculate surface roughness (SR) and subsurface damage (SSD) depth. The SRs and SSD depths of a series of fused silica samples, prepared under different grinding parameters, are measured. Through experimental and theoretical analysis, the relationship between SR and SSD depth is discussed, and the effect of grinding parameters on both is investigated quantitatively. The results show that SR and SSD depth decrease with increasing wheel speed or with decreasing feed speed and cutting depth. The interaction effect between wheel speed and feed speed deserves particular attention. Furthermore, a relationship model between SSD depth and grinding parameters is established, which can be employed to evaluate SSD depth efficiently.
Jannasch, Anita; Mahamdeh, Mohammed; Schäffer, Erik
2011-11-25
The random thermal force acting on Brownian particles is often approximated in Langevin models by a "white-noise" process. However, fluid entrainment results in a frequency dependence of this thermal force giving it a "color." While theoretically well understood, direct experimental evidence for this colored nature of the noise term and how it is influenced by a nearby wall is lacking. Here, we directly measured the color of the thermal noise intensity by tracking a particle strongly confined in an ultrastable optical trap. All our measurements are in quantitative agreement with the theoretical predictions. Since Brownian motion is important for microscopic, in particular, biological systems, the colored nature of the noise and its distance dependence to nearby objects need to be accounted for and may even be utilized for advanced sensor applications.
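The baseline against which the "color" is detected is the white-noise prediction: for white thermal forcing, the position power spectral density of a trapped bead is a Lorentzian. The sketch below computes that baseline with typical order-of-magnitude parameters (not the experiment's values); hydrodynamic memory shows up as measured deviations from this curve.

```python
import math

# Lorentzian PSD of an optically trapped bead under white thermal
# forcing: S(f) = kB*T / (pi^2 * gamma * (fc^2 + f^2)), with corner
# frequency fc = kappa / (2*pi*gamma). All parameter values are
# typical orders of magnitude, not the paper's.
kB, T = 1.380649e-23, 295.0           # J/K, K
radius = 0.5e-6                       # m, bead radius
eta = 0.93e-3                         # Pa*s, water near room temperature
gamma = 6 * math.pi * eta * radius    # Stokes drag coefficient, kg/s
kappa = 5e-5                          # N/m, trap stiffness

fc = kappa / (2 * math.pi * gamma)    # corner frequency, Hz

def lorentzian_psd(f):
    return kB * T / (math.pi ** 2 * gamma * (fc ** 2 + f ** 2))

# By construction the PSD at f = fc is half its zero-frequency value.
ratio = lorentzian_psd(0.0) / lorentzian_psd(fc)
```

Colored (frequency-dependent) thermal noise and wall proximity both distort the measured spectrum away from this Lorentzian, which is exactly what ultrastable trapping makes measurable.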
Quantitative biology: where modern biology meets physical sciences.
Shekhar, Shashank; Zhu, Lian; Mazutis, Linas; Sgro, Allyson E; Fai, Thomas G; Podolski, Marija
2014-11-05
Quantitative methods and approaches have been playing an increasingly important role in cell biology in recent years. They involve making accurate measurements to test a predefined hypothesis in order to compare experimental data with predictions generated by theoretical models, an approach that has benefited physicists for decades. Building quantitative models in experimental biology not only has led to discoveries of counterintuitive phenomena but has also opened up novel research directions. To make the biological sciences more quantitative, we believe a two-pronged approach needs to be taken. First, graduate training needs to be revamped to ensure biology students are adequately trained in physical and mathematical sciences and vice versa. Second, students of both the biological and the physical sciences need to be provided adequate opportunities for hands-on engagement with the methods and approaches necessary to be able to work at the intersection of the biological and physical sciences. We present the annual Physiology Course organized at the Marine Biological Laboratory (Woods Hole, MA) as a case study for a hands-on training program that gives young scientists the opportunity not only to acquire the tools of quantitative biology but also to develop the necessary thought processes that will enable them to bridge the gap between these disciplines. © 2014 Shekhar, Zhu, Mazutis, Sgro, Fai, and Podolski. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Using representations in geometry: a model of students' cognitive and affective performance
NASA Astrophysics Data System (ADS)
Panaoura, Areti
2014-05-01
Self-efficacy beliefs in mathematics, as a dimension of the affective domain, are related with students' performance on solving tasks and mainly on overcoming cognitive obstacles. The present study investigated the interrelations of cognitive performance on geometry and young students' self-efficacy beliefs about using representations for solving geometrical tasks. The emphasis was on confirming a theoretical model for the primary-school and secondary-school students and identifying the differences and similarities for the two ages. A quantitative study was developed and data were collected from 1086 students in Grades 5-8. Confirmatory factor analysis affirmed the existence of a coherent model of affective dimensions about the use of representations for understanding the geometrical concepts, which becomes more stable across the educational levels.
Modeling the Losses of Dissolved CO2 from Laser-Etched Champagne Glasses.
Liger-Belair, Gérard
2016-04-21
Under standard champagne tasting conditions, the complex interplay between the level of dissolved CO2 found in champagne, its temperature, the glass shape, and the bubbling rate definitely impacts champagne tasting by modifying the neuro-physicochemical mechanisms responsible for aroma release and flavor perception. On the basis of theoretical principles combining heterogeneous bubble nucleation, ascending bubble dynamics, and mass transfer equations, a global model is proposed, depending on various parameters of both the wine and the glass itself, which quantitatively provides the progressive losses of dissolved CO2 from laser-etched champagne glasses. The question of champagne temperature was closely examined, and its role on the modeled losses of dissolved CO2 was corroborated by a set of experimental data.
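At its crudest, a loss rate proportional to the dissolved concentration gives exponential decay; the sketch below shows that limiting behavior with an illustrative rate constant, not the output of the paper's full nucleation and bubble-dynamics model.

```python
import math

# First-order loss sketch: if the combined bubbling + diffusion loss
# rate is proportional to the dissolved CO2 concentration, then
# c(t) = c0 * exp(-k*t). c0 is a typical dissolved CO2 level for
# champagne at serving; k is an illustrative effective rate, not a
# fitted value from the paper.
c0 = 11.6      # g/L, typical dissolved CO2 at serving
k = 5e-4       # 1/s, illustrative effective loss rate

def dissolved_co2(t_seconds):
    return c0 * math.exp(-k * t_seconds)

c_10min = dissolved_co2(600.0)   # concentration after 10 minutes
```

The full model replaces the single constant k with terms depending on wine temperature, glass geometry, and the laser-etched nucleation sites, which is why those parameters shift the measured loss curves.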
Viscoelastic Love-type surface waves
Borcherdt, Roger D.
2008-01-01
The general theoretical solution for Love-type surface waves in viscoelastic media provides theoretical expressions for the physical characteristics of the waves in elastic as well as anelastic media with arbitrary amounts of intrinsic damping. The general solution yields dispersion and absorption-coefficient curves for the waves as a function of frequency and the amount of intrinsic damping for any chosen viscoelastic model. Numerical results valid for a variety of viscoelastic models provide quantitative estimates of the physical characteristics of the waves pertinent to models of Earth materials, ranging from small amounts of damping in the Earth's crust to moderate and large amounts of damping in soft soils and water-saturated sediments. Numerical results, presented herein, are valid for a wide range of solids and applications.
Mean-field theory for pedestrian outflow through an exit.
Yanagisawa, Daichi; Nishinari, Katsuhiro
2007-12-01
The average pedestrian flow through an exit is one of the most important indices in evaluating pedestrian dynamics. In order to study the flow in detail, the floor field model, which is a crowd model using cellular automata, is extended by taking into account realistic behavior of pedestrians around the exit. The model is studied by both numerical simulations and cluster analysis to obtain a theoretical expression for the average pedestrian flow through the exit. It is found quantitatively that the effects of exit door width, the wall, and the pedestrian mood of competition or cooperation significantly influence the average flow. The results show that there is a suitable width and position of the exit according to the pedestrians' mood.
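The competition-versus-cooperation effect can be caricatured with a one-cell conflict model in the spirit of floor-field CA studies (a hypothetical toy, not the paper's model): when several pedestrians compete for the exit cell, the conflict blocks everyone with "friction" probability mu, and otherwise exactly one pedestrian passes.

```python
import random
random.seed(0)

# Toy conflict model: each time step, competing pedestrians either
# deadlock (probability mu, the friction parameter) or exactly one of
# them passes through the exit. mu encodes the crowd's mood: low for
# cooperation, high for competition. This is a caricature, not the
# paper's floor field model.
def average_flow(mu, steps=20000):
    passed = 0
    for _ in range(steps):
        if random.random() >= mu:   # conflict resolved this step
            passed += 1
    return passed / steps           # pedestrians per time step

cooperative = average_flow(0.1)
competitive = average_flow(0.6)
```

Even this caricature reproduces the headline qualitative result: a competitive crowd (high friction) pushes the average flow below that of a cooperative one.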
Metallicity Differences in Type Ia Supernova Progenitors Inferred from Ultraviolet Spectra
NASA Astrophysics Data System (ADS)
Foley, Ryan J.; Kirshner, Robert P.
2013-05-01
Two "twin" Type Ia supernovae (SNe Ia), SNe 2011by and 2011fe, have extremely similar optical light-curve shapes, colors, and spectra, yet have different ultraviolet (UV) continua as measured in Hubble Space Telescope spectra and measurably different peak luminosities. We attribute the difference in the UV continua to significantly different progenitor metallicities. This is the first robust detection of different metallicities for SN Ia progenitors. Theoretical reasoning suggests that differences in metallicity also lead to differences in luminosity. SNe Ia with higher progenitor metallicities have lower 56Ni yields and lower luminosities for the same light-curve shape. SNe 2011by and 2011fe have different peak luminosities (ΔM_V ≈ 0.6 mag), which correspond to different 56Ni yields: M(56Ni)_11fe / M(56Ni)_11by = 1.7 (+0.7, −0.5). From theoretical models that account for different neutron-to-proton ratios in progenitors, the differences in 56Ni yields for SNe 2011by and 2011fe imply that their progenitor stars were above and below solar metallicity, respectively. Although we can distinguish progenitor metallicities in a qualitative way from UV data, the quantitative interpretation in terms of abundances is limited by the present state of theoretical models.
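The central value of the quoted yield ratio follows from simple arithmetic: near peak, SN Ia luminosity scales roughly with 56Ni mass (Arnett's rule), so a peak magnitude difference ΔM maps to a mass ratio of 10^(0.4·ΔM).

```python
# Peak luminosity ~ 56Ni mass (Arnett's rule), so the magnitude
# difference converts directly to a mass ratio:
delta_mv = 0.6                           # mag, SN 2011fe brighter
ni_mass_ratio = 10 ** (0.4 * delta_mv)   # M(56Ni)_11fe / M(56Ni)_11by
# About 1.74, matching the quoted central value of 1.7 (+0.7, -0.5);
# the quoted uncertainties come from the luminosity errors, not shown.
```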
Knott, Brandon C.; Nimlos, Claire T.; Robichaud, David J.; ...
2017-12-11
Research efforts in zeolite catalysis have become increasingly cognizant of the diversity in structure and function resulting from the distribution of framework aluminum atoms, through emerging reports of catalytic phenomena that fall outside those recognizable as the shape-selective ones emblematic of its earlier history. Molecular-level descriptions of how active-site distributions affect catalysis are an aspirational goal articulated frequently in experimental and theoretical research, yet they are limited by imprecise knowledge of the structure and behavior of the zeolite materials under interrogation. In experimental research, higher precision can result from more reliable control of structure during synthesis and from more robust and quantitative structural and kinetic characterization probes. In theoretical research, construction of models with specific aluminum locations and distributions seldom capture the heterogeneity inherent to the materials studied by experiment. In this Perspective, we discuss research findings that appropriately frame the challenges in developing more predictive synthesis-structure-function relations for zeolites, highlighting studies on ZSM-5 zeolites that are among the most structurally complex molecular sieve frameworks and the most widely studied because of their versatility in commercial applications. We discuss research directions to address these challenges and forge stronger connections between zeolite structure, composition, and active sites to catalytic function. Such connections promise to aid in bridging the findings of theoretical and experimental catalysis research, and transforming zeolite active site design from an empirical endeavor into a more predictable science founded on validated models.
An Applied Physicist Does Econometrics
NASA Astrophysics Data System (ADS)
Taff, L. G.
2010-02-01
The biggest problem facing those attempting to understand econometric data via modeling is that economics has no F = ma. Without a theoretical underpinning, econometricians have no way to build a good model to fit observations to. Physicists do, and when F = ma failed, we knew it. Still desiring to comprehend econometric data, applied economists turn to mis-applied probability theory---especially with regard to the assumptions concerning random errors---and to extremely simplistic analytical formulations of inter-relationships. This introduces model bias to an unknown degree. An applied physicist, used to matching observations to a numerical or analytical model with a firm theoretical basis, modifying the model, re-performing the analysis, and then knowing why, and when, to delete ``outliers'', is at a considerable advantage when quantitatively analyzing econometric data. I treat two cases. One is to determine the household density distribution of total assets, annual income, age, level of education, race, and marital status. Each of these ``independent'' variables is highly correlated with every other, but only current annual income and level of education follow a linear relationship. The other is to discover the functional dependence of total assets on the distribution of assets: total assets have an amazingly tight power-law dependence on a quadratic function of portfolio composition. Who knew?
The Role of Spatially Controlled Cell Proliferation in Limb Bud Morphogenesis
Boehm, Bernd; Westerberg, Henrik; Lesnicar-Pucko, Gaja; Raja, Sahdia; Rautschka, Michael; Cotterell, James; Swoger, Jim; Sharpe, James
2010-01-01
Although the vertebrate limb bud has been studied for decades as a model system for spatial pattern formation and cell specification, the cellular basis of its distally oriented elongation has been a relatively neglected topic by comparison. The conventional view is that a gradient of isotropic proliferation exists along the limb, with high proliferation rates at the distal tip and lower rates towards the body, and that this gradient is the driving force behind outgrowth. Here we test this hypothesis by combining quantitative empirical data sets with computer modelling to assess the potential role of spatially controlled proliferation rates in the process of directional limb bud outgrowth. In particular, we generate two new empirical data sets for the mouse hind limb—a numerical description of shape change and a quantitative 3D map of cell cycle times—and combine these with a new 3D finite element model of tissue growth. By developing a parameter optimization approach (which explores spatial patterns of tissue growth) our computer simulations reveal that the observed distribution of proliferation rates plays no significant role in controlling the distally extending limb shape, and suggests that directional cell activities are likely to be the driving force behind limb bud outgrowth. This theoretical prediction prompted us to search for evidence of directional cell orientations in the limb bud mesenchyme, and we thus discovered a striking highly branched and extended cell shape composed of dynamically extending and retracting filopodia, a distally oriented bias in Golgi position, and also a bias in the orientation of cell division. We therefore provide both theoretical and empirical evidence that limb bud elongation is achieved by directional cell activities, rather than a PD gradient of proliferation rates. PMID:20644711
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. 
The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.
Scaling behavior of columnar structure during physical vapor deposition
NASA Astrophysics Data System (ADS)
Meese, W. J.; Lu, T.-M.
2018-02-01
The statistical effects that different conditions in physical vapor deposition, such as sputter deposition, have on thin-film morphology have long been a subject of interest. One notable effect is the development of columns with differing chamber pressure, captured in Thornton's well-known empirical structure zone model. The model is qualitative in nature, and a theoretical understanding with quantitative predictions of the morphology is still lacking, due in part to the absence of a quantitative description of the incident flux distribution on the growth front. In this work, we propose an incident Gaussian flux model developed from a series of binary hard-sphere collisions and simulate its effects using Monte Carlo methods and a solid-on-solid growth scheme. We also propose an approximate cosine-power distribution for faster Monte Carlo sampling. With this model, we observe that higher chamber pressures widen the average deposition angle, and similarly increase the growth of column diameters (or lateral correlation length) and the column-to-column separation (film surface wavelength). We treat both the column diameter and the surface wavelength as power laws. Both the column-diameter exponent and the wavelength exponent are very sensitive to changes in pressure at low pressures (0.13 Pa to 0.80 Pa); meanwhile, both exponents saturate around a value of 0.6 at higher pressures (0.80 Pa to 6.7 Pa). These predictions will serve as guides for future experiments seeking a quantitative description of film morphology under a wide range of vapor pressures.
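The inverse-transform trick commonly used to sample polar emission angles from a cosine-power flux can be sketched as follows (a minimal illustration; the paper's actual flux model, its Gaussian form, and its parameters are not reproduced here, and the exponent value is an assumption):

```python
import math
import random

def sample_cosine_power_angle(n, rng=random):
    """Polar angle theta drawn from pdf(theta) ∝ cos(theta)**n * sin(theta)
    on [0, pi/2], via inverse transform: cos(theta) = u**(1/(n+1)).
    (This exact pdf and the exponent n are illustrative assumptions.)"""
    u = rng.random()
    return math.acos(u ** (1.0 / (n + 1)))

# Larger n concentrates the flux near normal incidence.
random.seed(7)
angles = [sample_cosine_power_angle(5) for _ in range(50_000)]
mean_cos = sum(math.cos(t) for t in angles) / len(angles)
# Analytically, E[cos(theta)] = (n + 1) / (n + 2) = 6/7 for n = 5.
```

Because the cumulative distribution of cos^n(θ)·sin(θ) inverts in closed form, each angle costs only one uniform draw, which is what makes such approximations attractive in Monte Carlo deposition runs.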
What physicists should learn about finance (if they want to)
NASA Astrophysics Data System (ADS)
Schmidt, Anatoly
2006-03-01
There has been growing interest among physicists in Econophysics, i.e. the analysis and modeling of financial and economic processes using the concepts of theoretical physics. There has also been a perception that the financial industry is a viable alternative for those physicists who are not able, or not willing, to pursue a career in their major field. However, Wall Street now expects applicants for quantitative positions to know not only stochastic calculus and the methods of time series analysis but also such concepts as option pricing, portfolio management, and risk measurement. Here I describe a synthetic course based on my book ``Quantitative Finance for Physicists'' (Elsevier, 2004) that outlines both worlds: Econophysics and Mathematical Finance. This course may be offered as an elective for senior undergraduate or graduate Physics majors.
Quantitative image analysis of WE43-T6 cracking behavior
NASA Astrophysics Data System (ADS)
Ahmad, A.; Yahya, Z.
2013-06-01
Environment-assisted cracking of WE43 cast magnesium (4.2 wt.% Yt, 2.3 wt.% Nd, 0.7% Zr, 0.8% HRE) in the T6 peak-aged condition was induced in ambient air in notched specimens. The fracture mechanism was studied using electron backscatter diffraction, serial sectioning, and in situ observations of crack propagation. The intermetallic material (rare earth-enriched divorced intermetallic retained at grain boundaries, predominantly at triple points) was found to play a significant role in initiating the cracks that lead to failure of this material. Quantitative measurements were required for this project. The populations of intermetallic particles and of clusters of intermetallic particles were analyzed using image analysis of metallographic images. This is part of the work to generate a theoretical model of the effect of notch geometry on the static fatigue strength of this material.
Analysis techniques for tracer studies of oxidation. M. S. Thesis Final Report
NASA Technical Reports Server (NTRS)
Basu, S. N.
1984-01-01
Analysis techniques to obtain quantitative diffusion data from tracer concentration profiles were developed. Mass balance ideas were applied to determine the mechanism of oxide growth and to separate the fractions of inward and outward growth of oxide scales. The process of inward oxygen diffusion with exchange was theoretically modelled, and the effects of lattice diffusivity, grain boundary diffusivity, and grain size on the tracer concentration profile were studied. The development of the tracer concentration profile in a growing oxide scale was simulated. The double oxidation technique was applied to a FeCrAl-Zr alloy using O-18 as a tracer. SIMS was used to obtain the tracer concentration profile. The formation of lacy oxide on the alloy was discussed. Careful consideration was given to the quality of data required to obtain quantitative information.
Tungsten Transport in the Core of JET H-mode Plasmas, Experiments and Modelling
NASA Astrophysics Data System (ADS)
Angioni, Clemente
2014-10-01
The physics of heavy impurity transport in tokamak plasmas plays an essential role towards the achievement of practical fusion energy. Reliable predictions of the behavior of these impurities require the development of realistic theoretical models and a complete understanding of present experiments, against which models can be validated. Recent experimental campaigns at JET with the ITER-like wall, with a W divertor, provide an extremely interesting and relevant opportunity to perform this combined experimental and theoretical research. Theoretical models of both neoclassical and turbulent transport must consistently include the impact of any poloidal asymmetry of the W density to enable quantitative predictions of the 2D W density distribution over the poloidal cross section. The agreement between theoretical predictions and experimentally reconstructed 2D W densities allows the identification of the main mechanisms which govern W transport in the core of JET H-mode plasmas. Neoclassical transport is largely enhanced by centrifugal effects and the neoclassical convection dominates, leading to central accumulation in the presence of central peaking of the density profiles and insufficiently peaked ion temperature profiles. The strength of the neoclassical temperature screening is affected by poloidal asymmetries. Only around mid-radius, turbulent diffusion offsets neoclassical transport. Consistently with observations in other devices, ion cyclotron resonance heating in the plasma center can flatten the electron density profile and peak the ion temperature profile and provide a means to reverse the neoclassical convection. MHD activity may hamper or speed up the accumulation process depending on mode number and plasma conditions. Finally, the relationship of JET results to a parallel modelling activity of the W behavior in the core of ASDEX Upgrade plasmas is presented. 
This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement Number 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
NMR relaxation induced by iron oxide particles: testing theoretical models.
Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L
2016-04-15
Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast, but also because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, were measured and compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, and this constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.
Theoretical modeling of low-energy electronic absorption bands in reduced cobaloximes
Bhattacharjee, Anirban; Chavarot-Kerlidou, Murielle; Dempsey, Jillian L.; ...
2014-08-11
Here, we report that the reduced Co(I) states of cobaloximes are powerful nucleophiles that play an important role in the hydrogen-evolving catalytic activity of these species. In this work we have analyzed the low energy electronic absorption bands of two cobaloxime systems experimentally and using a variety of density functional theory and molecular orbital ab initio quantum chemical approaches. Overall we find a reasonable qualitative understanding of the electronic excitation spectra of these compounds but show that obtaining quantitative results remains a challenging task.
Common Lognormal Behavior in Legal Systems
NASA Astrophysics Data System (ADS)
Yamamoto, Ken
2017-07-01
This study characterizes a statistical property of legal systems: the distribution of the number of articles in a law follows a lognormal distribution. This property is common to the Japanese, German, and Singaporean laws. To explain this lognormal behavior, the tree structure of the laws is analyzed. If the depth of a tree follows a normal distribution, the lognormal distribution of the number of articles can be derived theoretically. We analyze the structure of the Japanese laws using chapters, sections, and other levels of organization, and this analysis demonstrates that the proposed model is quantitatively reasonable.
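The multiplicative mechanism behind this result can be sketched in a few lines (a hypothetical toy model: the branching factor b, the depth parameters, and the sample size are illustrative assumptions, not values from the study):

```python
import math
import random

random.seed(0)

# Toy mechanism: if a law's hierarchy depth D is normally distributed and
# each level multiplies the article count by a fixed branching factor b,
# then N = b**D is lognormal, because log N = D * log b is normal.
b = 3.0
depths = [random.gauss(4.0, 1.0) for _ in range(100_000)]
articles = [b ** d for d in depths]

logs = [math.log(n) for n in articles]
mean = sum(logs) / len(logs)
std = (sum((x - mean) ** 2 for x in logs) / len(logs)) ** 0.5
# In expectation, mean ≈ 4 * ln(3) ≈ 4.39 and std ≈ ln(3) ≈ 1.10.
```

The sample mean and standard deviation of log N match the normal distribution of D scaled by log b, which is the lognormal signature the paper reports for article counts.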
Scratching as a Fracture Process: From Butter to Steel
NASA Astrophysics Data System (ADS)
Akono, A.-T.; Reis, P. M.; Ulm, F.-J.
2011-05-01
We present results of a hybrid experimental and theoretical investigation of the fracture scaling in scratch tests and show that scratching is a fracture dominated process. Validated for paraffin wax, cement paste, Jurassic limestone and steel, we derive a model that provides a quantitative means to relate quantities measured in scratch tests to fracture properties of materials at multiple scales. The scalability of scratching for different probes and depths opens new venues towards miniaturization of our technique, to extract fracture properties of materials at even smaller length scales.
An introduction to genetic quality in the context of sexual selection.
Pitcher, Trevor E; Mays, Herman L
2008-09-01
This special issue of Genetica brings together empirical researchers and theoreticians to present the latest on the evolutionary ecology of genetic quality in the context of sexual selection. The work comes from different fields of study including behavioral ecology, quantitative genetics and molecular genetics on a diversity of organisms using different approaches from comparative studies, mathematical modeling, field studies and laboratory experiments. The papers presented in this special issue primarily focus on genetic quality in relation to (1) sources of genetic variation, (2) polyandry, (3) new theoretical developments and (4) comprehensive reviews.
NASA Astrophysics Data System (ADS)
Förtsch, Christian; Dorfner, Tobias; Baumgartner, Julia; Werner, Sonja; von Kotzebue, Lena; Neuhaus, Birgit J.
2018-04-01
The German National Education Standards (NES) for biology were introduced in 2005. The content part of the NES emphasizes fostering conceptual knowledge. However, there are hardly any indications of what such an instructional implementation could look like. We introduce a theoretical framework of an instructional approach to foster students' conceptual knowledge as demanded in the NES (Fostering Conceptual Knowledge) including instructional practices derived from research on single core ideas, general psychological theories, and biology-specific features of instructional quality. First, we aimed to develop a rating manual, which is based on this theoretical framework. Second, we wanted to describe current German biology instruction according to this approach and to quantitatively analyze its effectiveness. And third, we aimed to provide qualitative examples of this approach to triangulate our findings. In a first step, we developed a theoretically devised rating manual to measure Fostering Conceptual Knowledge in videotaped lessons. Data for quantitative analysis included 81 videotaped biology lessons of 28 biology teachers from different German secondary schools. Six hundred forty students completed a questionnaire on their situational interest after each lesson and an achievement test. Results from multilevel modeling showed significant positive effects of Fostering Conceptual Knowledge on students' achievement and situational interest. For qualitative analysis, we contrasted instruction of four teachers, two with high and two with low student achievement and situational interest using the qualitative method of thematic analysis. Qualitative analysis revealed five main characteristics describing Fostering Conceptual Knowledge. Therefore, implementing Fostering Conceptual Knowledge in biology instruction seems promising. Examples of how to implement Fostering Conceptual Knowledge in instruction are shown and discussed.
Dual-Enrollment High-School Graduates' College-Enrollment Considerations
ERIC Educational Resources Information Center
Damrow, Roberta J.
2017-01-01
This quantitative study examined college enrollment considerations of dual-enrollment students enrolling at one Wisconsin credit-granting technical college. A combined college-choice theoretical framework guided this quantitative study that addressed two research questions: To what extent, if any, did the number of dual credits predict likelihood…
ERIC Educational Resources Information Center
Tan, Cheng Yong
2017-01-01
The present study reviewed quantitative empirical studies examining the relationship between cultural capital and student achievement. Results showed that researchers had conceptualized and measured cultural capital in different ways. It is argued that the more holistic understanding of the construct beyond highbrow cultural consumption must be…
Zou, Jiaqi; Li, Na
2013-09-01
Proper design of nucleic acid sequences is crucial for many applications. We have previously established a thermodynamics-based quantitative model to help design aptamer-based nucleic acid probes by predicting equilibrium concentrations of all interacting species. To facilitate customization of this thermodynamic model for different applications, here we present a generic and easy-to-use platform to implement the algorithm of the model with Microsoft® Excel formulas and VBA (Visual Basic for Applications) macros. Two Excel spreadsheets have been developed: one for the applications involving only nucleic acid species, the other for the applications involving both nucleic acid and non-nucleic acid species. The spreadsheets take the nucleic acid sequences and the initial concentrations of all species as input, guide the user to retrieve the necessary thermodynamic constants, and finally calculate equilibrium concentrations for all species in various bound and unbound conformations. The validity of both spreadsheets has been verified by comparing the modeling results with the experimental results on nucleic acid sequences reported in the literature. This Excel-based platform described here will allow biomedical researchers to rationalize the sequence design of nucleic acid probes using the thermodynamics-based modeling even without relevant theoretical and computational skills. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
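The kind of equilibrium calculation such spreadsheets automate can be illustrated, for the simplest case of one probe-target pair, by the standard quadratic solution of A + B ⇌ AB (a one-reaction sketch under that simplifying assumption; the actual model couples many species and conformations):

```python
import math

def bound_concentration(a_total, b_total, kd):
    """Equilibrium [AB] for a single binding reaction A + B <=> AB with
    dissociation constant kd, from the standard quadratic solution of
    (a_total - ab) * (b_total - ab) = kd * ab."""
    s = a_total + b_total + kd
    return (s - math.sqrt(s * s - 4.0 * a_total * b_total)) / 2.0

# Tight binder (kd far below the concentrations): nearly all of A is bound.
ab = bound_concentration(1.0, 2.0, 1e-9)
```

The physically meaningful root is the smaller one, which keeps [AB] below both total concentrations; multi-species models solve the analogous coupled mass-action equations numerically.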
Information measures for terrain visualization
NASA Astrophysics Data System (ADS)
Bonaventura, Xavier; Sima, Aleksandra A.; Feixas, Miquel; Buckley, Simon J.; Sbert, Mateu; Howell, John A.
2017-02-01
Many quantitative and qualitative studies in geoscience research are based on digital elevation models (DEMs) and 3D surfaces to aid understanding of natural and anthropogenically-influenced topography. As well as their quantitative uses, the visual representation of DEMs can add valuable information for identifying and interpreting topographic features. However, choice of viewpoints and rendering styles may not always be intuitive, especially when terrain data are augmented with digital image texture. In this paper, an information-theoretic framework for object understanding is applied to terrain visualization and terrain view selection. From a visibility channel between a set of viewpoints and the component polygons of a 3D terrain model, we obtain three polygonal information measures. These measures are used to visualize the information associated with each polygon of the terrain model. In order to enhance the perception of the terrain's shape, we explore the effect of combining the calculated information measures with the supplementary digital image texture. From polygonal information, we also introduce a method to select a set of representative views of the terrain model. Finally, we evaluate the behaviour of the proposed techniques using example datasets. A publicly available framework for both the visualization and the view selection of a terrain has been created in order to provide the possibility to analyse any terrain model.
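One simple member of this family of measures is the Shannon entropy of the visible-area distribution seen from a viewpoint (a sketch only; the three polygonal measures derived from the paper's visibility channel are more elaborate):

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy (in bits) of the distribution of visible projected
    polygon areas from one viewpoint. A classical view-selection measure,
    shown here to illustrate the flavour of polygonal information."""
    total = sum(projected_areas)
    probs = [a / total for a in projected_areas if a > 0]
    return -sum(p * math.log2(p) for p in probs)

# Four equally visible polygons carry 2 bits; occlusion lowers the entropy.
print(viewpoint_entropy([1, 1, 1, 1]))  # 2.0
```

Viewpoints that balance visible area across many polygons score higher, which is one rationale for entropy-style measures in representative-view selection.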
Fractional calculus phenomenology in two-dimensional plasma models
NASA Astrophysics Data System (ADS)
Gustafson, Kyle; Del Castillo Negrete, Diego; Dorland, Bill
2006-10-01
Transport processes in confined plasmas for fusion experiments, such as ITER, are not well understood at the basic level of fully nonlinear, three-dimensional kinetic physics. Turbulent transport is invoked to describe the observed levels in tokamaks, which are orders of magnitude greater than the theoretical predictions. Recent results show the ability of a non-diffusive transport model to describe numerical observations of turbulent transport. For example, resistive MHD modeling of tracer particle transport in pressure-gradient-driven turbulence for a three-dimensional plasma reveals that the superdiffusive (⟨x^2⟩ ~ t^α with α > 1) radial transport in this system is described quantitatively by a fractional diffusion equation. Fractional calculus is a generalization involving integro-differential operators, which naturally describe non-local behaviors. Our previous work showed the quantitative agreement of special fractional diffusion equation solutions with numerical tracer particle flows in time-dependent linearized dynamics of the Hasegawa-Mima equation (for poloidal transport in a two-dimensional cold-ion plasma). In pursuit of a fractional diffusion model for transport in a gyrokinetic plasma, we now present numerical results from tracer particle transport in the nonlinear Hasegawa-Mima equation and a planar gyrokinetic model. Finite Larmor radius effects will be discussed. D. del Castillo Negrete, et al., Phys. Rev. Lett. 94, 065003 (2005).
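The anomalous exponent α in ⟨x^2⟩ ~ t^α is typically estimated from tracer statistics by a log-log fit of the mean square displacement; a minimal sketch on synthetic data (the exponent, noise level, and time range are illustrative assumptions, not simulation values):

```python
import math
import random

random.seed(1)

# Synthetic MSD data with a known superdiffusive exponent.
true_alpha = 1.5
ts = [float(t) for t in range(1, 101)]
msd = [t ** true_alpha * math.exp(random.gauss(0.0, 0.05)) for t in ts]

# Least-squares slope in log-log space recovers alpha.
xs = [math.log(t) for t in ts]
ys = [math.log(m) for m in msd]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
alpha_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
# alpha_hat is close to 1.5; alpha > 1 signals superdiffusion.
```

In practice the fitted α > 1 is what motivates replacing the ordinary diffusion operator with a fractional one, since a diffusive model forces α = 1.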
Focus Article: Theoretical aspects of vapor/gas nucleation at structured surfaces
NASA Astrophysics Data System (ADS)
Meloni, Simone; Giacomello, Alberto; Casciola, Carlo Massimo
2016-12-01
Heterogeneous nucleation is the preferential means of formation of a new phase. Gas and vapor nucleation in fluids under confinement or at textured surfaces is central for many phenomena of technological relevance, such as bubble release, cavitation, and biological growth. Understanding and developing quantitative models for nucleation is the key to control how bubbles are formed and to exploit them in technological applications. An example is the in silico design of textured surfaces or particles with tailored nucleation properties. However, despite the fact that gas/vapor nucleation has been investigated for more than one century, many aspects still remain unclear and a quantitative theory is still lacking; this is especially true for heterogeneous systems with nanoscale corrugations, for which experiments are difficult. The objective of this focus article is analyzing the main results of the last 10-20 years in the field, selecting few representative works out of this impressive body of the literature, and highlighting the open theoretical questions. We start by introducing classical theories of nucleation in homogeneous and in simple heterogeneous systems and then discuss their extension to complex heterogeneous cases. Then we describe results from recent theories and computer simulations aimed at overcoming the limitations of the simpler theories by considering explicitly the diffuse nature of the interfaces, atomistic, kinetic, and inertial effects.
Jarnuczak, Andrew F.; Eyers, Claire E.; Schwartz, Jean‐Marc; Grant, Christopher M.
2015-01-01
Molecular chaperones play an important role in protein homeostasis and the cellular response to stress. In particular, the HSP70 chaperones in yeast mediate a large volume of protein folding through transient associations with their substrates. This chaperone interaction network can be disturbed by various perturbations, such as environmental stress or a gene deletion. Here, we consider deletions of two major chaperone proteins, SSA1 and SSB1, from the chaperone network in Saccharomyces cerevisiae. We employ a SILAC-based approach to examine changes in global and local protein abundance and rationalise our results via network analysis and graph theoretical approaches. Although the deletions result in an overall increase in intracellular protein content, correlated with an increase in cell size, this is not matched by substantial changes in individual protein concentrations. Despite the phenotypic robustness to deletion of these major hub proteins, it cannot be simply explained by the presence of paralogues. Instead, network analysis and a theoretical consideration of folding workload suggest that the robustness to perturbation is a product of the overall network structure. This highlights how quantitative proteomics and systems modelling can be used to rationalise emergent network properties, and how the HSP70 system can accommodate the loss of major hubs. PMID:25689132
Tanase, Mihai; Waliszewski, Przemyslaw
2015-12-01
We propose a novel approach for the quantitative evaluation of aggressiveness in prostate carcinomas. The spatial distribution of cancer cell nuclei was characterized by the global spatial fractal dimensions D0, D1, and D2. Two hundred eighteen prostate carcinomas were stratified into the classes of equivalence using results of ROC analysis. A simulation of the cellular automata mix defined a theoretical frame for a specific geometric representation of the cell nuclei distribution called a local structure correlation diagram (LSCD). The LSCD and dispersion Hd were computed for each carcinoma. Data mining generated some quantitative criteria describing tumor aggressiveness. Alterations in tumor architecture along progression were associated with some changes in both shape and the quantitative characteristics of the LSCD consistent with those in the automata mix model. Low-grade prostate carcinomas with low complexity and very low biological aggressiveness are defined by the condition D0 < 1.545 and Hd < 38. High-grade carcinomas with high complexity and very high biological aggressiveness are defined by the condition D0 > 1.764 and Hd < 38. The novel homogeneity measure Hd identifies carcinomas with very low aggressiveness within the class of complexity C1 or carcinomas with very high aggressiveness in the class C7. © 2015 Wiley Periodicals, Inc.
Wang, Xueding; Xu, Yilian; Yang, Lu; Lu, Xiang; Zou, Hao; Yang, Weiqing; Zhang, Yuanyuan; Li, Zicheng; Ma, Menglin
2018-03-01
A series of 1,3,5-triazines were synthesized and their UV absorption properties were tested. Computational chemistry methods were used to construct a quantitative structure-property relationship (QSPR), which was then applied to the computer-aided design of new 1,3,5-triazine ultraviolet absorber compounds. The experimental UV absorption data are in good agreement with those predicted using time-dependent density functional theory (TD-DFT) [B3LYP/6-311 + G(d,p)]. A suitable forecasting model (R > 0.8, P < 0.0001) was obtained. A predictive three-dimensional quantitative structure-property relationship (3D-QSPR) model was established using the multifit molecular alignment rule of the Sybyl program, and its conclusions are consistent with the TD-DFT calculations. The exceptional photostability of such ultraviolet absorber compounds was studied and attributed principally to their ability to undergo excited-state deactivation via ultrafast excited-state intramolecular proton transfer (ESIPT). The intramolecular hydrogen bond (IMHB) of the 1,3,5-triazine compounds is the basis for the excited-state proton transfer, and it was explored by IR spectroscopy, UV spectra, the structural and energetic aspects of different conformers, and frontier molecular orbital analysis.
Quantitation in chiral capillary electrophoresis: theoretical and practical considerations.
D'Hulst, A; Verbeke, N
1994-06-01
Capillary electrophoresis (CE) represents a decisive step forward in stereoselective analysis. The present paper deals with the theoretical aspects of the quantitation of peak separation in chiral CE. Because peak shape in CE is very different from that in high-performance liquid chromatography (HPLC), the resolution factor Rs, commonly used to describe the extent of separation between enantiomers as well as unrelated compounds, is shown to be of limited value for the assessment of chiral separations in CE. Instead, the combined use of a relative chiral separation factor (RCS) and the percent chiral separation (% CS) is advocated. An array of examples is given to illustrate this. The practical aspects of method development using maltodextrins--which have been proposed previously as a major innovation in chiral selectors applicable in CE--are documented with the stereoselective analysis of coumarinic anticoagulant drugs. The possibilities of quantitation using CE were explored under two extreme conditions. Using ibuprofen, it was demonstrated that enantiomeric excess determinations are possible down to a 1% level of optical contamination, and that stereoselective determinations are still possible with good precision near the detection limit when sample load is increased by very long injection times. The theoretical aspects of this possibility are addressed in the discussion.
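The resolution factor Rs whose limitations are discussed above is conventionally computed from migration (retention) times and baseline peak widths as Rs = 2(t2 − t1)/(w1 + w2); the paper's alternative RCS and % CS measures are defined there and not reproduced here. A minimal sketch with hypothetical peak parameters:

```python
def resolution_factor(t1, t2, w1, w2):
    """Classical resolution between two peaks: migration times
    t1 < t2 and baseline peak widths w1, w2, in the same units."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# two enantiomer peaks 0.4 min apart with 0.25 min baseline widths
rs = resolution_factor(12.1, 12.5, 0.25, 0.25)
print(round(rs, 2))  # 1.6: above the usual baseline-resolution criterion of 1.5
```

The paper's point is that this Gaussian-peak-shaped definition transfers poorly to the asymmetric peaks typical of CE, motivating the RCS and % CS measures.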
The evolution of labile traits in sex- and age-structured populations.
Childs, Dylan Z; Sheldon, Ben C; Rees, Mark
2016-03-01
Many quantitative traits are labile (e.g. somatic growth rate, reproductive timing and investment), varying over the life cycle as a result of behavioural adaptation, developmental processes and plastic responses to the environment. At the population level, selection can alter the distribution of such traits across age classes and among generations. Despite a growing body of theoretical research exploring the evolutionary dynamics of labile traits, a data-driven framework for incorporating such traits into demographic models has not yet been developed. Integral projection models (IPMs) are increasingly being used to understand the interplay between changes in labile characters, life histories and population dynamics. One limitation of the IPM approach is that it relies on phenotypic associations between parents and offspring traits to capture inheritance. However, it is well-established that many different processes may drive these associations, and currently, no clear consensus has emerged on how to model micro-evolutionary dynamics in an IPM framework. We show how to embed quantitative genetic models of inheritance of labile traits into age-structured, two-sex models that resemble standard IPMs. Commonly used statistical tools such as GLMs and their mixed model counterparts can then be used for model parameterization. We illustrate the methodology through development of a simple model of egg-laying date evolution, parameterized using data from a population of Great tits (Parus major). We demonstrate how our framework can be used to project the joint dynamics of species' traits and population density. We then develop a simple extension of the age-structured Price equation (ASPE) for two-sex populations, and apply this to examine the age-specific contributions of different processes to change in the mean phenotype and breeding value. 
The data-driven framework we outline here has the potential to facilitate greater insight into the nature of selection and its consequences in settings where focal traits vary over the lifetime through ontogeny, behavioural adaptation and phenotypic plasticity, as well as providing a potential bridge between theoretical and empirical studies of labile trait variation. © 2016 The Authors Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
Stochasticity in the signalling network of a model microbe
NASA Astrophysics Data System (ADS)
Bischofs, Ilka; Foley, Jonathan; Battenberg, Eric; Fontaine-Bodin, Lisa; Price, Gavin; Wolf, Denise; Arkin, Adam
2007-03-01
The soil-dwelling bacterium Bacillus subtilis is an excellent model organism for studying stochastic stress-response induction in an isoclonal population. Subjected to the same stressor, cells undergo different fates, including sporulation, competence, degradative enzyme synthesis and motility. For example, under conditions of nutrient deprivation and high cell density, only a portion of the cell population forms an endospore. Here we use a combined experimental and theoretical approach to study stochastic sporulation induction in Bacillus subtilis. Using several fluorescent reporter strains, we apply time-lapse fluorescence microscopy in combination with quantitative image analysis to study cell fate progression on a single-cell basis and elucidate key noise generators in the underlying cellular network.
Man-machine analysis of translation and work tasks of Skylab films
NASA Technical Reports Server (NTRS)
Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.
1979-01-01
An objective approach to determining the concurrent validity of computer-graphic models is real-time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of the translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, a minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer-assisted techniques for interface design and evaluation have the potential to improve the capability for human factors engineering.
Ranking Theory and Conditional Reasoning.
Skovgaard-Olsen, Niels
2016-05-01
Ranking theory is a formal epistemology, developed over 600 pages in Spohn's recent book The Laws of Belief, that aims to provide a normative account of the dynamics of beliefs as an alternative to current probabilistic approaches. It has been well received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.
Thermal comparison of buried-heterostructure and shallow-ridge lasers
NASA Astrophysics Data System (ADS)
Rustichelli, V.; Lemaître, F.; Ambrosius, H. P. M. M.; Brenot, R.; Williams, K. A.
2018-02-01
We present finite difference thermal modeling to predict temperature distribution, heat flux, and thermal resistance inside lasers with different waveguide geometries. We provide a quantitative experimental and theoretical comparison of the thermal behavior of shallow-ridge (SR) and buried-heterostructure (BH) lasers. We investigate the influence of a split heat source to describe p-layer Joule heating and nonradiative energy loss in the active layer and the heat-sinking from top as well as bottom when quantifying thermal impedance. From both measured values and numerical modeling we can quantify the thermal resistance for BH lasers and SR lasers, showing an improved thermal performance from 50 K/W to 30 K/W for otherwise equivalent BH laser designs.
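Thermal resistance of the kind quantified above can be illustrated as one-dimensional conduction resistances in series, R = t/(kA) per layer between the active region and the heat sink. The layer stack, thicknesses, conductivities, and cross-section below are hypothetical placeholders, not the modelled laser structures:

```python
def layer_resistance(thickness_m, conductivity_w_mk, area_m2):
    """1-D conduction resistance of one layer: R = t / (k * A)."""
    return thickness_m / (conductivity_w_mk * area_m2)

# hypothetical heat path between active region and heat sink
layers = [
    (2e-6, 1.4, 1e-7),     # thin oxide-like layer: 2 um, k = 1.4 W/m/K
    (100e-6, 55.0, 1e-7),  # substrate-like layer: 100 um, k = 55 W/m/K
]
r_th = sum(layer_resistance(t, k, a) for t, k, a in layers)
print(round(r_th, 1))  # ≈ 32.5 K/W for this illustrative stack
```

Even this crude series model shows why geometry matters: a thin, low-conductivity layer close to the active region can dominate the total resistance, which is what the full finite-difference model resolves in detail.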
NASA Technical Reports Server (NTRS)
Russell, John M.
1994-01-01
In the leak testing of a large engineering system, one may distinguish three stages, namely leakage measurement by an overall enclosure, leak location, and leakage measurement by a local enclosure. Sniffer probes attached to helium mass spectrometer leak detectors are normally designed for leak location, a qualitative inspection technique intended to pinpoint where a leak is but not to quantify its rate of discharge. The main conclusion of the present effort is that local leakage measurement by a leak detector with a sniffer probe is feasible provided one has: (1) quantitative data on the performance of the mass separator cell (a device interior to the unit where the stream of fluid in the sample line branches); and (2) a means of stabilizing the mass transfer boundary layer that is created near a local leak site when a sniffer probe is placed in its immediate vicinity. Theoretical models of the mass separator cell are provided and measurements of the machine-specific parameters in the formulas are presented. A theoretical model of a porous probe end for stabilizing the mass transfer boundary is also presented.
Polidori, G; Marreiro, A; Pron, H; Lestriez, P; Boyer, F C; Quinart, H; Tourbah, A; Taïar, R
2016-11-01
This article establishes the basics of a theoretical model for the constitutive law that describes the skin temperature and thermolysis heat losses undergone by a subject during a session of whole-body cryotherapy (WBC). This study focuses on the few minutes during which the human body is subjected to a thermal shock. The relationship between skin temperature and thermolysis heat losses during this period is still unknown and has not yet been studied in the context of the whole human body. The analytical approach here is based on the hypothesis that the skin thermal shock during a WBC session can be thermally modelled by the sum of both radiative and free-convective heat transfer functions. The validation of this scientific approach and the derivation of temporal evolution laws, for both skin temperature and dissipated thermal power during the thermal shock, open many avenues for large-scale studies with the aim of proposing individualized cryotherapy protocols as well as protocols intended for target populations. Furthermore, this study shows quantitatively the substantial imbalance between human metabolism and thermolysis during WBC, the explanation of which remains an open question. Copyright © 2016 Elsevier Ltd. All rights reserved.
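The modelling hypothesis above, i.e. that skin heat loss during the thermal shock is the sum of radiative and free-convective transfer, can be sketched per unit skin area as q = εσ(Ts⁴ − Ta⁴) + h(Ts − Ta). The emissivity, convection coefficient, and temperatures below are illustrative assumptions, not the paper's fitted values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def skin_heat_flux(t_skin_c, t_air_c, emissivity=0.98, h_free=5.0):
    """Radiative plus free-convective heat flux (W/m^2) from skin to
    air; h_free is an assumed free-convection coefficient."""
    ts, ta = t_skin_c + 273.15, t_air_c + 273.15
    radiative = emissivity * SIGMA * (ts**4 - ta**4)
    convective = h_free * (ts - ta)
    return radiative + convective

# skin at 10 °C in a -110 °C cryotherapy chamber (illustrative values)
q = skin_heat_flux(10.0, -110.0)
print(round(q))  # W per square metre of skin
```

At these temperature differences the flux is on the order of a kilowatt per square metre, far above resting metabolic heat production, which is the imbalance the abstract highlights.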
Asymptotic theory of time-varying social networks with heterogeneous activity and tie allocation.
Ubaldi, Enrico; Perra, Nicola; Karsai, Márton; Vezzani, Alessandro; Burioni, Raffaella; Vespignani, Alessandro
2016-10-24
The dynamics of social networks are driven by the interplay between diverse mechanisms that still challenge our theoretical and modelling efforts. Amongst them, two are known to play a central role in shaping the networks' evolution, namely the heterogeneous propensity of individuals to (i) be socially active and (ii) establish new social relationships with their alters. Here, we empirically characterise these two mechanisms in seven real networks describing temporal human interactions in three different settings: scientific collaborations, Twitter mentions, and mobile phone calls. We find that the individuals' social activity and their strategy in choosing the ties to which they allocate their social interactions can be quantitatively described and encoded in a simple stochastic network modelling framework. The Master Equation of the model can be solved in the asymptotic limit. The analytical solutions provide an explicit description of both the system dynamics and the dynamical scaling laws characterising crucial aspects of the evolution of the networks. The analytical predictions accurately match the empirical observations, thus validating the theoretical approach. Our results provide a rigorous dynamical systems framework that can be extended to include other processes shaping social dynamics and to generate data-driven predictions for the asymptotic behaviour of social networks.
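A minimal sketch of the kind of stochastic framework described above: an activity-driven network in which an activating node either contacts a brand-new alter, with a reinforcement probability here assumed of the form p(k) = (1 + k/c)^(-β), or re-contacts one of its existing ties. All parameter values and distributions are illustrative, not fitted to the seven empirical datasets:

```python
import random

def simulate(n=200, steps=2000, c=1.0, beta=1.0, seed=1):
    """Minimal activity-driven network with tie reinforcement.
    Each step one node activates (heterogeneous activities); with
    probability p(k) = (1 + k/c) ** -beta it contacts a brand-new
    node, otherwise it re-contacts one of its k existing ties."""
    rng = random.Random(seed)
    # heavy-tailed activities, as observed empirically
    activity = [rng.paretovariate(2.1) for _ in range(n)]
    ties = {i: set() for i in range(n)}
    for _ in range(steps):
        # pick the activating node proportionally to its activity
        i = rng.choices(range(n), weights=activity)[0]
        k = len(ties[i])
        if not ties[i] or rng.random() < (1.0 + k / c) ** -beta:
            # explore: contact a node not already a tie
            j = rng.choice([x for x in range(n) if x != i and x not in ties[i]])
        else:
            # reinforce: re-contact an existing tie
            j = rng.choice(sorted(ties[i]))
        ties[i].add(j)
        ties[j].add(i)
    return sorted(len(s) for s in ties.values())

deg = simulate()
print(deg[0], deg[-1])  # smallest and largest degrees: a broad distribution emerges
```

The Master Equation analysis in the paper characterises exactly this explore-versus-reinforce competition in the asymptotic limit; the simulation shows the mechanism in its crudest form.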
Multiblob coarse-graining for mixtures of long polymers and soft colloids
NASA Astrophysics Data System (ADS)
Locatelli, Emanuele; Capone, Barbara; Likos, Christos N.
2016-11-01
Soft nanocomposites represent both a theoretical and an experimental challenge due to the high number of microscopic constituents that strongly influence the behaviour of the systems. An effective theoretical description of such systems invokes a reduction of the degrees of freedom to be analysed, hence requiring the introduction of an efficient, quantitative, coarse-grained description. We here report on a novel coarse-graining approach based on a set of transferable potentials that quantitatively reproduces properties of mixtures of linear and star-shaped homopolymeric nanocomposites. By renormalizing groups of monomers into a single effective potential between an f-functional star polymer and a homopolymer of length N0, and through a scaling argument, it will be shown how a substantial reduction of the degrees of freedom allows for a full quantitative description of the system. Our methodology is tested against full-monomer simulations for systems of different molecular weight, proving its full predictive potential.
Identifying, Analyzing, and Communicating Rural: A Quantitative Perspective
ERIC Educational Resources Information Center
Koziol, Natalie A.; Arthur, Ann M.; Hawley, Leslie R.; Bovaird, James A.; Bash, Kirstie L.; McCormick, Carina; Welch, Greg W.
2015-01-01
Defining rural is a critical task for rural education researchers, as it has implications for all phases of a study. However, it is also a difficult task due to the many ways in which rural can be theoretically, conceptually, and empirically operationalized. This article provides researchers with specific guidance on important theoretical and…
Tholen, Danny; Zhu, Xin-Guang
2011-05-01
Photosynthesis is limited by the conductance of carbon dioxide (CO2) from intercellular spaces to the sites of carboxylation. Although the concept of internal conductance (gi) has been known for over 50 years, shortcomings in the theoretical description of this process may have resulted in a limited understanding of the underlying mechanisms. To tackle this issue, we developed a three-dimensional reaction-diffusion model of photosynthesis in a typical C3 mesophyll cell that includes all major components of the CO2 diffusion pathway and associated reactions. Using this novel systems model, we systematically and quantitatively examined the mechanisms underlying gi. Our results identify the resistances of the cell wall and chloroplast envelope as the most significant limitations to photosynthesis. In addition, the concentration of carbonic anhydrase in the stroma may also be limiting for the photosynthetic rate. Our analysis demonstrated that higher levels of photorespiration increase the apparent resistance to CO2 diffusion, an effect that has thus far been ignored when determining gi. Finally, we show that outward bicarbonate leakage through the chloroplast envelope could contribute to the observed decrease in gi under elevated CO2. Our analysis suggests that physiological and anatomical features associated with gi have been evolutionarily fine-tuned to benefit CO2 diffusion and photosynthesis. The model presented here provides a novel theoretical framework to further analyze the mechanisms underlying diffusion processes in the mesophyll.
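The series-resistance picture behind internal conductance can be sketched as gi = 1/Σri over the wall, cytosol, envelope, and stroma components of the diffusion path. The component resistances below are hypothetical placeholders, not values from the reaction-diffusion model:

```python
def internal_conductance(resistances):
    """Internal (mesophyll) conductance as diffusion resistances in
    series: g_i = 1 / (r_wall + r_cytosol + r_envelope + r_stroma)."""
    return 1.0 / sum(resistances)

# hypothetical component resistances, m^2 s mol^-1 (illustrative only)
r = {"wall": 2.0, "cytosol": 0.5, "envelope": 2.5, "stroma": 1.0}
gi = internal_conductance(r.values())
print(round(gi, 3))  # 0.167 mol m^-2 s^-1; wall and envelope dominate
```

Because the largest resistances in this toy stack are the wall and envelope terms, shrinking the smaller terms barely changes gi, mirroring the paper's conclusion that those two components are the most significant limitations.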
Duality, Gauge Symmetries, Renormalization Groups and the BKT Transition
NASA Astrophysics Data System (ADS)
José, Jorge V.
2017-03-01
In this chapter, I will briefly review, from my own perspective, the situation within theoretical physics at the beginning of the 1970s, and the advances that played an important role in providing a solid theoretical and experimental foundation for the Berezinskii-Kosterlitz-Thouless theory (BKT). Over this period, it became clear that the Abelian gauge symmetry of the 2D-XY model had to be preserved to get the right phase structure of the model. In previous analyses, this symmetry was broken when using low order calculational approximations. Duality transformations at that time for two-dimensional models with compact gauge symmetries were introduced by José, Kadanoff, Nelson and Kirkpatrick (JKKN). Their goal was to analyze the phase structure and excitations of XY and related models, including symmetry breaking fields which are experimentally important. In a separate context, Migdal had earlier developed an approximate Renormalization Group (RG) algorithm to implement Wilson’s RG for lattice gauge theories. Although Migdal’s RG approach, later extended by Kadanoff, did not produce a true phase transition for the XY model, it almost did asymptotically in terms of a non-perturbative expansion in the coupling constant with an essential singularity. Using these advances, including work done on instantons (vortices), JKKN analyzed the behavior of the spin-spin correlation functions of the 2D XY-model in terms of an expansion in temperature and vortex-pair fugacity. Their analysis led to a perturbative derivation of RG equations for the XY model which are the same as those first derived by Kosterlitz for the two-dimensional Coulomb gas. JKKN’s results gave a theoretical foundation and justification for BKT’s sound physical assumptions and for the validity of their calculational approximations that were, in principle, strictly valid only at very low temperatures, away from the critical BKT temperature T_BKT.
The theoretical predictions were soon tested successfully against experimental results on superfluid helium films. The success of the BKT theory also gave one of the first quantitative proofs of the validity of the RG theory.
A random walk in physical biology
NASA Astrophysics Data System (ADS)
Peterson, Eric Lee
Biology as a scientific discipline is becoming ever more quantitative as tools become available to probe living systems on every scale, from the macro to the micro and now even to the nanoscale. In quantitative biology the challenge is to understand the living world in an in vivo context, where it is often difficult for simple theoretical models to connect with the full richness and complexity of the observed data. Computational models and simulations offer a way to bridge the gap between simple theoretical models and real biological systems; toward that aspiration, this thesis presents three case studies in applying computational models that may give insight into native biological structures. The first is concerned with soluble proteins. Proteins, like DNA, are linear polymers written in a twenty-letter "language" of amino acids. Despite the astronomical number of possible protein sequences, a great amount of similarity is observed among the folded structures of globular proteins. One useful way of discovering similar sequences is to align them, as done e.g. by the popular BLAST program. By clustering together amino acids and reducing the alphabet that proteins are written in to fewer than twenty letters, we find that pairwise sequence alignments are actually more sensitive to proteins with similar structures. The second case study is concerned with the measurement of forces applied to a membrane. We demonstrate a general method for extracting the forces applied to a fluid lipid bilayer of arbitrary shape and show that the subpiconewton forces applied by optical tweezers to vesicles can be accurately measured in this way. In the third and final case study we examine the forces between proteins in a lipid bilayer membrane. Due to the bending of the membrane surrounding them, such proteins feel mutually attractive forces which can help them to self-organize and act in concert.
These findings are relevant at the areal densities estimated for membrane proteins such as the MscL mechanosensitive channel. The findings of the analytical studies were confirmed by Markov chain Monte Carlo simulations using the fully two-dimensional potentials between two model proteins in a membrane. Living systems present us with beautiful and intricate structures, from the helices and sheets of a folded protein to the dynamic morphology of cellular organelles and the self-organization of proteins in a biomembrane, and a synergy of theoretical and in silico approaches should enable us to build and refine models of in vivo biological data.
Non-Invasive Investigation of Bone Adaptation in Humans to Mechanical Loading
NASA Technical Reports Server (NTRS)
Whalen, R.
1999-01-01
Experimental studies have identified peak cyclic forces, number of loading cycles, and loading rate as contributors to the regulation of bone metabolism. We have proposed a theoretical model that relates bone density to a mechanical stimulus derived from average daily cumulative peak cyclic 'effective' tissue stresses. In order to develop a non-invasive experimental model to test the theoretical model we need to: (1) monitor daily cumulative loading on a bone, (2) compute the internal stress state(s) resulting from the imposed loading, and (3) image volumetric bone density accurately, precisely, and reproducibly within small contiguous volumes throughout the bone. We have chosen the calcaneus (heel) as an experimental model bone site because it is loaded by ligament, tendon and joint contact forces in equilibrium with daily ground reaction forces that we can measure; it is a peripheral bone site and therefore more easily and accurately imaged with computed tomography; it is composed primarily of cancellous bone; and it is a relevant site for monitoring bone loss and adaptation in astronauts and the general population. This paper presents an overview of our recent advances in the areas of monitoring daily ground reaction forces, biomechanical modeling of the forces on the calcaneus during gait, mathematical modeling of calcaneal bone adaptation in response to cumulative daily activity, accurate and precise imaging of the calcaneus with quantitative computed tomography (QCT), and application to long duration space flight.
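A cumulative daily loading stimulus of the kind described above is often written as ξ = (Σi ni σi^m)^(1/m), weighting the effective tissue stress magnitude σi of each activity against its cycle count ni. The exponent and the load values below are illustrative assumptions, not the study's measured data:

```python
def daily_stress_stimulus(loads, m=4.0):
    """Cumulative daily stimulus xi = (sum_i n_i * sigma_i**m) ** (1/m),
    where n_i is the number of cycles at effective stress sigma_i (MPa)
    and the exponent m weights stress magnitude against cycle count."""
    return sum(n * sigma ** m for n, sigma in loads) ** (1.0 / m)

# illustrative day: many walking cycles plus a few high-load cycles
walking = (5000, 2.0)  # 5000 cycles near 2 MPa effective stress
jumping = (50, 6.0)    # 50 cycles near 6 MPa
print(round(daily_stress_stimulus([walking, jumping]), 2))
# a few high-stress cycles contribute comparably to thousands of walking cycles
```

With m well above 1, stress magnitude dominates cycle count, which is why monitoring peak ground reaction forces, and not just step counts, matters for the calcaneal model described above.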
From information theory to quantitative description of steric effects.
Alipour, Mojtaba; Safari, Zahra
2016-07-21
Immense efforts have been made in the literature to apply information theory descriptors to the electronic structure theory of various systems. In the present study, information theoretic quantities, such as the Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking the experimental steric scales for different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and the theoretical values of steric energies calculated from information theory functionals. Comparing the results obtained from the information theoretic quantities with the two representations of electron density and shape function, the Shannon entropy has the best performance for this purpose. The usefulness of considering the contributions of functional-group steric energies and geometries, as well as of dissecting the effects of both global and local information measures simultaneously, has also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and with the experimental scales. Overall, these findings show that information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the quality of information theory in helping theoreticians and experimentalists interpret different problems in real systems.
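Of the quantities listed above, the Shannon entropy S = −∫σ(x) ln σ(x) dx of a normalized density (shape function) is straightforward to evaluate on a grid. The one-dimensional Gaussian below is a generic numerical check against the analytic result, not one of the paper's benchmark systems:

```python
import numpy as np

def shannon_entropy(density, dx):
    """Shannon information entropy S = -∫ sigma(x) ln sigma(x) dx of a
    1-D shape function (density normalised to unity) on a uniform grid."""
    sigma = density / (density.sum() * dx)  # normalise so ∫ sigma dx = 1
    nz = sigma > 0
    return -np.sum(sigma[nz] * np.log(sigma[nz])) * dx

# Gaussian test density: analytic S = 0.5 * ln(2*pi*e*s**2)
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
s = 1.5
rho = np.exp(-x**2 / (2 * s**2))
print(round(shannon_entropy(rho, dx), 4))  # close to the analytic value ≈ 1.8244
```

A more diffuse density gives a larger S, which is the qualitative link to sterics: bulkier, more spread-out electron distributions register as higher entropy.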
On the buckling of an elastic holey column
Hazel, A. L.; Pihler-Puzović, D.
2017-01-01
We report the results of a numerical and theoretical study of buckling in elastic columns containing a line of holes. Buckling is a common failure mode of elastic columns under compression, found over scales ranging from metres in buildings and aircraft to tens of nanometers in DNA. This failure usually occurs through lateral buckling, described for slender columns by Euler’s theory. When the column is perforated with a regular line of holes, a new buckling mode arises, in which adjacent holes collapse in orthogonal directions. In this paper, we firstly elucidate how this alternate hole buckling mode coexists and interacts with classical Euler buckling modes, using finite-element numerical calculations with bifurcation tracking. We show how the preferred buckling mode is selected by the geometry, and discuss the roles of localized (hole-scale) and global (column-scale) buckling. Secondly, we develop a novel predictive model for the buckling of columns perforated with large holes. This model is derived without arbitrary fitting parameters, and quantitatively predicts the critical strain for buckling. We extend the model to sheets perforated with a regular array of circular holes and use it to provide quantitative predictions of their buckling. PMID:29225498
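For reference, the classical Euler mode mentioned above has a simple closed-form critical strain for a slender column; this sketch implements that textbook result, not the paper's hole-collapse model:

```python
import math

def euler_critical_strain(length, radius_of_gyration, k=1.0):
    """Critical compressive strain for classical Euler buckling of a slender
    column: eps_cr = (pi * r_g / (k * L))**2, with k the effective-length
    factor (k = 1 for pinned-pinned ends). This is the lateral-buckling mode
    the holey-column results are compared against."""
    return (math.pi * radius_of_gyration / (k * length)) ** 2

# Slenderness L/r_g = 100 gives eps_cr = (pi/100)**2 ~ 9.87e-4
print(euler_critical_strain(100.0, 1.0))
```

The competition the paper describes can be read off from such a formula: the Euler strain falls with the square of the slenderness, so for short, wide perforated columns the hole-collapse mode can reach its own critical strain first.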
Speciation of adsorbates on surface of solids by infrared spectroscopy and chemometrics.
Vilmin, Franck; Bazin, Philippe; Thibault-Starzyk, Frédéric; Travert, Arnaud
2015-09-03
Speciation, i.e. identification and quantification, of surface species on heterogeneous surfaces by infrared spectroscopy is important in many fields but remains a challenging task when facing strongly overlapped spectra of multiple adspecies. Here, we propose a new methodology, combining state of the art instrumental developments for quantitative infrared spectroscopy of adspecies and chemometrics tools, mainly a novel data processing algorithm, called SORB-MCR (SOft modeling by Recursive Based-Multivariate Curve Resolution) and multivariate calibration. After formal transposition of the general linear mixture model to adsorption spectral data, the main issues, i.e. validity of Beer-Lambert law and rank deficiency problems, are theoretically discussed. Then, the methodology is exposed through application to two case studies, each of them characterized by a specific type of rank deficiency: (i) speciation of physisorbed water species over a hydrated silica surface, and (ii) speciation (chemisorption and physisorption) of a silane probe molecule over a dehydrated silica surface. In both cases, we demonstrate the relevance of this approach which leads to a thorough surface speciation based on comprehensive and fully interpretable multivariate quantitative models. Limitations and drawbacks of the methodology are also underlined. Copyright © 2015 Elsevier B.V. All rights reserved.
Quantitative test for concave aspheric surfaces using a Babinet compensator.
Saxena, A K
1979-08-15
A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10⁻³ rad over a segment length of 1.0 cm.
NASA Astrophysics Data System (ADS)
Grimm, Guido W.; Potts, Alastair J.
2016-03-01
The Coexistence Approach has been used to infer palaeoclimates for many Eurasian fossil plant assemblages. However, the theory that underpins the method has never been examined in detail. Here we discuss acknowledged and implicit assumptions and assess the statistical nature and pseudo-logic of the method. We also compare the Coexistence Approach theory with the active field of species distribution modelling. We argue that the assumptions will inevitably be violated to some degree and that the method lacks any substantive means to identify or quantify these violations. The absence of a statistical framework makes the method highly vulnerable to the vagaries of statistical outliers and exotic elements. In addition, we find numerous logical inconsistencies, such as how climate shifts are quantified (the use of a "centre value" of a coexistence interval) and the ability to reconstruct "extinct" climates from modern plant distributions. Given the problems that have surfaced in species distribution modelling, accurate and precise quantitative reconstructions of palaeoclimates (or even climate shifts) using the nearest-living-relative principle and rectilinear niches (the basis of the method) will not be possible. The Coexistence Approach can be summarised as an exercise that shoehorns a plant fossil assemblage into coexistence and then assumes that this must be the climate. Given the theoretical issues and methodological issues highlighted elsewhere, we suggest that the method be discontinued and that all past reconstructions be disregarded and revisited using less fallacious methods. We outline six steps for (further) validation of available and future taxon-based methods and advocate developing (semi-quantitative) methods that prioritise robustness over precision.
Orbiting pairs of walking droplets: Dynamics and stability
NASA Astrophysics Data System (ADS)
Oza, Anand U.; Siéfert, Emmanuel; Harris, Daniel M.; Moláček, Jan; Bush, John W. M.
2017-05-01
A decade ago, Couder and Fort [Phys. Rev. Lett. 97, 154101 (2006)], 10.1103/PhysRevLett.97.154101 discovered that a millimetric droplet sustained on the surface of a vibrating fluid bath may self-propel through a resonant interaction with its own wave field. We here present the results of a combined experimental and theoretical investigation of the interactions of such walking droplets. Specifically, we delimit experimentally the different regimes for an orbiting pair of identical walkers and extend the theoretical model of Oza et al. [J. Fluid Mech. 737, 552 (2013)], 10.1017/jfm.2013.581 in order to rationalize our observations. A quantitative comparison between experiment and theory highlights the importance of spatial damping of the wave field. Our results also indicate that walkers adapt their impact phase according to the local wave height, an effect that stabilizes orbiting bound states.
Critical diversity: Divided or united states of social coordination
Kelso, J. A. Scott; Tognoli, Emmanuelle
2018-01-01
Much of our knowledge of coordination comes from studies of simple, dyadic systems or systems containing large numbers of components. The huge gap ‘in between’ is seldom addressed, empirically or theoretically. We introduce a new paradigm to study the coordination dynamics of such intermediate-sized ensembles with the goal of identifying key mechanisms of interaction. Rhythmic coordination was studied in ensembles of eight people, with differences in movement frequency (‘diversity’) manipulated within the ensemble. Quantitative change in diversity led to qualitative changes in coordination, a critical value separating régimes of integration and segregation between groups. Metastable and multifrequency coordination between participants enabled communication across segregated groups within the ensemble, without destroying overall order. These novel findings reveal key factors underlying coordination in ensemble sizes previously considered too complicated or 'messy' for systematic study and supply future theoretical/computational models with new empirical checkpoints. PMID:29617371
Effect of deposition rate and NNN interactions on adatoms mobility in epitaxial growth
NASA Astrophysics Data System (ADS)
Hamouda, Ajmi B. H.; Mahjoub, B.; Blel, S.
2017-07-01
This paper provides a detailed analysis of the surface diffusion problem during epitaxial step-flow growth using a simple theoretical model for the diffusion equation of the adatom concentration. Within this framework, an analytical expression for the adatom mobility as a function of the deposition rate and the Next-Nearest-Neighbor (NNN) interactions is derived and compared with the effective mobility computed from kinetic Monte Carlo (kMC) simulations. For the small step velocities, i.e., the relatively weak deposition rates commonly used for copper growth, excellent quantitative agreement with the theoretical prediction is found. The effective adatom mobility is shown to decrease exponentially with the NNN interaction strength and to increase roughly linearly with the deposition rate F. The effective step stiffness and the adatom mobility are also shown to be closely related to the concentration of kinks.
NASA's upper atmosphere research satellite: A program to study global ozone change
NASA Technical Reports Server (NTRS)
Luther, Michael R.
1992-01-01
The Upper Atmosphere Research Satellite (UARS) is a major initiative in the NASA Office of Space Science and Applications, and is the prototype for NASA's Earth Observing System (EOS) planned for launch in the 1990s. The UARS combines a balanced program of experimental and theoretical investigations to perform diagnostic studies, qualitative model analysis, and quantitative measurements and comparative studies of the upper atmosphere. UARS provides theoretical and experimental investigations which pursue four specific research topics: atmospheric energy budget, chemistry, dynamics, and coupling processes. An international cadre of investigators was assembled by NASA to accomplish those scientific objectives. The observatory, its complement of ten state of the art instruments, and the ground system are nearing flight readiness. The timely UARS program will play a major role in providing data to understand the complex physical and chemical processes occurring in the upper atmosphere and answering many questions regarding the health of the ozone layer.
Contribution to study of interfaces instabilities in plane, cylindrical and spherical geometry
NASA Astrophysics Data System (ADS)
Toque, Nathalie
1996-12-01
This thesis presents several experiments on hydrodynamic instabilities, which are studied numerically and theoretically. The experiments are in plane and cylindrical geometry. Their X-ray radiographs show the evolution of an interface between two media crossed by a detonation wave. The materials are initially solid; under the shock wave they become liquid or remain in a mixed solid-liquid state. The numerical study aims to simulate, with the codes EAD and Ouranos, the interface instabilities that appear in the experiments. The experimental radiographs and the numerical images are in good agreement. The theoretical study models a spatio-temporal portion of the experiments to obtain the quantitative growth of perturbations at the interfaces and in the flows. The models are linear, in plane, cylindrical, and spherical geometry. They lay the groundwork for a forthcoming study of the transition between linear and nonlinear growth of instabilities in multifluid flows crossed by shock waves.
Tunable quasiparticle trapping in Meissner and vortex states of mesoscopic superconductors.
Taupin, M; Khaymovich, I M; Meschke, M; Mel'nikov, A S; Pekola, J P
2016-03-16
Nowadays, superconductors serve in numerous applications, from high-field magnets to ultrasensitive detectors of radiation. Mesoscopic superconducting devices, referring to those with nanoscale dimensions, are in a special position as they are easily driven out of equilibrium under typical operating conditions. The out-of-equilibrium superconductors are characterized by non-equilibrium quasiparticles. These extra excitations can compromise the performance of mesoscopic devices by introducing, for example, leakage currents or decreased coherence time in quantum devices. By applying an external magnetic field, one can conveniently suppress or redistribute the population of excess quasiparticles. In this article, we present an experimental demonstration and a theoretical analysis of such effective control of quasiparticles, resulting in electron cooling both in the Meissner and vortex states of a mesoscopic superconductor. We introduce a theoretical model of quasiparticle dynamics, which is in quantitative agreement with the experimental data.
Fermi-LAT upper limits on gamma-ray emission from colliding wind binaries
Werner, Michael; Reimer, O.; Reimer, A.; ...
2013-07-09
Here, colliding wind binaries (CWBs) are thought to give rise to a plethora of physical processes, including the acceleration and interaction of relativistic particles. Observation of synchrotron radiation in the radio band confirms that there is a relativistic electron population in CWBs. Accordingly, CWBs have been suspected sources of high-energy γ-ray emission since the COS-B era. Theoretical models exist that characterize the underlying physical processes leading to particle acceleration and quantitatively predict the non-thermal emission observable at Earth. We strive to find evidence of γ-ray emission from a sample of seven CWB systems: WR 11, WR 70, WR 125, WR 137, WR 140, WR 146, and WR 147. Theoretical modelling identified these systems as the most favourable candidates for emitting γ-rays. We make a comparison with existing γ-ray flux predictions and investigate possible constraints. We used 24 months of data from the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope to perform a dedicated likelihood analysis of CWBs in the LAT energy range. As a result, we find no evidence of γ-ray emission from any of the studied CWB systems and determine corresponding flux upper limits. For some CWBs the interplay of orbital and stellar parameters renders the Fermi-LAT data not sensitive enough to constrain the parameter space of the emission models. In the cases of WR 140 and WR 147, the Fermi-LAT upper limits appear to rule out some model predictions entirely and constrain theoretical models over a significant parameter space. A comparison of our findings to the CWB η Car is made.
Controlling 212Bi to 212Pb activity concentration ratio in thoron chambers.
He, Zhengzhong; Xiao, Detao; Lv, Lidan; Zhou, Qingzhi; Shan, Jian; Qiu, Shoukang; Wu, Xijun
2017-11-01
It is necessary to establish a reference atmosphere in a thoron chamber containing various ratios of 212Bi to 212Pb activity concentrations (C(212Bi)/C(212Pb)) to simulate typical environmental conditions (e.g., indoor or underground atmospheres). In this study, a novel method was developed for establishing and controlling C(212Bi)/C(212Pb) in a thoron chamber system based on an aging chamber and air recirculation loops that alter the ventilation rate. The effects of the main factors on C(212Bi)/C(212Pb) were explored, and a steady-state theoretical model was derived to calculate the ratio. The results show that C(212Bi)/C(212Pb) inside the chamber depends mainly on the ventilation rate. Ratios ranging from 0.33 to 0.83 are available under various ventilation rates. The stability coefficient of the ratios is better than 7%. The experimental results are close to the theoretical calculations, which indicates that the model can serve as a guideline for the quantitative control of C(212Bi)/C(212Pb). Copyright © 2017 Elsevier Ltd. All rights reserved.
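The steady-state dependence on ventilation rate can be sketched with a simple well-mixed balance for the 212Pb to 212Bi decay chain. The equal-removal assumption below is a simplification for illustration, not the paper's full model:

```python
import math

# Half-lives from standard nuclear data: 212Pb = 10.64 h, 212Bi = 60.55 min
LAMBDA_PB = math.log(2) / 10.64            # h^-1
LAMBDA_BI = math.log(2) / (60.55 / 60.0)   # h^-1

def activity_ratio(vent_rate):
    """Steady-state C(212Bi)/C(212Pb) in a well-mixed chamber with ventilation
    rate vent_rate (air changes per hour), assuming ventilation removes both
    nuclides equally. Atom balance for 212Bi at steady state,
    lambda_Pb*N_Pb = (lambda_Bi + v)*N_Bi, gives the activity ratio
    lambda_Bi/(lambda_Bi + v)."""
    return LAMBDA_BI / (LAMBDA_BI + vent_rate)

for v in (0.14, 0.5, 1.4):
    print(v, round(activity_ratio(v), 2))
# the ratio falls from ~0.83 toward ~0.33 as the ventilation rate increases
```

Under this simplified balance, ventilation rates of roughly 0.14 to 1.4 air changes per hour span the 0.33 to 0.83 range reported in the abstract, consistent with ventilation being the dominant control variable.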
Layer contributions to the nonlinear acoustic radiation from stratified media.
Vander Meulen, François; Haumesser, Lionel
2016-12-01
This study presents the thorough investigation of the second harmonic generation scenario in a three fluid layer system. An emphasis is on the evaluation of the nonlinear parameter B/A in each layer from remote measurements. A theoretical approach of the propagation of a finite amplitude acoustic wave in a multilayered medium is developed. In the frame of the KZK equation, the weak nonlinearity of the media, attenuation and diffraction effects are computed for the fundamental and second harmonic waves propagating back and forth in each of the layers of the system. The model uses a gaussian expansion to describe the beam propagation in order to quantitatively evaluate the contribution of each part of the system (layers and interfaces) to its nonlinearity. The model is validated through measurements on a water/aluminum/water system. Transmission as well as reflection configurations are studied. Good agreement is found between the theoretical results and the experimental data. The analysis of the second harmonic field sources measured by the transducers from outside the stratified medium highlights the factors that favor the cumulative effects. Copyright © 2016 Elsevier B.V. All rights reserved.
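In the lossless plane-wave (pre-shock) limit, the second-harmonic amplitude grows linearly with distance and with the nonlinearity coefficient beta = 1 + B/2A. This textbook estimate, not the paper's full KZK computation with diffraction and attenuation, is sketched below with water-like parameters:

```python
import math

def second_harmonic_pressure(p1, f, x, rho, c, b_over_a):
    """Lossless plane-wave estimate of the second-harmonic pressure amplitude
    after propagation distance x (valid well before shock formation):
    p2 = beta * omega * x * p1**2 / (2 * rho * c**3), beta = 1 + B/(2A)."""
    beta = 1.0 + b_over_a / 2.0
    omega = 2.0 * math.pi * f
    return beta * omega * x * p1 ** 2 / (2.0 * rho * c ** 3)

# Water-like layer: B/A ~ 5, a 100 kPa source at 5 MHz, over 5 cm
print(second_harmonic_pressure(1e5, 5e6, 0.05, 1000.0, 1500.0, 5.0))  # ~8.1e3 Pa
```

The quadratic dependence on the fundamental amplitude and the linear growth with distance are what make the layer-by-layer bookkeeping of cumulative contributions in the paper possible.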
NASA Astrophysics Data System (ADS)
Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.
2016-12-01
The melting temperatures of materials in the interior of the Earth have significant implications for many areas of geophysics. Direct calculation of the melting point by atomistic simulation faces a substantial hysteresis problem. To overcome this hysteresis, several independently founded melting-point determination methods are available nowadays, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations among these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining this with experimental data and/or a previous melting-point determination method, we apply the model to derive the high-pressure melting curves for several lower mantle minerals with less computational effort than using previous methods alone. In this way, some polyatomic minerals at extreme pressures that were previously nearly intractable can now be calculated fully from first principles.
Kinetics and equilibrium of solute diffusion into human hair.
Wang, Liming; Chen, Longjian; Han, Lujia; Lian, Guoping
2012-12-01
The uptake kinetics of five molecules by hair have been measured, and the effects of pH and of the physical chemical properties of the molecules were investigated. A theoretical model is proposed to analyze the experimental data. The results indicate that the binding affinity of a solute to hair, as characterized by the hair-water partition coefficient, scales with the hydrophobicity of the solute and decreases dramatically as the pH increases toward the dissociation constant. The effective diffusion coefficient of a solute depends not only on molecular size, as most previous studies suggested, but also on binding affinity and solute dissociation. It appears that the uptake of molecules by hair is due to both hydrophobic and ionic charge interactions. Based on theoretical considerations of the cellular structure, composition, and physical chemical properties of hair, quantitative structure-property relationships (QSPR) have been proposed to predict the hair-water partition coefficient (PC) and the effective diffusion coefficient (De) of a solute. The proposed QSPR models fit the experimental data well. This paper can serve as a reference for investigating the adsorption properties of polymeric materials, fibres, and biomaterials.
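A toy version of such a QSPR can combine a hydrophobicity term with the Henderson-Hasselbalch neutral fraction to reproduce the qualitative pH dependence described above. The functional form and coefficients below are hypothetical, not the fitted values of the paper:

```python
def neutral_fraction(pH, pKa, acid=True):
    """Fraction of the solute in the un-ionized form (Henderson-Hasselbalch).
    For an acid it falls from ~1 to ~0 as pH rises past pKa, matching the
    reported drop in partitioning near the dissociation constant."""
    if acid:
        return 1.0 / (1.0 + 10.0 ** (pH - pKa))
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def hair_water_pc(log_kow, pH, pKa, a=0.5, b=0.8):
    """Hypothetical QSPR: PC ~ a * Kow**b scaled by the neutral fraction.
    a and b are illustrative placeholders, not the paper's regression values."""
    return a * (10.0 ** log_kow) ** b * neutral_fraction(pH, pKa)

print(neutral_fraction(7.0, 7.0))          # exactly 0.5 at pH = pKa
print(hair_water_pc(3.0, 5.0, 7.0))
print(hair_water_pc(3.0, 9.0, 7.0))        # much smaller above the pKa
```

Even this crude form captures the two reported trends: partitioning grows with hydrophobicity (log Kow) and collapses once the solute ionizes.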
Surface potential extraction from electrostatic and Kelvin-probe force microscopy images
NASA Astrophysics Data System (ADS)
Xu, Jie; Chen, Deyuan; Li, Wei; Xu, Jun
2018-05-01
A comprehensive comparison study of electrostatic force microscopy (EFM) and Kelvin probe force microscopy (KPFM) is conducted in this manuscript. First, it is theoretically demonstrated that, for metallic or semiconductor samples, both the EFM and KPFM signals are a convolution of the sample surface potential with their respective transfer functions. Then, an equivalent point-mass model describing cantilever deflection under distributed loads is developed to reevaluate the cantilever's influence on the detection signals; it is shown that the cantilever has no influence on the EFM signal, while it affects the KPFM signal intensity but does not change the resolution. Finally, EFM and KPFM experiments are carried out, and the surface potential is extracted from the EFM and KPFM images by deconvolution. The extracted potential intensities are consistent with each other, and the detection resolution complies with the theoretical analysis. Our work is helpful for quantitative analysis of EFM and KPFM signals, and the developed point-mass model can also be used for other cantilever beam deflection problems.
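The deconvolution step can be sketched as a regularized (Wiener-style) inverse filter on a one-dimensional profile. The smoothing kernel below is a hypothetical stand-in for the actual EFM/KPFM transfer functions derived in the paper:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for small profiles)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)).real / n for k in range(n)]

def wiener_deconvolve(image, kernel, eps=1e-6):
    """Recover a surface-potential profile from a measurement modeled as a
    circular convolution with the instrument transfer function (the paper's
    premise for both EFM and KPFM). eps regularizes frequencies where the
    kernel response is small."""
    I, K = dft(image), dft(kernel)
    return idft([i * k.conjugate() / (abs(k) ** 2 + eps)
                 for i, k in zip(I, K)])

# Synthetic check: a rectangular potential step blurred by a smoothing kernel
n = 32
potential = [1.0 if 10 <= k < 20 else 0.0 for k in range(n)]
kernel = [0.0] * n
kernel[0], kernel[1], kernel[-1] = 0.6, 0.2, 0.2   # normalized, invertible
blurred = idft([p * k for p, k in zip(dft(potential), dft(kernel))])
recovered = wiener_deconvolve(blurred, kernel)
print(max(abs(r - p) for r, p in zip(recovered, potential)))  # well below 1e-3
```

With a real transfer function the kernel spectrum decays at high spatial frequencies, and the regularization constant eps then sets the trade-off between restored resolution and amplified noise.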
Social cycling and conditional responses in the Rock-Paper-Scissors game
Wang, Zhijian; Xu, Bin; Zhou, Hai-Jun
2014-01-01
How humans make decisions in non-cooperative strategic interactions is a big question. For the fundamental Rock-Paper-Scissors (RPS) model game system, classic Nash equilibrium (NE) theory predicts that players completely randomize their action choices to avoid being exploited, while evolutionary game theory of bounded rationality generally predicts persistent cyclic motions, especially in finite populations. However, as empirical studies have been relatively sparse, it is still controversial which theoretical framework is more appropriate to describe the decision-making of human subjects. Here we observe population-level persistent cyclic motions in a laboratory experiment of the discrete-time iterated RPS game under the traditional random pairwise-matching protocol. This collective behavior contradicts the NE theory but is quantitatively explained, without any adjustable parameter, by a microscopic model of win-lose-tie conditional response. Theoretical calculations suggest that if all players adopt the same optimized conditional response strategy, their accumulated payoff will be much higher than the reference value of the NE mixed strategy. Our work demonstrates the feasibility of understanding human competitive behavior from the angle of non-equilibrium statistical physics. PMID:25060115
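An illustrative simulation in the spirit of the win-lose-tie conditional response model under random pairwise matching. The specific response rule below (winners repeat, losers counter, ties randomize) is an assumption for illustration, not the paper's fitted strategy:

```python
import random

# Actions: Rock = 0, Paper = 1, Scissors = 2; action a beats (a - 1) % 3
def play(a, b):
    """Return +1 if a beats b, -1 if a loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def simulate(n_players=6, rounds=500, seed=1):
    """Iterated RPS with random pairwise matching and a win-lose-tie
    conditional response. Returns the per-round social state, i.e. the
    counts of (rock, paper, scissors) across the population."""
    rng = random.Random(seed)
    actions = [rng.randrange(3) for _ in range(n_players)]
    history = []
    for _ in range(rounds):
        order = list(range(n_players))
        rng.shuffle(order)                      # random pairwise matching
        nxt = actions[:]
        for i in range(0, n_players, 2):
            p, q = order[i], order[i + 1]
            out = play(actions[p], actions[q])
            if out == 0:                        # tie: both randomize
                nxt[p], nxt[q] = rng.randrange(3), rng.randrange(3)
            elif out == 1:                      # q lost: counter the winner
                nxt[q] = (actions[p] + 1) % 3
            else:                               # p lost: counter the winner
                nxt[p] = (actions[q] + 1) % 3
        actions = nxt
        history.append(tuple(actions.count(a) for a in range(3)))
    return history

print(simulate(rounds=5, seed=1))
```

Tracking the social state (the count triple) over many rounds is how the population-level cycling reported in the paper is made visible; the NE prediction, by contrast, is a stationary cloud around equal counts.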
Shutin, Dmitriy; Zlobinskaya, Olga
2010-02-01
The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
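For the Gaussian model, the squared Hellinger distance has a standard closed form, H^2 = 1 - sqrt(2*s1*s2/(s1^2+s2^2)) * exp(-(mu1-mu2)^2/(4*(s1^2+s2^2))). A sketch with a numerical cross-check (this is the textbook expression; the paper's Gamma and Beta closed forms are analogous but not shown here):

```python
import math

def hellinger2_gaussian(mu1, s1, mu2, s2):
    """Closed-form squared Hellinger distance between N(mu1, s1^2)
    and N(mu2, s2^2)."""
    v = s1 ** 2 + s2 ** 2
    bc = math.sqrt(2.0 * s1 * s2 / v) * math.exp(-((mu1 - mu2) ** 2) / (4.0 * v))
    return 1.0 - bc  # bc is the Bhattacharyya coefficient

def hellinger2_numeric(mu1, s1, mu2, s2, lo=-15.0, hi=15.0, n=60001):
    """Cross-check by direct quadrature of H^2 = 1 - integral sqrt(p*q) dx."""
    dx = (hi - lo) / (n - 1)
    bc = 0.0
    for i in range(n):
        x = lo + i * dx
        p = math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / (s1 * math.sqrt(2 * math.pi))
        q = math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / (s2 * math.sqrt(2 * math.pi))
        bc += math.sqrt(p * q) * dx
    return 1.0 - bc

print(hellinger2_gaussian(0.0, 1.0, 1.0, 2.0))  # ~0.1492
print(hellinger2_numeric(0.0, 1.0, 1.0, 2.0))   # agrees closely
```

Having the closed form means each distance evaluation costs a few arithmetic operations per image pair instead of a numerical integral, which matters for high-content screening throughput.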
Coubard, Olivier A
2016-01-01
Since the seminal report by Shapiro that bilateral stimulation induces cognitive and emotional changes, 26 years of basic and clinical research have examined the effects of Eye Movement Desensitization and Reprocessing (EMDR) in anxiety disorders, particularly in post-traumatic stress disorder (PTSD). The present article aims at a better understanding of the EMDR neural mechanism. I first review procedural aspects of the EMDR protocol and theoretical hypotheses about EMDR effects, and discuss the reasons why the scientific community is still divided about EMDR. I then move from psychology to physiology, describing eye movement/emotion interaction from the physiological viewpoint, and introduce theoretical and technical tools used in movement research to re-examine the EMDR neural mechanism. Using a recent physiological model for the neuropsychological architecture of motor and cognitive control, the Threshold Interval Modulation with Early Release-Rate of rIse Deviation with Early Release (TIMER-RIDER) model, I explore how attentional control and bilateral stimulation may contribute to EMDR effects. These effects may be obtained by two processes acting in parallel: (i) enhancement of the activity level of the attentional control component; and (ii) bilateral stimulation in any sensorimotor modality, both resulting in lower inhibition, enabling dysfunctional information to be processed and anxiety to be reduced. The TIMER-RIDER model offers quantitative predictions about EMDR effects for future research on its underlying physiological mechanisms.
Risbrough, Victoria B; Glenn, Daniel E; Baker, Dewleen G
The use of quantitative, laboratory-based measures of threat in humans for proof-of-concept studies and target development for novel drug discovery has grown tremendously in the last 2 decades. In particular, in the field of posttraumatic stress disorder (PTSD), human models of fear conditioning have been critical in shaping our theoretical understanding of fear processes and importantly, validating findings from animal models of the neural substrates and signaling pathways required for these complex processes. Here, we will review the use of laboratory-based measures of fear processes in humans including cued and contextual conditioning, generalization, extinction, reconsolidation, and reinstatement to develop novel drug treatments for PTSD. We will primarily focus on recent advances in using behavioral and physiological measures of fear, discussing their sensitivity as biobehavioral markers of PTSD symptoms, their response to known and novel PTSD treatments, and in the case of d-cycloserine, how well these findings have translated to outcomes in clinical trials. We will highlight some gaps in the literature and needs for future research, discuss benefits and limitations of these outcome measures in designing proof-of-concept trials, and offer practical guidelines on design and interpretation when using these fear models for drug discovery.
NASA Astrophysics Data System (ADS)
Poussot-Vassal, Charles; Tanelli, Mara; Lovera, Marco
The complexity of Information Technology (IT) systems is steadily increasing, and system complexity has been recognised as the main obstacle to further advancements of IT. This fact has recently raised energy management issues. Control techniques have been proposed and successfully applied to design Autonomic Computing systems, trading off system performance against energy saving goals. As user behaviour is highly time-varying and workload conditions can change substantially within the same business day, the Linear Parametrically Varying (LPV) framework is particularly promising for modelling such systems. In this chapter, a control-theoretic method to investigate the trade-off between Quality of Service (QoS) requirements and energy saving objectives in the case of admission control in Web service systems is proposed, considering as control variables the server CPU frequency and the admission probability. To quantitatively evaluate the trade-off, a dynamic model of the admission control dynamics is estimated via LPV identification techniques. Based on this model, an optimisation problem within the Model Predictive Control (MPC) framework is set up, by means of which it is possible to investigate the optimal trade-off policy for managing QoS and energy saving objectives at design time, taking explicit account of the system dynamics.
Castillo-Garit, Juan Alberto; del Toro-Cortés, Oremia; Vega, Maria C; Rolón, Miriam; Rojas de Arias, Antonieta; Casañola-Martin, Gerardo M; Escario, José A; Gómez-Barrio, Alicia; Marrero-Ponce, Yovani; Torrens, Francisco; Abad, Concepción
2015-01-01
Two-dimensional bond-based bilinear indices and linear discriminant analysis are used in this report to perform a quantitative structure-activity relationship study to identify new trypanosomicidal compounds. A data set of 440 organic chemicals, 143 with antitrypanosomal activity and 297 having other clinical uses, is used to develop the theoretical models. Two discriminant models, computed using bond-based bilinear indices, are developed, and both show accuracies higher than 86% for training and test sets. The stochastic model correctly identifies nine out of eleven compounds in a set of organic chemicals obtained from our synthetic collaborators. The in vitro antitrypanosomal activity of this set against epimastigote forms of Trypanosoma cruzi is assayed. Both models show good agreement between theoretical predictions and experimental results. Three compounds showed IC50 values for epimastigote elimination (AE) lower than 50 μM, compared with IC50 = 54.7 μM for benznidazole, the reference compound. The IC50 for cytotoxicity of these compounds is at least 5 times greater than their IC50 for AE. Finally, the present algorithm constitutes a step forward in the search for efficient ways of discovering new antitrypanosomal compounds. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Kim, Young Kwan; Kameo, Yoshitaka; Tanaka, Sakae; Adachi, Taiji
2017-10-01
To understand Wolff's law, bone adaptation by remodeling at the cellular and tissue levels has been discussed extensively through experimental and simulation studies. For the clinical application of a bone remodeling simulation, it is significant to establish a macroscopic model that incorporates clarified microscopic mechanisms. In this study, we proposed novel macroscopic models based on the microscopic mechanism of osteocytic mechanosensing, in which the flow of fluid in the lacuno-canalicular porosity generated by fluid pressure gradients plays an important role, and theoretically evaluated the proposed models, taking biological rationales of bone adaptation into account. The proposed models were categorized into two groups according to whether the remodeling equilibrium state was defined globally or locally, i.e., the global or local uniformity models. Each remodeling stimulus in the proposed models was quantitatively evaluated through image-based finite element analyses of a swine cancellous bone, according to two introduced criteria associated with the trabecular volume and orientation at remodeling equilibrium based on biological rationales. The evaluation suggested that nonuniformity of the mean stress gradient in the local uniformity model, one of the proposed stimuli, has high validity. Furthermore, the adaptive potential of each stimulus was discussed based on spatial distribution of a remodeling stimulus on the trabecular surface. The theoretical consideration of a remodeling stimulus based on biological rationales of bone adaptation would contribute to the establishment of a clinically applicable and reliable simulation model of bone remodeling.
NASA Astrophysics Data System (ADS)
Riest, Jonas; Nägele, Gerhard; Liu, Yun; Wagner, Norman J.; Godfrin, P. Douglas
2018-02-01
Recently, atypical static features of microstructural ordering in low-salinity lysozyme protein solutions have been extensively explored experimentally and explained theoretically based on a short-range attractive plus long-range repulsive (SALR) interaction potential. However, the protein dynamics and the relationship to the atypical SALR structure remain to be demonstrated. Here, the applicability of semi-analytic theoretical methods predicting diffusion properties and viscosity in isotropic particle suspensions to low-salinity lysozyme protein solutions is tested. Using the interaction potential parameters previously obtained from static structure factor measurements, our results of Monte Carlo simulations representing seven experimental lysozyme samples indicate that they exist either in dispersed fluid or random percolated states. The self-consistent Zerah-Hansen scheme is used to describe the static structure factor, S(q), which is the input to our calculation schemes for the short-time hydrodynamic function, H(q), and the zero-frequency viscosity η. The schemes account for hydrodynamic interactions included on an approximate level. Theoretical predictions for H(q) as a function of the wavenumber q quantitatively agree with experimental results at small protein concentrations obtained using neutron spin echo measurements. At higher concentrations, qualitative agreement is preserved although the calculated hydrodynamic functions are overestimated. We attribute the differences for higher concentrations and lower temperatures to translational-rotational diffusion coupling induced by the shape and interaction anisotropy of particles and clusters, patchiness of the lysozyme particle surfaces, and the intra-cluster dynamics, features not included in our simple globular particle model. The theoretical results for the solution viscosity, η, are in qualitative agreement with our experimental data even at higher concentrations.
We demonstrate that semi-quantitative predictions of diffusion properties and viscosity of solutions of globular proteins are possible given only the equilibrium structure factor of proteins. Furthermore, we explore the effects of changing the attraction strength on H(q) and η.
Manson, Joseph H.; Gervais, Matthew M.; Fessler, Daniel M. T.; Kline, Michelle A.
2014-01-01
The determinants of conversational dominance are not well understood. We used videotaped triadic interactions among unacquainted same-sex American college students to test predictions drawn from the theoretical distinction between dominance and prestige as modes of human status competition. Specifically, we investigated the effects of physical formidability, facial attractiveness, social status, and self-reported subclinical psychopathy on quantitative (proportion of words produced), participatory (interruptions produced and sustained), and sequential (topic control) dominance. No measure of physical formidability or attractiveness was associated with any form of conversational dominance, suggesting that the characteristics of our study population or experimental frame may have moderated their role in dominance dynamics. Primary psychopathy was positively associated with quantitative dominance and (marginally) overall triad talkativeness, and negatively associated (in men) with affect word use, whereas secondary psychopathy was unrelated to conversational dominance. The two psychopathy factors had significant opposing effects on quantitative dominance in a multivariate model. These latter findings suggest that glibness in primary psychopathy may function to elicit exploitable information from others in a relationally mobile society. PMID:25426962
Resource Letter MPCVW-1: Modeling Political Conflict, Violence, and Wars: A Survey
NASA Astrophysics Data System (ADS)
Morgenstern, Ana P.; Velásquez, Nicolás; Manrique, Pedro; Hong, Qi; Johnson, Nicholas; Johnson, Neil
2013-11-01
This Resource Letter provides a guide to the literature on modeling and explaining political conflict, violence, and wars. Although this literature is dominated by social scientists, multidisciplinary work is currently being developed in the wake of myriad methodological approaches that have sought to analyze and predict political violence. The works covered herein present an overview of this abundance of methodological approaches. Since there is a variety of possible data sets and theoretical approaches, the level of detail and scope of models can vary quite considerably. The review does not provide a summary of the available data sets, but instead highlights recent works on quantitative or multi-method approaches to modeling different forms of political violence. Journal articles and books are organized in the following topics: social movements, diffusion of social movements, political violence, insurgencies and terrorism, and civil wars.
Models of Jovian decametric radiation. [astronomical models of decametric waves
NASA Technical Reports Server (NTRS)
Smith, R. A.
1975-01-01
A critical review is presented of theoretical models of Jovian decametric radiation, with particular emphasis on the Io-modulated emission. The problem is divided into three broad aspects: (1) the mechanism coupling Io's orbital motion to the inner exosphere, (2) the consequent instability mechanism by which electromagnetic waves are amplified, and (3) the subsequent propagation of the waves in the source region and the Jovian plasmasphere. At present there exists no comprehensive theory that treats all of these aspects quantitatively within a single framework. Acceleration of particles by plasma sheaths near Io is proposed as an explanation for the coupling mechanism, while most of the properties of the emission may be explained in the context of cyclotron instability of a highly anisotropic distribution of streaming particles.
The subtle business of model reduction for stochastic chemical kinetics
NASA Astrophysics Data System (ADS)
Gillespie, Dan T.; Cao, Yang; Sanft, Kevin R.; Petzold, Linda R.
2009-02-01
This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.
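The reduction analyzed in this paper can be sketched in a few lines: an exact stochastic simulation (SSA) of S1⇌S2→S3 alongside the slow-scale replacement, a single S3-producing channel with the fast reversible pair averaged out. The rate constants and population sizes below are arbitrary illustrative choices, not values from the paper, and the comparison is only of the mean S3 count.

```python
import numpy as np

def ssa_full(x1, x2, c1, c2, c3, t_end, rng):
    """Exact SSA for S1 <=> S2 -> S3; returns the S3 count at t_end."""
    s3, t = 0, 0.0
    while x1 + x2 > 0:
        a = np.array([c1 * x1, c2 * x2, c3 * x2])   # propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            break
        r = rng.choice(3, p=a / a0)
        if r == 0:   x1, x2 = x1 - 1, x2 + 1        # S1 -> S2
        elif r == 1: x1, x2 = x1 + 1, x2 - 1        # S2 -> S1
        else:        x2, s3 = x2 - 1, s3 + 1        # S2 -> S3
    return s3

def ssa_reduced(xT, c1, c2, c3, t_end, rng):
    """Slow-scale reduction: one S3-producing channel whose rate uses the
    fast-equilibrium mean of S2, i.e. c3 * xT * c1 / (c1 + c2)."""
    s3, t = 0, 0.0
    while xT > 0:
        t += rng.exponential((c1 + c2) / (c3 * xT * c1))
        if t > t_end:
            break
        xT, s3 = xT - 1, s3 + 1
    return s3

rng = np.random.default_rng(42)
runs = 100
full = np.mean([ssa_full(25, 25, 5.0, 5.0, 0.1, 2.0, rng) for _ in range(runs)])
red = np.mean([ssa_reduced(50, 5.0, 5.0, 0.1, 2.0, rng) for _ in range(runs)])
print(f"mean S3 at t=2: full {full:.1f}, reduced {red:.1f}")
```

With the reversible pair equilibrating two orders of magnitude faster than the conversion, the two means agree within sampling noise, which is the regime where the paper shows the reduction is safe.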
A rational account of pedagogical reasoning: teaching by, and learning from, examples.
Shafto, Patrick; Goodman, Noah D; Griffiths, Thomas L
2014-06-01
Much of learning and reasoning occurs in pedagogical situations--situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning. Copyright © 2014 Elsevier Inc. All rights reserved.
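The teacher-learner recursion at the heart of such models can be sketched on a toy concept space. The hypothesis-by-example consistency matrix below is invented for illustration; the fixed-point iteration (the teacher chooses examples in proportion to the learner's posterior, and the learner updates on the teacher's choices) follows the general scheme the abstract describes, not the authors' exact implementation.

```python
import numpy as np

# Tiny concept space: 3 hypotheses x 4 examples (hypothetical).
# L[h, d] = 1 if example d is consistent with hypothesis h.
L = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
prior = np.ones(3) / 3

teacher = L / L.sum(axis=1, keepdims=True)   # start: teach any consistent example
for _ in range(50):
    learner = teacher * prior[:, None]
    learner /= learner.sum(axis=0, keepdims=True)          # P(h | d), Bayes
    teacher = learner / learner.sum(axis=1, keepdims=True) # P(d | h) ∝ P(h | d)
print(np.round(teacher, 2))
```

At the fixed point the teacher concentrates on disambiguating examples (e.g. hypothesis 0 is taught mostly with example 0, the only example unique to it), which is the signature qualitative prediction of pedagogical-sampling models.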
Pollard, Thomas D
2014-12-02
This review illustrates the value of quantitative information including concentrations, kinetic constants and equilibrium constants in modeling and simulating complex biological processes. Although much has been learned about some biological systems without these parameter values, they greatly strengthen mechanistic accounts of dynamical systems. The analysis of muscle contraction is a classic example of the value of combining an inventory of the molecules, atomic structures of the molecules, kinetic constants for the reactions, reconstitutions with purified proteins and theoretical modeling to account for the contraction of whole muscles. A similar strategy is now being used to understand the mechanism of cytokinesis using fission yeast as a favorable model system. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Mitotic wavefronts mediated by mechanical signaling in early Drosophila embryos
NASA Astrophysics Data System (ADS)
Kang, Louis; Idema, Timon; Liu, Andrea; Lubensky, Tom
2013-03-01
Mitosis in the early Drosophila embryo demonstrates spatial and temporal correlations in the form of wavefronts that travel across the embryo in each cell cycle. This coordinated phenomenon requires a signaling mechanism, which we suggest is mechanical in origin. We have constructed a theoretical model that supports nonlinear wavefront propagation in a mechanically-excitable medium. Previously, we have shown that this model captures quantitatively the wavefront speed as it varies with cell cycle number, for reasonable values of the elastic moduli and damping coefficient of the medium. Now we show that our model also captures the displacements of cell nuclei in the embryo in response to the traveling wavefront. This new result further supports the idea that mechanical signaling may play an important role in mediating mitotic wavefronts.
Analysis of the cooling of continuous flow helium cryostats
NASA Astrophysics Data System (ADS)
Pust, L.
A mathematical model of the cooling of a continuous-flow cryostat which takes into account real values of the specific and latent heat of the cryogenic fluid and of the specific heat of the cryostat material is presented. The amount of liquid in the cooling fluid and four parasitic heat flows, caused by radiation and by heat conduction in the construction materials and in the residual gas in the vacuum insulation, are also taken into account. The influence of different model parameters on performance, particularly in the non-stationary regime, is demonstrated by means of numerical solutions of the modelling equations. A quantitative criterion that assesses the properties of the planned cryostat is formulated. The theoretical conclusions are compared with measurements performed on a continuous-flow helium cryostat.
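The kind of numerical solution the abstract describes can be illustrated with a lumped-capacitance cool-down sketch: the cryostat temperature evolves under a gas-enthalpy cooling term and parasitic conduction plus radiation loads. Every coefficient below (heat capacity law, conduction and radiation constants, flow rate) is a hypothetical placeholder, not a value from the paper.

```python
def cool_down(T0, T_gas, mdot, t_end, dt=1.0):
    """Forward-Euler integration of a lumped cool-down model,
    C(T) dT/dt = Q_parasitic(T) - Q_cooling(T).
    All coefficients are illustrative placeholders, not fitted values."""
    T = T0
    for _ in range(int(t_end / dt)):
        C = 50.0 + 0.5 * T                    # J/K, effective heat capacity
        q_cool = mdot * 5.2 * (T - T_gas)     # W; helium gas, cp ~ 5.2 J/(g K)
        q_par = 0.01 * (300.0 - T) + 5e-10 * (300.0**4 - T**4)  # conduction + radiation, W
        T += dt * (q_par - q_cool) / C
        if T <= T_gas:
            return T_gas
    return T

T_final = cool_down(T0=300.0, T_gas=4.2, mdot=0.02, t_end=36000.0)
print(f"temperature after 10 h: {T_final:.1f} K")
```

The toy model settles at the temperature where parasitic loads balance the enthalpy pickup of the gas stream, which is exactly the kind of steady-state criterion the paper formalizes.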
The effect of solute concentration on hindered gradient diffusion in polymeric gels
NASA Astrophysics Data System (ADS)
Buck, Kristan K. S.; Dungan, Stephanie R.; Phillips, Ronald J.
1999-10-01
The effect of solute concentration on hindered diffusion of sphere-like colloidal solutes in stiff polymer hydrogels is examined theoretically and experimentally. In the theoretical development, it is shown that the presence of the gel fibres enhances the effect of concentration on the thermodynamic driving force for gradient diffusion, while simultaneously reducing the effect of concentration on the hydrodynamic drag. The result is that gradient diffusion depends more strongly on solute concentration in gels than it does in pure solution, by an amount that depends on the partition coefficient and hydraulic permeability of the gel-solute system. Quantitative calculations are made to determine the concentration-dependent diffusivity correct to first order in solute concentration. In order to compare the theoretical predictions with experimental data, rates of diffusion have been measured for nonionic micelles and globular proteins in solution and agarose hydrogels at two gel concentrations. The measurements were performed by using holographic interferometry, through which one monitors changes in refractive index as gradient diffusion takes place within a transparent gel. If the solutes are modelled as spheres with short-range repulsive interactions, then the experimentally measured concentration dependence of the diffusivities of both the protein and micelles is in good agreement with the theoretical predictions.
Genetic constraints on adaptation: a theoretical primer for the genomics era.
Connallon, Tim; Hall, Matthew D
2018-06-01
Genetic constraints are features of inheritance systems that slow or prohibit adaptation. Several population genetic mechanisms of constraint have received sustained attention within the field since they were first articulated in the early 20th century. This attention is now reflected in a rich, and still growing, theoretical literature on the genetic limits to adaptive change. In turn, empirical research on constraints has seen a rapid expansion over the last two decades in response to changing interests of evolutionary biologists, along with new technologies, expanding data sets, and creative analytical approaches that blend mathematical modeling with genomics. Indeed, one of the most notable and exciting features of recent progress in genetic constraints is the close connection between theoretical and empirical research. In this review, we discuss five major population genetic contexts of genetic constraint: genetic dominance, pleiotropy, fitness trade-offs between types of individuals of a population, sign epistasis, and genetic linkage between loci. For each, we outline historical antecedents of the theory, specific contexts where constraints manifest, and their quantitative consequences for adaptation. From each of these theoretical foundations, we discuss recent empirical approaches for identifying and characterizing genetic constraints, each grounded and motivated by this theory, and outline promising areas for future work. © 2018 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Wang, Yu; Wang, Min; Jiang, Jingfeng
2017-02-01
Shear wave elastography is increasingly being used to non-invasively stage liver fibrosis by measuring shear wave speed (SWS). This study quantitatively investigates intrinsic variations among SWS measurements obtained from heterogeneous media such as fibrotic livers. More specifically, it aims to demonstrate that intrinsic variations in SWS measurements, in general, follow a non-Gaussian distribution and are related to the heterogeneous nature of the medium being measured. Using the principle of maximum entropy (ME), our primary objective is to derive a probability density function (PDF) of the SWS distribution in conjunction with a lossless stochastic tissue model. Our secondary objective is to evaluate the performance of the proposed PDF using Monte Carlo (MC)-simulated shear wave (SW) data against three other commonly used PDFs. Based on statistical evaluation criteria, initial results showed that the derived PDF fits better to MC-simulated SWS data than the other three PDFs. It was also found that SW fronts stabilized after a short (compared with the SW wavelength) travel distance in lossless media. Furthermore, in lossless media, the distance required to stabilize the SW propagation was not correlated to the SW wavelength at the low frequencies investigated (i.e. 50, 100 and 150 Hz). Examination of the MC simulation data suggests that elastic (shear) wave scattering became more pronounced when the volume fraction of hard inclusions increased from 10 to 30%. In conclusion, using the principle of ME, we theoretically demonstrated for the first time that SWS measurements in this model follow a non-Gaussian distribution. Preliminary data indicated that the proposed PDF can quantitatively represent intrinsic variations in SWS measurements simulated using a two-phase random medium model. The advantages of the proposed PDF are its physically meaningful parameters and solid theoretical basis.
Intramolecular Long-Distance Electron Transfer in Organic Molecules
NASA Astrophysics Data System (ADS)
Closs, Gerhard L.; Miller, John R.
1988-04-01
Intramolecular long-distance electron transfer (ET) has been actively studied in recent years in order to test existing theories in a quantitative way and to provide the necessary constants for predicting ET rates from simple structural parameters. Theoretical predictions of an ``inverted region,'' where increasing the driving force of the reaction will decrease its rate, have begun to be experimentally confirmed. A predicted nonlinear dependence of ET rates on the polarity of the solvent has also been confirmed. This work has implications for the design of efficient photochemical charge-separation devices. Other studies have been directed toward determining the distance dependence of ET reactions. Model studies on different series of compounds give similar distance dependences. When different stereochemical structures are compared, it becomes apparent that geometrical factors must be taken into account. Finally, the mechanism of coupling between donor and acceptor in weakly interacting systems has become of major importance. The theoretical and experimental evidence favors a model in which coupling is provided by the interaction with the orbitals of the intervening molecular fragments, although more experimental evidence is needed. Studies on intramolecular ET in organic model compounds have established that current theories give an adequate description of the process. The separation of electronic from nuclear coordinates is only a convenient approximation applied to many models, but in long-distance ET it works remarkably well. It is particularly gratifying to see Marcus' ideas finally confirmed after three decades of skepticism. By obtaining the numbers for quantitative correlations between rates and distances, these experiments have shown that saturated hydrocarbon fragments can ``conduct'' electrons over tens of angstroms. 
A dramatic demonstration of this fact has recently been obtained by tunneling electron microscopy on Langmuir-Blodgett films, showing in a pictorial fashion that electrons prefer to travel from cathode to anode through the fatty-acid chains (46).
The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning
Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.
2017-01-01
Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359
Blank, Hartmut
2005-02-01
Traditionally, the causes of interference phenomena were sought in "real" or "hard" memory processes such as unlearning, response competition, or inhibition, which serve to reduce the accessibility of target items. I propose an alternative approach which does not deny the influence of such processes but highlights a second, equally important, source of interference: the conversion (Tulving, 1983) of accessible memory information into memory performance. Conversion is conceived as a problem-solving-like activity in which the rememberer tries to find solutions to a memory task. Conversion-based interference effects are traced to different conversion processes in the experimental and control conditions of interference designs. I present a simple theoretical model that quantitatively predicts the resulting amount of interference. In two paired-associate learning experiments using two different types of memory tests, these predictions were corroborated. Relations of the present approach to traditional accounts of interference phenomena and implications for eyewitness testimony are discussed.
Fourier transform of delayed fluorescence as an indicator of herbicide concentration.
Guo, Ya; Tan, Jinglu
2014-12-21
It is well known that delayed fluorescence (DF) from Photosystem II (PSII) of plant leaves can potentially be used to sense herbicide pollution and to evaluate the effect of herbicides on plant leaves. Previous research using DF as a measure of herbicides was conducted mainly in the time domain and often obtained only qualitative correlations. The Fourier transform is widely used to analyze signals, and viewing the DF signal in the frequency domain may allow separation of signal components and provide a quantitative method for sensing herbicides; however, the Fourier transform of DF has not previously been used as an indicator of herbicide concentration. In this work, the relationship between the Fourier transform of DF and herbicide concentration was theoretically modelled and analyzed, which immediately yielded a quantitative method for measuring herbicide concentration in the frequency domain. Experiments were performed to validate the developed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
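The frequency-domain idea can be sketched numerically: generate a DF-like decay whose amplitude and rate depend on herbicide concentration, take its discrete Fourier transform, and read off a spectral magnitude that varies monotonically with concentration. The decay kinetics below are invented for illustration and are not the paper's PSII model.

```python
import numpy as np

def df_signal(t, conc, A=1.0, k=0.5):
    """Hypothetical DF decay: higher herbicide concentration lowers the
    amplitude and speeds up the decay (illustrative kinetics only)."""
    return A / (1.0 + conc) * np.exp(-k * (1.0 + conc) * t)

t = np.linspace(0.0, 20.0, 2048)
mags = [abs(np.fft.rfft(df_signal(t, c))[0]) for c in (0.0, 0.5, 1.0, 2.0)]
print(["%.1f" % m for m in mags])  # spectral magnitude falls as concentration rises
```

Because the zero-frequency component is proportional to the integrated DF signal, it decreases monotonically with concentration in this toy model, giving the kind of quantitative calibration curve the paper derives analytically.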
An approach to computing direction relations between separated object groups
NASA Astrophysics Data System (ADS)
Yan, H.; Wang, Z.; Li, J.
2013-06-01
Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups; it then constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
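The final step of the pipeline, turning a quantitative direction into a qualitative label, can be sketched as follows. Note this is a crude centroid-based stand-in: the paper derives the direction from Voronoi-edge normals between the triangulated groups, whereas here a single mean displacement vector is binned into eight qualitative sectors.

```python
import math

def qualitative_direction(group_a, group_b):
    """Map the displacement from group A's centroid to group B's centroid
    onto one of eight qualitative direction labels (a simplified stand-in
    for the paper's Voronoi-edge-normal construction)."""
    ax = sum(p[0] for p in group_a) / len(group_a)
    ay = sum(p[1] for p in group_a) / len(group_a)
    bx = sum(p[0] for p in group_b) / len(group_b)
    by = sum(p[1] for p in group_b) / len(group_b)
    angle = math.degrees(math.atan2(by - ay, bx - ax)) % 360
    labels = ["east", "northeast", "north", "northwest",
              "west", "southwest", "south", "southeast"]
    return labels[int((angle + 22.5) // 45) % 8]

print(qualitative_direction([(0, 0), (1, 0)], [(5, 5), (6, 6)]))  # → northeast
```

Each 45° sector is centered on its compass label, so a displacement at 47.7° falls in the "northeast" bin.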
NASA Astrophysics Data System (ADS)
Bamba, Motoaki; Ogawa, Tetsuo
2016-03-01
We investigate theoretically light amplification by stimulated emission of radiation (laser) in the ultrastrong light-matter interaction regime under the two-level and single-mode approximations. The conventional picture of the laser breaks down under ultrastrong interaction. Instead, we must explicitly discuss the dynamics of the electric field and of the magnetic field separately, which makes the "laser" qualitatively different from the conventional laser. We found that the laser is generally accompanied by odd-order harmonics of the electromagnetic fields both inside and outside the cavity and by a synchronization with an oscillation of the atomic population. A bistability is also demonstrated. However, since our model is quite simplified, we obtained quantitatively different results from the Hamiltonians in the velocity and length forms of the light-matter interaction, while the appearance of the multiple harmonics and the bistability is qualitatively reliable.
Connor, Kevin; Magee, Brian
2014-10-01
This paper presents a risk assessment of exposure to metal residues in laundered shop towels by workers. The concentrations of 27 metals measured in a synthetic sweat leachate were used to estimate the releasable quantity of metals which could be transferred to workers' skin. Worker exposure was evaluated quantitatively with an exposure model that focused on towel-to-hand transfer and subsequent hand-to-food or -mouth transfers. The exposure model was based on conservative, but reasonable assumptions regarding towel use and default exposure factor values from the published literature or regulatory guidance. Transfer coefficients were derived from studies representative of the exposures to towel users. Contact frequencies were based on assumed high-end use of shop towels, but constrained by a theoretical maximum dermal loading. The risk estimates for workers developed for all metals were below applicable regulatory risk benchmarks. The risk assessment for lead utilized the Adult Lead Model and concluded that predicted lead intakes do not constitute a significant health hazard based on potential worker exposures. Uncertainties are discussed in relation to the overall confidence in the exposure estimates developed for each exposure pathway and the likelihood that the exposure model is under- or overestimating worker exposures and risk. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus), to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL’s focus toward this broader goal was to refine and apply experimental and computational tools needed to provide quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates for different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescent microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISSD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between biocompatibility of nanoparticles and their physical form in a controlled manner.
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
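The Kuramoto simulation underlying this study can be sketched in a few lines of numpy. Here a uniform all-to-all coupling matrix stands in for the 66-node empirical structural connectivity, and the parameter values are illustrative only.

```python
import numpy as np

def kuramoto(C, omega, K=1.0, dt=0.01, steps=2000, seed=0):
    """Euler integration of the Kuramoto model on coupling matrix C:
    dθ_i/dt = ω_i + K * Σ_j C_ij sin(θ_j - θ_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2 * np.pi, len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]   # diff[i, j] = θ_j - θ_i
        theta = theta + dt * (omega + K * (C * np.sin(diff)).sum(axis=1))
    return theta

n = 20
C = np.ones((n, n)) / n                           # uniform stand-in for DSI connectivity
omega = np.random.default_rng(1).normal(0.0, 0.1, n)   # natural frequencies
theta = kuramoto(C, omega, K=2.0)
R = abs(np.exp(1j * theta).mean())                # order parameter: 1 = full synchrony
print(f"order parameter R = {R:.2f}")
```

Above the critical coupling the oscillators lock and R approaches 1; sweeping K (or replacing C with an empirical connectome at different connection densities) reproduces the different dynamic states the paper compares against empirical functional networks.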
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
A quantitative model of honey bee colony population dynamics.
Khoury, David S; Myerscough, Mary R; Barron, Andrew B
2011-04-18
Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses about the causes of colony failure, we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate below which colonies maintain a stable population size. If death rates are sustained above this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
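A compartment model of this kind can be sketched as a pair of coupled rate equations for hive bees and foragers. The sketch below is in the spirit of the abstract, with a forager death rate m; the parameter values and the specific eclosion and recruitment terms are illustrative assumptions, not taken from the paper.

```python
def colony_trajectory(m, L=2000.0, w=27000.0, a=0.25, sigma=0.75,
                      H0=16000.0, F0=8000.0, dt=0.05, days=500):
    """Two-compartment hive-bee/forager sketch (illustrative parameters).
    H: hive bees, F: foragers, m: forager death rate (per day).
    Eclosion saturates with colony size N = H + F; hive bees are recruited
    to foraging at a rate reduced by social inhibition from foragers.
    Returns (H, F) after the given number of days of Euler integration."""
    H, F = H0, F0
    for _ in range(int(days / dt)):
        N = H + F
        if N <= 0:
            return 0.0, 0.0
        eclosion = L * N / (w + N)                  # new hive bees per day
        recruit = H * max(a - sigma * F / N, 0.0)   # hive bees becoming foragers
        H += dt * (eclosion - recruit)
        F += dt * (recruit - m * F)
    return H, F
```

Sweeping m in this sketch reproduces the qualitative behavior described in the abstract: below a critical death rate the population settles to a stable size, above it the colony declines to failure.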
Nonlinear-programming mathematical modeling of coal blending for power plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Longhua; Zhou Junhu; Yao Qiang
At present, most coal blending work is guided by experience or by linear programming (LP), which cannot properly reflect the complicated characteristics of coal. Experimental and theoretical research shows that most properties of a coal blend cannot always be expressed as a linear function of the properties of the individual coals in the blend. The authors introduced nonlinear functions and processes (including neural networks and fuzzy mathematics), based on experiments conducted by the authors and other researchers, to quantitatively describe the complex parameters of coal blends. Finally, nonlinear-programming (NLP) mathematical modeling of coal blending is introduced and utilized in the Hangzhou Coal Blending Center. Predictions based on the new method differed from those based on LP modeling. The authors conclude that it is very important to introduce NLP modeling, instead of LP modeling, into the work of coal blending.
Advanced quantitative measurement methodology in physics education research
NASA Astrophysics Data System (ADS)
Wang, Jing
The ultimate goal of physics education research (PER) is to develop a theoretical framework to understand and improve the learning process. In this journey of discovery, assessment serves as our headlamp and alpenstock. It sometimes detects signals in student mental structures, and sometimes presents the difference between expert understanding and novice understanding. Quantitative assessment is an important area in PER. Developing research-based effective assessment instruments and making meaningful inferences based on these instruments have always been important goals of the PER community. Quantitative studies are often conducted to provide bases for test development and result interpretation. Statistics are frequently used in quantitative studies. The selection of statistical methods and the interpretation of their results should be connected to the educational context. In making this connection, issues of educational modeling are often raised. Many widely used statistical methods do not make assumptions about the mental structure of subjects, nor do they provide explanations tailored to the educational audience. There are also other methods that consider the mental structure and are tailored to provide strong connections between statistics and education. These methods often involve model assumptions and parameter estimation, and are mathematically complicated. The dissertation provides a practical view of some advanced quantitative assessment methods. The common feature of these methods is that they all make educational/psychological model assumptions beyond the minimum mathematical model. The purpose of the study is to provide a comparison between these advanced methods and the pure mathematical methods. The comparison is based on the performance of the two types of methods in physics education settings. In particular, the comparison uses both physics content assessments and scientific ability assessments.
The dissertation includes three parts. The first part involves the comparison between item response theory (IRT) and classical test theory (CTT). The two theories both provide test item statistics for educational inferences and decisions. The two theories are both applied to Force Concept Inventory data obtained from students enrolled at The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory. The IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation is on the measure of association for binary data. In quantitative assessment, binary data is often encountered because of its simplicity. The current popular measures of association fail under some extremely unbalanced conditions. However, the occurrence of these conditions is not rare in educational data. Two popular association measures, Pearson's correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages. Special attention to the test and data conditions is necessary. The last part of the dissertation is focused on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed. Typical misunderstandings and misuses of EFA are explored.
The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment on scientific reasoning skills. The reasoning ability structures for U.S. and Chinese students at different educational levels are given by the analysis. A final discussion on the advanced quantitative assessment methodology and the pure mathematical methodology is presented at the end.
Ling, Gilbert N.
1970-01-01
A theoretical equation is presented for the control of cooperative adsorption on proteins and other linear macromolecules by hormones, drugs, ATP, and other "cardinal adsorbents." With reasonable accuracy, this equation describes quantitatively the control of oxygen binding to hemoglobin by 2,3-diphosphoglycerate and by inosine hexaphosphate. PMID:5272319
Chen , Y; Yan, B; Chalovich, J M; Brenner, B
2001-01-01
It was previously shown that a one-dimensional Ising model could successfully simulate the equilibrium binding of myosin S1 to regulated actin filaments (T. L. Hill, E. Eisenberg and L. Greene, Proc. Natl. Acad. Sci. U.S.A. 77:3186-3190, 1980). However, the time course of myosin S1 binding to regulated actin was thought to be incompatible with this model, and a three-state model was subsequently developed (D. F. McKillop and M. A. Geeves, Biophys. J. 65:693-701, 1993). A quantitative analysis of the predicted time course of myosin S1 binding to regulated actin, however, was never done for either model. Here we present the procedure for the theoretical evaluation of the time course of myosin S1 binding for both models and then show that 1) the Hill model can predict the "lag" in the binding of myosin S1 to regulated actin that is observed in the absence of Ca++ when S1 is in excess of actin, and 2) both models generate very similar families of binding curves when [S1]/[actin] is varied. This result shows that, just based on the equilibrium and pre-steady-state kinetic binding data alone, it is not possible to differentiate between the two models. Thus, the model of Hill et al. cannot be ruled out on the basis of existing pre-steady-state and equilibrium binding data. Physical mechanisms underlying the generation of the lag in the Hill model are discussed. PMID:11325734
Lumped parametric model of the human ear for sound transmission.
Feng, Bin; Gan, Rong Z
2004-09-01
A lumped parametric model of the human auditory periphery, consisting of six masses suspended with six springs and ten dashpots, was proposed. This model provides the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data, and then determined through a parameter optimization process. The transfer function of the middle ear obtained from human temporal bone experiments with laser Doppler interferometers was used to create the target function during the optimization process. It was found that, among the 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of these parameters is provided, with applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of a physical model of the ear.
Self-consistent approach for neutral community models with speciation
NASA Astrophysics Data System (ADS)
Haegeman, Bart; Etienne, Rampal S.
2010-03-01
Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical distributions of species abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model, in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful for studying other speciation processes in neutral community models as well.
Bayram, Jamil D; Zuabi, Shawki
2012-04-01
The interaction between the acute medical consequences of a Multiple Casualty Event (MCE) and the total medical capacity of the community affected determines if the event amounts to an acute medical disaster. There is a need for a comprehensive quantitative model in MCE that would account for both prehospital and hospital-based acute medical systems, leading to the quantification of acute medical disasters. Such a proposed model needs to be flexible enough in its application to accommodate a priori estimation as part of the decision-making process and a posteriori evaluation for total quality management purposes. The concept proposed by de Boer et al in 1989, along with the disaster metrics quantitative models proposed by Bayram et al on hospital surge capacity and prehospital medical response, were used as theoretical frameworks for a new comprehensive model, taking into account both prehospital and hospital systems, in order to quantify acute medical disasters. A quantitative model called the Acute Medical Severity Index (AMSI) was developed. AMSI is the proportion of the Acute Medical Burden (AMB) resulting from the event, compared to the Total Medical Capacity (TMC) of the community affected; AMSI = AMB/TMC. In this model, AMB is defined as the sum of critical (T1) and moderate (T2) casualties caused by the event, while TMC is a function of the Total Hospital Capacity (THC) and the medical rescue factor (R) accounting for the hospital-based and prehospital medical systems, respectively. Qualitatively, the authors define acute medical disaster as "a state after any type of Multiple Casualty Event where the Acute Medical Burden (AMB) exceeds the Total Medical Capacity (TMC) of the community affected." Quantitatively, an acute medical disaster has an AMSI value of more than one (AMB / TMC > 1). An acute medical incident has an AMSI value of less than one, without the need for medical surge. 
An acute medical emergency has an AMSI value of less than one with utilization of surge capacity (prehospital or hospital-based). An acute medical crisis has an AMSI value between 0.9 and 1, approaching the threshold for an actual medical disaster. A novel quantitative taxonomy in MCE has been proposed by modeling the Acute Medical Severity Index (AMSI). This model accounts for both hospital and prehospital systems, and quantifies acute medical disasters. Prospective applications of various components of this model are encouraged to further verify its applicability and validity.
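The AMSI arithmetic lends itself to a direct transcription. In the sketch below, AMB = T1 + T2 follows the abstract; the abstract states only that TMC is a function of THC and the rescue factor R, so the simple product used here, and the exact classification thresholds, are illustrative assumptions.

```python
def amsi(t1, t2, total_hospital_capacity, rescue_factor):
    """Acute Medical Severity Index sketch: AMSI = AMB / TMC.
    AMB = T1 + T2 (critical plus moderate casualties), per the abstract.
    TMC is only described as a function of THC and R; a product is
    assumed here purely for illustration."""
    amb = t1 + t2
    tmc = total_hospital_capacity * rescue_factor
    return amb / tmc

def classify(a, surge_used=False):
    """Taxonomy from the abstract (threshold handling is illustrative)."""
    if a > 1:
        return "acute medical disaster"
    if 0.9 <= a <= 1:
        return "acute medical crisis"
    return "acute medical emergency" if surge_used else "acute medical incident"
```

For example, 120 critical and 80 moderate casualties against an assumed effective capacity of 150 gives AMSI = 200/150 > 1, i.e. an acute medical disaster under this taxonomy.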
Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.
1987-01-01
The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication of the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important, and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs. non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected, which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low-temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we interpret that 60% of the deformation occurred by simple shear. © 1987.
Critical behavior of subcellular density organization during neutrophil activation and migration.
Baker-Groberg, Sandra M; Phillips, Kevin G; Healy, Laura D; Itakura, Asako; Porter, Juliana E; Newton, Paul K; Nan, Xiaolin; McCarty, Owen J T
2015-12-01
Physical theories of active matter continue to provide a quantitative understanding of dynamic cellular phenomena, including cell locomotion. Although various investigations of the rheology of cells have identified important viscoelastic and traction force parameters for use in these theoretical approaches, a key variable has remained elusive both in theoretical and experimental approaches: the spatiotemporal behavior of the subcellular density. The evolution of the subcellular density has been qualitatively observed for decades as it provides the source of image contrast in label-free imaging modalities (e.g., differential interference contrast, phase contrast) used to investigate cellular specimens. While these modalities directly visualize cell structure, they do not provide quantitative access to the structures being visualized. We present an established quantitative imaging approach, non-interferometric quantitative phase microscopy, to elucidate the subcellular density dynamics in neutrophils undergoing chemokinesis following uniform bacterial peptide stimulation. Through this approach, we identify a power law dependence of the neutrophil mean density on time with a critical point, suggesting a critical density is required for motility on 2D substrates. Next we elucidate a continuum law relating mean cell density, area, and total mass that is conserved during neutrophil polarization and migration. Together, our approach and quantitative findings will enable investigators to define the physics coupling cytoskeletal dynamics with subcellular density dynamics during cell migration.
ERIC Educational Resources Information Center
Smeyers, Paul
2008-01-01
Generally educational research is grounded in the empirical traditions of the social sciences (commonly called quantitative and qualitative methods) and is as such distinguished from other forms of scholarship such as theoretical, conceptual or methodological essays, critiques of research traditions and practices and those studies grounded in the…
ERIC Educational Resources Information Center
Yilmaz, Kaya
2013-01-01
There has been much discussion about quantitative and qualitative approaches to research in different disciplines. In the behavioural and social sciences, these two paradigms are compared to reveal their relative strengths and weaknesses. But the debate about both traditions has commonly taken place in academic books. It is hard to find an article…
ERIC Educational Resources Information Center
Balarabe Kura, Sulaiman Y.
2012-01-01
There is a germane relationship between qualitative and quantitative approaches to social science research. The relationship is empirically and theoretically demonstrated by poverty researchers. The study of poverty, as argued in this article, is a study of both numbers and contextualities. This article provides a general overview of qualitative…
Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz
2015-10-06
In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with root mean square error of prediction (RMSEP) used as the fitness function for descriptor selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, and that GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from the Bacillus subtilis proteome (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
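Descriptor selection with RMSEP as the fitness function, as described in this abstract, can be sketched with a minimal genetic algorithm. Plain least squares stands in for PLS, and the population size, genetic operators, and synthetic data are assumptions, not the study's settings.

```python
import numpy as np

def rmsep(X_tr, y_tr, X_te, y_te, mask):
    """Fit least squares on the selected descriptor columns; return the
    root mean square error of prediction on the held-out set.
    (Plain least squares stands in for the PLS used in the study.)"""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    A = np.column_stack([np.ones(len(X_tr)), X_tr[:, cols]])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    pred = np.column_stack([np.ones(len(X_te)), X_te[:, cols]]) @ coef
    return float(np.sqrt(np.mean((pred - y_te) ** 2)))

def ga_select(X_tr, y_tr, X_te, y_te, pop=30, gens=40, p_mut=0.05, seed=0):
    """Minimal GA for descriptor selection: bit masks as chromosomes,
    RMSEP as fitness, tournament selection, uniform crossover, bit-flip
    mutation, and elitism. Illustrative, not the paper's exact GA."""
    rng = np.random.default_rng(seed)
    n_feat = X_tr.shape[1]
    P = rng.random((pop, n_feat)) < 0.2          # sparse initial masks
    for _ in range(gens):
        fit = np.array([rmsep(X_tr, y_tr, X_te, y_te, m) for m in P])
        new = [P[fit.argmin()].copy()]           # elitism: keep the best
        while len(new) < pop:
            i, j = rng.integers(pop, size=2)
            a = P[i] if fit[i] < fit[j] else P[j]          # tournament 1
            i, j = rng.integers(pop, size=2)
            b = P[i] if fit[i] < fit[j] else P[j]          # tournament 2
            child = np.where(rng.random(n_feat) < 0.5, a, b)  # crossover
            child ^= rng.random(n_feat) < p_mut              # mutation
            new.append(child)
        P = np.array(new)
    fit = np.array([rmsep(X_tr, y_tr, X_te, y_te, m) for m in P])
    return P[fit.argmin()]
```

On synthetic data where only a couple of descriptors carry signal, the GA reliably recovers them while discarding the noise descriptors.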
Bayesian uncertainty quantification in linear models for diffusion MRI.
Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans
2018-03-29
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
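The core idea, recasting linear least-squares as a Bayesian model with a closed-form Gaussian posterior, including for affine functions of the coefficients, can be sketched generically. The isotropic Gaussian prior and known noise variance below are simplifying assumptions; the dMRI-specific bases (DTI, MAP-MRI, CSD) are not reproduced.

```python
import numpy as np

def bayesian_linear_posterior(X, y, sigma2, tau2):
    """Closed-form Gaussian posterior for coefficients w in y = X w + noise,
    with noise variance sigma2 and isotropic prior w ~ N(0, tau2 * I).
    Returns the posterior mean vector and covariance matrix."""
    n, p = X.shape
    prec = X.T @ X / sigma2 + np.eye(p) / tau2   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ X.T @ y / sigma2
    return mean, cov

def affine_quantity_posterior(a, b, mean, cov):
    """Posterior of an affine quantity q = a @ w + b: also Gaussian,
    with mean a @ mean + b and variance a @ cov @ a."""
    return a @ mean + b, a @ cov @ a
```

The variance returned for an affine quantity is exactly the kind of per-subject uncertainty the abstract proposes using to downweight subjects in a group analysis.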
Compound analysis via graph kernels incorporating chirality.
Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya
2010-12-01
High accuracy is paramount when predicting biochemical characteristics using Quantitative Structural-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
Enhanced optoelastic interaction range in liquid crystals with negative dielectric anisotropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simoni, F.; Lalli, S.; Lucchetti, L.
2014-01-06
We demonstrate that the long-range interaction between surface-functionalized microparticles immersed in a nematic liquid crystal—a “nematic colloid”—and a laser-induced “ghost colloid” can be enhanced by a low-voltage quasistatic electric field when the nematic mesophase has a negative dielectric anisotropy. The optoelastic trapping distance is shown to be enhanced by a factor of up to 2.5 in the presence of an electric field. The experimental data are quantitatively described by a theoretical model accounting for the spatial overlap between the orientational distortions around the microparticle and those induced by the trapping light beam itself.
Equatorial waves in the stratosphere of Uranus
NASA Technical Reports Server (NTRS)
Hinson, David P.; Magalhaes, Julio A.
1991-01-01
Analyses of radio occultation data from Voyager 2 have led to the discovery and characterization of an equatorial wave in the Uranus stratosphere. The observed quasi-periodic vertical atmospheric density variations are in close agreement with theoretical predictions for a wave that propagates vertically through the observed background structure of the stratosphere. Quantitative comparisons between measurements obtained at immersion and at emersion yielded constraints on the meridional and zonal structure of the wave; the fact that the two sets of measurements are correlated suggests a wave of planetary scale. Two equatorial wave models are proposed for the wave.
Self-diffusion in compressively strained Ge
NASA Astrophysics Data System (ADS)
Kawamura, Yoko; Uematsu, Masashi; Hoshi, Yusuke; Sawano, Kentarou; Myronov, Maksym; Shiraki, Yasuhiro; Haller, Eugene E.; Itoh, Kohei M.
2011-08-01
Under a compressive biaxial strain of ~0.71%, Ge self-diffusion has been measured using an isotopically controlled Ge single-crystal layer grown on a relaxed Si0.2Ge0.8 virtual substrate. The self-diffusivity is enhanced by the compressive strain, and its behavior is fully consistent with the theoretical prediction of a generalized activation volume model of simple vacancy-mediated diffusion, reported by Aziz et al. [Phys. Rev. B 73, 054101 (2006)]. An activation volume of (-0.65±0.21) times the Ge atomic volume quantitatively describes the observed enhancement due to the compressive biaxial strain very well.
The challenge of identifying greenhouse gas-induced climatic change
NASA Technical Reports Server (NTRS)
Maccracken, Michael C.
1992-01-01
Meeting the challenge of identifying greenhouse gas-induced climatic change involves three steps. First, observations of critical variables must be assembled, evaluated, and analyzed to determine that there has been a statistically significant change. Second, reliable theoretical (model) calculations must be conducted to provide a definitive set of changes for which to search. Third, a quantitative and statistically significant association must be made between the projected and observed changes to exclude the possibility that the changes are due to natural variability or other factors. This paper provides a qualitative overview of scientific progress in successfully fulfilling these three steps.
Benefits, barriers, and cues to action of yoga practice: a focus group approach.
Atkinson, Nancy L; Permuth-Levine, Rachel
2009-01-01
To explore perceived benefits, barriers, and cues to action of yoga practice among adults. Focus groups were conducted with persons who had never practiced yoga, practitioners of one year or less, and practitioners for more than one year. The Health Belief Model was the theoretical foundation of inquiry. All participants acknowledged a variety of benefits of yoga. Barriers outweighed benefits among persons who had never practiced despite knowledge of benefits. Positive experiences with yoga and yoga instructors facilitated practice. Newly identified benefits and barriers indicate the need for quantitative research and behavioral trials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokaras, D.; Andrianis, M.; Lagoyannis, A.
The cascade L-shell x-ray emission of metallic Fe, as incident polarized and unpolarized monochromatic radiation passes the 1s ionization threshold, is investigated by means of moderate-resolution, quantitative x-ray spectrometry. A full ab initio theoretical investigation of the L-shell x-ray emission processes is performed, based on a detailed, straightforward construction of the cascade decay trees within the Pauli-Fock approximation. The agreement obtained between experiment and the presented theory is discussed with respect to the accuracy of advanced atomic models as well as its significance for the characterization capabilities of x-ray fluorescence (XRF) analysis.
Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André
2011-01-01
Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
Study of the Electronic Surface States of III-V Compounds and Silicon.
1981-03-31
region in metal/Si interfaces is thus at most quantitative, with increasing intermixing going from Ag/Si, Cu/Si, Ni/Si, PdSi to Au/Si. This ... to-noise ratio better than 10². At the present time, the above argument on cross sections cannot be put in a completely quantitative way since the ... of the intensity (0-23 in Fig. 2) when the system becomes richer in the metal ... a search effort (also theoretical) is made for a more quantitative ...
Theoretical foundations for a quantitative approach to paleogenetics. I, II.
NASA Technical Reports Server (NTRS)
Holmquist, R.
1972-01-01
It is shown that neglecting the phenomena of multiple hits, back mutation, and chance coincidence can introduce errors larger than 100% into the calculated value of the average number of nucleotide base differences expected between two homologous polynucleotides. Mathematical formulas are derived to correct quantitatively for these effects. It is pointed out that the effects materially change the quantitative aspects of phylogenies, such as the lengths of the legs of the trees. A number of problems are solved without approximation.
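The abstract does not reproduce Holmquist's formulas; as a hedged illustration of the same idea, the classic Jukes-Cantor correction shows how an observed fraction of differing sites is corrected for multiple hits and back mutation, and how the uncorrected count can indeed be off by more than 100%.

```python
import math

def jukes_cantor_distance(p_observed: float) -> float:
    """Correct an observed fraction of differing sites for multiple hits and
    back mutation (classic Jukes-Cantor model; an illustrative stand-in, not
    the exact formulas of the paper). Requires p_observed < 0.75."""
    if not 0 <= p_observed < 0.75:
        raise ValueError("observed difference must lie in [0, 0.75)")
    return -0.75 * math.log(1.0 - 4.0 * p_observed / 3.0)
```

For small observed differences the correction is negligible, but at an observed difference of 0.7 the corrected distance exceeds twice the raw value, i.e. an error larger than 100% if the correction is skipped.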
Phylogenetic Properties of RNA Viruses
Pompei, Simone; Loreto, Vittorio; Tria, Francesca
2012-01-01
A new word, phylodynamics, was coined to emphasize the interconnection between phylogenetic properties, as observed for instance in a phylogenetic tree, and the epidemic dynamics of viruses, where selection, mediated by the host immune response, and transmission play a crucial role. The challenges faced when investigating the evolution of RNA viruses call for a virtuous loop of data collection, data analysis and modeling. This has already resulted both in the collection of massive sequence databases and in the formulation of hypotheses on the main mechanisms driving the qualitative differences observed in the (reconstructed) evolutionary patterns of different RNA viruses. Qualitatively, it has been observed that selection driven by the host immune response induces an uneven survival ability among co-existing strains. As a consequence, the imbalance level of the phylogenetic tree is manifestly more pronounced than in the case where the interaction with the host immune system does not play a central role in the evolutionary dynamics. While many imbalance metrics have been introduced, reliable methods to discriminate quantitatively between different levels of imbalance are still lacking. In our work, we reconstruct and analyze the phylogenetic trees of six RNA viruses, with a special emphasis on the human Influenza A virus, due to its relevance for vaccine preparation as well as the theoretical challenges posed by its peculiar evolutionary dynamics. We focus in particular on topological properties. We point out the limitations of standard imbalance metrics, and we introduce a new methodology with which we assign the correct imbalance level of the phylogenetic trees, in agreement with the phylodynamics of the viruses.
Our thorough quantitative analysis allows for a deeper understanding of the evolutionary dynamics of the considered RNA viruses, which is crucial in order to provide a valuable framework for a quantitative assessment of theoretical predictions. PMID:23028645
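As a baseline for the imbalance metrics discussed above, here is a minimal sketch of one standard metric, the Colless index (the kind of measure the paper argues is insufficient on its own), for rooted binary trees encoded as nested tuples.

```python
def colless_index(tree):
    """Colless imbalance of a rooted binary tree given as nested 2-tuples,
    with leaves as strings: the sum over internal nodes of the absolute
    difference in leaf counts between the two subtrees."""
    def walk(node):
        if isinstance(node, str):          # leaf
            return 1, 0                    # (leaf count, partial index)
        left, right = node
        nl, il = walk(left)
        nr, ir = walk(right)
        return nl + nr, il + ir + abs(nl - nr)
    return walk(tree)[1]

caterpillar = ((("a", "b"), "c"), "d")     # maximally imbalanced 4-leaf tree
balanced = (("a", "b"), ("c", "d"))        # fully balanced 4-leaf tree
```

For four leaves the index ranges from 0 (balanced) to 3 (caterpillar), and selection-driven uneven survival among strains pushes reconstructed trees toward the caterpillar end.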
Control of Mechanotransduction by Molecular Clutch Dynamics.
Elosegui-Artola, Alberto; Trepat, Xavier; Roca-Cusachs, Pere
2018-05-01
The linkage of cells to their microenvironment is mediated by a series of bonds that dynamically engage and disengage, in what has been conceptualized as the molecular clutch model. Whereas this model has long been employed to describe actin cytoskeleton and cell migration dynamics, it has recently been proposed to also explain mechanotransduction (i.e., the process by which cells convert mechanical signals from their environment into biochemical signals). Here we review the current understanding on how cell dynamics and mechanotransduction are driven by molecular clutch dynamics and its master regulator, the force loading rate. Throughout this Review, we place a specific emphasis on the quantitative prediction of cell response enabled by combined experimental and theoretical approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Effect of the Density Ratio on the Nonlinear Dynamics of the Unstable Fluid Interface
NASA Technical Reports Server (NTRS)
Abarzhi, S. I.
2003-01-01
Here we report multiple harmonic theoretical solutions for a complete system of conservation laws, which describe the large-scale coherent dynamics in RTI and RMI for fluids with a finite density ratio in the general three-dimensional case. The analysis yields new properties of the bubble front dynamics. In either RTI or RMI, the obtained dependencies of the bubble velocity and curvature on the density ratio differ qualitatively and quantitatively from those suggested by the models of Sharp (1984), Oron et al. (2001), and Goncharov (2002). We show explicitly that these models violate the conservation laws. For the first time, our theory reveals an important qualitative distinction between the dynamics of the RT and RM bubbles.
Data analysis and theoretical studies for atmospheric Explorer C, D and E
NASA Technical Reports Server (NTRS)
Dalgarno, A.
1983-01-01
The research concentrated on construction of a comprehensive model of the chemistry of the ionosphere. It proceeded by comparing detailed predictions of the atmospheric parameters observed by the instrumentation on board the Atmospheric Explorer Satellites with the measured values and modifying the chemistry to bring about consistency. Full account was taken of laboratory measurements of the processes identified as important. The research programs were made available to the AE team members. Regularly updated tables of recommended values of photoionization cross sections and electron impact excitation and ionization cross sections were provided. The research did indeed lead to a chemistry model in which the main pathways are quantitatively secure. The accuracy was sufficient that remaining differences are small.
Commentary on factors affecting transverse vibration using an idealized theoretical equation
Joseph F. Murphy
2000-01-01
An idealized theoretical equation to calculate flexural stiffness using transverse vibration of a simply end-supported beam is being considered by the American Society of Testing and Materials (ASTM) Wood Committee D07 to determine lumber modulus of elasticity. This commentary provides the user a quantitative view of six factors that affect the accuracy of using the...
Jarnuczak, Andrew F; Eyers, Claire E; Schwartz, Jean-Marc; Grant, Christopher M; Hubbard, Simon J
2015-09-01
Molecular chaperones play an important role in protein homeostasis and the cellular response to stress. In particular, the HSP70 chaperones in yeast mediate a large volume of protein folding through transient associations with their substrates. This chaperone interaction network can be disturbed by various perturbations, such as environmental stress or a gene deletion. Here, we consider deletions of two major chaperone proteins, SSA1 and SSB1, from the chaperone network in Saccharomyces cerevisiae. We employ a SILAC-based approach to examine changes in global and local protein abundance and rationalise our results via network analysis and graph theoretical approaches. Although the deletions result in an overall increase in intracellular protein content, correlated with an increase in cell size, this is not matched by substantial changes in individual protein concentrations. The phenotypic robustness to deletion of these major hub proteins cannot be explained simply by the presence of paralogues. Instead, network analysis and a theoretical consideration of folding workload suggest that the robustness to perturbation is a product of the overall network structure. This highlights how quantitative proteomics and systems modelling can be used to rationalise emergent network properties, and how the HSP70 system can accommodate the loss of major hubs. © 2015 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Eubank, Philip T.; Patel, Mukund R.; Barrufet, Maria A.; Bozkurt, Bedri
1993-06-01
A variable-mass, cylindrical plasma model (VMCPM) is developed for sparks created by electrical discharge in a liquid medium. The model consists of three differential equations (one each from fluid dynamics, an energy balance, and the radiation equation) combined with a plasma equation of state. A thermophysical property subroutine allows realistic estimation of plasma enthalpy, mass density, and particle fractions by including the heats of dissociation and ionization for a plasma created from deionized water. Problems with the zero-time boundary conditions are overcome by an electron balance procedure. Numerical solution of the model provides plasma radius, temperature, pressure, and mass as functions of pulse time for fixed current, electrode gap, and power fraction remaining in the plasma. Moderately high temperatures (≳5000 K) and pressures (≳4 bar) persist in the sparks even after long pulse times (to ˜500 μs). Quantitative proof that superheating is the dominant mechanism for electrical discharge machining (EDM) erosion is thus provided for the first time. Some quantitative inconsistencies that developed between our (1) cathode, (2) anode, and (3) plasma models (this series) are discussed, with an indication of how they will be rectified in a fourth article to follow shortly in this journal. While containing oversimplifications, these three models are believed to contain the respective dominant physics of the EDM process, but they need to be brought into numerical consistency for each time increment of the numerical solution.
A control theoretic model of driver steering behavior
NASA Technical Reports Server (NTRS)
Donges, E.
1977-01-01
A quantitative description of driver steering behavior, in the form of a mathematical model, is presented. The steering task is divided into two levels: (1) the guidance level, involving the perception of the instantaneous and future course of the forcing function provided by the forward view of the road, and the response to it in an anticipatory open-loop control mode; (2) the stabilization level, whereby any occurring deviations from the forcing function are compensated for in a closed-loop control mode. This concept of the duality of the driver's steering activity led to a newly developed two-level model of driver steering behavior. Its parameters are identified on the basis of data measured in driving simulator experiments. The parameter estimates of both levels of the model show significant dependence on the experimental situation, which can be characterized by variables such as vehicle speed and desired path curvature.
Computational modeling of in vitro biological responses on polymethacrylate surfaces
Ghosh, Jayeeta; Lewitus, Dan Y; Chandra, Prafulla; Joy, Abraham; Bushman, Jared; Knight, Doyle; Kohn, Joachim
2011-01-01
The objective of this research was to examine the capabilities of QSPR (Quantitative Structure Property Relationship) modeling to predict specific biological responses (fibrinogen adsorption, cell attachment and cell proliferation index) on thin films of different polymethacrylates. Using 33 commercially available monomers it is theoretically possible to construct a library of over 40,000 distinct polymer compositions. A subset of these polymers was synthesized, and solvent-cast surfaces were prepared in 96-well plates for the measurement of fibrinogen adsorption. NIH 3T3 cell attachment and proliferation index were measured on spin-coated thin films of these polymers. Based on the experimental results for these polymers, separate models were built for homo-, co-, and terpolymers in the library, with good correlation between experimental and predicted values. The ability to predict biological responses with simple QSPR models for large numbers of polymers has important implications for designing biomaterials for specific biological or medical applications. PMID:21779132
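The modeling step described above can be sketched as an ordinary least-squares QSPR fit of a biological response to molecular descriptors. The descriptor matrix, coefficients, and response below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical descriptor matrix: rows = polymers, columns = molecular
# descriptors (all values invented for illustration).
X = rng.normal(size=(40, 3))
true_coef = np.array([2.0, -1.0, 0.5])
y = X @ true_coef + 10.0 + rng.normal(scale=0.1, size=40)   # e.g. fibrinogen adsorption

# Ordinary least-squares fit: the core of a simple QSPR model
A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With separate descriptor sets per polymer class, the same fit yields the distinct homo-, co-, and terpolymer models the abstract mentions.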
Signal and noise modeling in confocal laser scanning fluorescence microscopy.
Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf E; Aach, Til
2012-01-01
Fluorescence confocal laser scanning microscopy (CLSM) has revolutionized imaging of subcellular structures in biomedical research by enabling the acquisition of 3D time-series of fluorescently-tagged proteins in living cells, hence forming the basis for an automated quantification of their morphological and dynamic characteristics. Due to the inherently weak fluorescence, CLSM images exhibit a low SNR. We present a novel model for the transfer of signal and noise in CLSM that is both theoretically sound and corroborated by a rigorous analysis of the pixel intensity statistics via measurement of the 3D noise power spectra, signal-dependence and distribution. Our model provides a better fit to the data than previously proposed models. Further, it forms the basis for (i) the simulation of the CLSM imaging process, indispensable for the quantitative evaluation of CLSM image analysis algorithms, (ii) the application of Poisson denoising algorithms and (iii) the reconstruction of the fluorescence signal.
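A common signal-dependent model for photon-limited detectors, used here as a hedged stand-in since the abstract does not give the paper's exact model, is scaled Poisson photon noise plus Gaussian readout noise, under which the pixel variance is affine in the pixel mean.

```python
import numpy as np

rng = np.random.default_rng(2)

def clsm_pixel(flux, gain=2.0, read_sigma=1.5, n=200_000):
    """Simulate n samples of one pixel under a Poisson-Gaussian noise model:
    gain * Poisson photon noise plus Gaussian readout noise. Parameter
    values are illustrative; a real model is fitted to measured statistics."""
    return gain * rng.poisson(flux, n) + rng.normal(0.0, read_sigma, n)

# Under this model mean(y) = gain*flux and var(y) = gain*mean(y) + read_sigma**2,
# i.e. variance is affine in the mean: the signal-dependence such models fit.
samples_lo = clsm_pixel(5.0)
samples_hi = clsm_pixel(20.0)
```

Plotting empirical variance against empirical mean across fluxes recovers the gain as the slope, which is how signal-dependence is typically verified.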
Modeling Human Dynamics of Face-to-Face Interaction Networks
NASA Astrophysics Data System (ADS)
Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo
2013-04-01
Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of interconversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents that perform a random walk in a two-dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
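The attractiveness mechanism described above can be sketched as follows; the update rule and all parameters are illustrative assumptions based on the abstract, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Agents perform a random walk in a periodic 2D box; an agent's step is
# slowed by the most attractive agent within its interaction radius.
N, L, radius, v0, steps = 50, 10.0, 1.0, 0.3, 200
pos = rng.uniform(0, L, (N, 2))
attract = rng.uniform(0, 1, N)          # fixed attractiveness per agent

for _ in range(steps):
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (d < radius) & (d > 0)
        # slowing factor: 1 minus the max attractiveness among neighbours
        slow = 1.0 - attract[neigh].max() if neigh.any() else 1.0
        angle = rng.uniform(0, 2 * np.pi)
        step = v0 * slow * np.array([np.cos(angle), np.sin(angle)])
        pos[i] = (pos[i] + step) % L      # periodic boundary conditions
```

Recording which pairs are within `radius` at each step yields the time-varying contact network whose conversation-length and inter-contact distributions the model is meant to reproduce.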
NASA Astrophysics Data System (ADS)
Zhai, Mengting; Chen, Yan; Li, Jing; Zhou, Jun
2017-12-01
The molecular electronegativity distance vector (MEDV-13) was used to describe the molecular structure of benzyl ether diamidine derivatives in this paper. Based on MEDV-13, a three-parameter (M3, M15, M47) QSAR model of insecticidal activity (pIC50) for 60 benzyl ether diamidine derivatives was constructed by leaps-and-bounds regression (LBR). The traditional correlation coefficient (R) and the cross-validation correlation coefficient (R_CV) were 0.975 and 0.971, respectively. The robustness of the regression model was validated by the jackknife method; the correlation coefficients R were between 0.971 and 0.983. Meanwhile, the independent variables in the model were tested and found to show no autocorrelation. The regression results indicate that the model has good robustness and predictive capability. This research provides theoretical guidance for the development of a new generation of efficient, low-toxicity drugs against African trypanosomiasis.
Thermoplastic matrix composite processing model
NASA Technical Reports Server (NTRS)
Dara, P. H.; Loos, A. C.
1985-01-01
The effects the processing parameters pressure, temperature, and time have on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation have occurred at successive ply interfaces. Two models are presented predicting the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and neat resin conditions, respectively. The theory of autohesion (or self-diffusion) is identified as the mechanism by which the plies bond to one another. Theoretical predictions from reptation theory relating autohesive strength to contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the phenomenon by which the prepreg ply interfaces develop intimate contact.
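The reptation-theory prediction mentioned above, that autohesive strength grows as the one-fourth power of contact time until the reptation time and then saturates, can be sketched as a textbook scaling form (not the paper's fitted model).

```python
def autohesion_strength(t, t_reptation, sigma_max=1.0):
    """Autohesive bond strength from reptation-theory scaling: sigma grows
    as t**(1/4) up to the reptation time t_reptation, then saturates at
    sigma_max. A textbook scaling sketch with illustrative units."""
    if t <= 0:
        return 0.0
    return sigma_max * min(1.0, (t / t_reptation) ** 0.25)
```

Because the reptation time itself drops sharply with temperature, a time-temperature relationship can be layered on top of this scaling, which is the kind of prediction the paper evaluates.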
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
Experimental characterization of wingtip vortices in the near field using smoke flow visualizations
NASA Astrophysics Data System (ADS)
Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.
2016-08-01
In order to predict the axial development of wingtip vortex strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they imply significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information on the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c = 3.33×10^4 and 10^5. This theoretical vortex model was introduced into the integration of the ordinary differential equations that describe the temporal evolution of streak lines as a function of two parameters: the swirl number S and the virtual axial origin z̄_0. We applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fits at six control planes in the streamwise direction, and a global curve fit over all control planes simultaneously. Both sets of results were compared with those of del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we observed a weak influence of the Reynolds number on the values of S and z̄_0 at low-to-moderate Re_c. This experimental technique is proposed as a low-cost alternative for characterizing wingtip vortices based on flow visualizations.
NASA Astrophysics Data System (ADS)
Wang, Y. Z.; Wang, B.; Xiong, X. M.; Zhang, J. X.
2011-03-01
In much previous research on the deformation of a fluid interface interacting with a solid, the theoretical surface energy density on the deformed fluid interface (or its interaction surface pressure) is obtained approximately from the expression for the interaction energy per unit area (or pressure) between two parallel macroscopic plates, e.g. σ(D) = -A/(12πD²) or π(D) = -A/(6πD³) for the van der Waals (vdW) interaction, by invoking the Derjaguin approximation (DA). This approximation, however, over-predicts or even mis-predicts the interaction force and the corresponding deformation of the fluid interface, because the Derjaguin approximation breaks down for microscopic or sub-macroscopic solids. To circumvent these limitations of previous DA-based work, a more accurate and quantitative theoretical model is presented in this paper for exactly calculating the vdW-induced deformation of a planar fluid interface interacting with a sphere, and the interaction forces taking that deformation into account. The validity and advantage of the new mathematical and physical technique are rigorously verified by comparison with numerical results based on the earlier paraboloid-solid (PS) model and Hamaker's sphere-flat expression, F = -2Aa³/(3D²(D + 2a)²), as well as its well-known DA-based form F/a = -A/(6z_p0²).
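The two expressions quoted in the abstract can be checked against each other directly: Hamaker's exact sphere-flat force reduces to the DA-based form when the separation is much smaller than the sphere radius, and departs from it strongly otherwise. In this sketch the separation is written as D throughout (the abstract writes z_p0 in the DA form).

```python
def hamaker_sphere_flat(A, a, D):
    """Exact Hamaker sphere-flat vdW force quoted in the abstract:
    F = -2*A*a**3 / (3*D**2*(D + 2*a)**2), for Hamaker constant A,
    sphere radius a and separation D."""
    return -2.0 * A * a**3 / (3.0 * D**2 * (D + 2.0 * a) ** 2)

def derjaguin_sphere_flat(A, a, D):
    """Derjaguin-approximation form F = -A*a/(6*D**2), valid only for D << a."""
    return -A * a / (6.0 * D**2)
```

Taking the ratio of the two shows it equals 4a²/(D + 2a)²: near unity for D ≪ a, but far below unity at large separations, which is exactly the over-prediction of the DA form the paper addresses.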
Isotropic differential phase contrast microscopy for quantitative phase bio-imaging.
Chen, Hsi-Hsun; Lin, Yu-Zi; Luo, Yuan
2018-05-16
Quantitative phase imaging (QPI) retrieves the optical phase information of an object and has been applied to biological microscopy and related medical studies. In recent examples, differential phase contrast (DPC) microscopy can recover the phase image of a thin sample from multi-axis intensity measurements in a wide-field scheme. Unlike conventional DPC, we propose a new method, based on a theoretical treatment of the partially coherent condition, to achieve isotropic differential phase contrast (iDPC) with high accuracy and stability for phase recovery in a simple and high-speed fashion. The iDPC is implemented simply with a partially coherent microscope and a programmable thin-film transistor (TFT) shield that digitally modulates structured illumination patterns for QPI. In this article, simulation results show the consistency of our theoretical approach for iDPC under partial coherence. In addition, we demonstrate experiments yielding quantitative phase images of a standard micro-lens array, as well as of label-free live human cell samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Eddy, Nnabuk O; Ita, Benedict I
2011-02-01
Experimental aspects of the inhibition of the corrosion of mild steel in HCl solutions by some carbozones were studied using gravimetric, thermometric and gasometric methods, while a theoretical study was carried out using density functional theory, a quantitative structure-activity relation, and quantum chemical principles. The results obtained indicated that the studied carbozones are good adsorption inhibitors for the corrosion of mild steel in HCl. The inhibition efficiencies of the studied carbozones were found to increase with increasing concentration of the respective inhibitor. A strong correlation was found between the average inhibition efficiency and some quantum chemical parameters, and also between the experimental and theoretical inhibition efficiencies (obtained from the quantitative structure-activity relation).
Joseph, Paul; Tretsiakova-McNally, Svetlana
2015-01-01
Polymeric materials often exhibit complex combustion behaviours encompassing several stages and involving solid phase, gas phase and interphase. A wide range of qualitative, semi-quantitative and quantitative testing techniques are currently available, both at the laboratory scale and for commercial purposes, for evaluating the decomposition and combustion behaviours of polymeric materials. They include, but are not limited to, techniques such as: thermo-gravimetric analysis (TGA), oxygen bomb calorimetry, limiting oxygen index measurements (LOI), Underwriters Laboratory 94 (UL-94) tests, cone calorimetry, etc. However, none of the above mentioned techniques are capable of quantitatively deciphering the underpinning physicochemical processes leading to the melt flow behaviour of thermoplastics. Melt-flow of polymeric materials can constitute a serious secondary hazard in fire scenarios, for example, if they are present as component parts of a ceiling in an enclosure. In recent years, more quantitative attempts to measure the mass loss and melt-drip behaviour of some commercially important chain- and step-growth polymers have been accomplished. The present article focuses, primarily, on the experimental and some theoretical aspects of melt-flow behaviours of thermoplastics under heat/fire conditions. PMID:28793746
Spurdle, Amanda B
2010-06-01
Multifactorial models developed for BRCA1/2 variant classification have proved very useful for delineating BRCA1/2 variants associated with very high risk of cancer, or with little clinical significance. Recent linkage of this quantitative assessment of risk to clinical management guidelines has provided a basis to standardize variant reporting, variant classification and management of families with such variants, and can theoretically be applied to any disease gene. As proof of principle, the multifactorial approach already shows great promise for application to the evaluation of mismatch repair gene variants identified in families with suspected Lynch syndrome. However there is need to be cautious of the noted limitations and caveats of the current model, some of which may be exacerbated by differences in ascertainment and biological pathways to disease for different cancer syndromes.
Models for the Immediate Environment of Ions in Aqueous Solutions of Neodymium Chloride
NASA Astrophysics Data System (ADS)
Smirnov, P. R.; Grechin, O. V.
2018-01-01
Radial distribution functions of aqueous neodymium chloride solutions over a wide range of concentrations under ambient conditions are calculated from experimental data obtained earlier via X-ray diffraction analysis. Different models of the structural organization of the system are developed, and the optimum versions are determined by calculating theoretical functions for each model and comparing their fit to the experimental functions. Quantitative characteristics of the immediate environment of the Nd³⁺ and Cl⁻ ions, such as coordination numbers, interparticle distances, and the varieties of ion pairs, are determined. It is shown that the average number of water molecules in the first coordination sphere of the cation falls from 9 to 6.2 as the concentration rises. The structure of the systems over the whole range of concentrations is determined by ion associates of the noncontact type.
Kalinowska, Barbara; Banach, Mateusz; Konieczny, Leszek; Marchewka, Damian; Roterman, Irena
2014-01-01
This work discusses the role of unstructured polypeptide chain fragments in shaping the protein's hydrophobic core. Based on the "fuzzy oil drop" model, which assumes an idealized distribution of hydrophobicity density described by the 3D Gaussian, we can determine which fragments make up the core and pinpoint residues whose location conflicts with theoretical predictions. We show that the structural influence of the water environment determines the positions of disordered fragments, leading to the formation of a hydrophobic core overlaid by a hydrophilic mantle. This phenomenon is further described by studying selected proteins which are known to be unstable and contain intrinsically disordered fragments. Their properties are established quantitatively, explaining the causative relation between the protein's structure and function and facilitating further comparative analyses of various structural models. © 2014 Elsevier Inc. All rights reserved.
QSAR Analysis of 2-Amino or 2-Methyl-1-Substituted Benzimidazoles Against Pseudomonas aeruginosa
Podunavac-Kuzmanović, Sanja O.; Cvetković, Dragoljub D.; Barna, Dijana J.
2009-01-01
A set of benzimidazole derivatives were tested for their inhibitory activities against the Gram-negative bacterium Pseudomonas aeruginosa, and minimum inhibitory concentrations were determined for all the compounds. Quantitative structure-activity relationship (QSAR) analysis was applied to fourteen of the abovementioned derivatives using a combination of various physicochemical, steric, electronic, and structural molecular descriptors. A multiple linear regression (MLR) procedure was used to model the relationships between molecular descriptors and the antibacterial activity of the benzimidazole derivatives. The stepwise regression method was used to derive the most significant models as calibration models for predicting the inhibitory activity of this class of molecules. The best QSAR models were further validated by a leave-one-out technique as well as by the calculation of statistical parameters for the established theoretical models. To confirm the predictive power of the models, an external set of molecules was used. High agreement between experimental and predicted inhibitory values, obtained in the validation procedure, indicated the good quality of the derived QSAR models. PMID:19468332
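The leave-one-out validation named above can be sketched as follows for an MLR model; the 14-compound descriptor set here is synthetic, standing in for the benzimidazole data.

```python
import numpy as np

rng = np.random.default_rng(4)

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for an ordinary least-squares
    model: refit with each compound held out, predict it, and compare
    the prediction errors to the total variance of y."""
    n = len(y)
    resid = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([np.ones(n - 1), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        resid[i] = y[i] - np.concatenate(([1.0], X[i])) @ coef
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean()) ** 2)

# Synthetic toy set: 14 compounds, 3 descriptors (values invented)
X = rng.normal(size=(14, 3))
y = X @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.1, size=14)
q2 = loo_q2(X, y)   # close to 1 for this low-noise toy set
```

A Q^2 close to the fitted R^2 indicates the model is not over-fitted, which is what the leave-one-out step in the abstract checks before the external validation.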
Wang, Xin; Gao, Jun; Fan, Zhiguo
2014-02-01
It is surprising that many insect species use only the ultraviolet (UV) component of the polarized skylight for orientation and navigation purposes, while both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This strange phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations tried to resolve this paradox, they did so without any quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We took a sky-polarimetric approach and built a polarized-skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has its maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy. By this, we corroborated empirically the soundness of the earlier computational resolution of the UV-sky-pol paradox.
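The quantities such a sensor estimates, the degree and angle of linear polarization, follow from the Stokes parameters by the standard optics formulas (standard definitions, not specific to this paper).

```python
import math

def degree_of_linear_polarization(I, Q, U):
    """d = sqrt(Q^2 + U^2) / I for Stokes parameters (I, Q, U)."""
    return math.hypot(Q, U) / I

def angle_of_polarization(Q, U):
    """alpha = 0.5 * atan2(U, Q), in radians."""
    return 0.5 * math.atan2(U, Q)
```

Comparing these two quantities across wavelength bands (UV vs blue/green/red) is exactly the comparison the measurements above perform.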
NASA Astrophysics Data System (ADS)
Wang, Xin; Gao, Jun; Fan, Zhiguo
2014-02-01
It is surprising that many insect species use only the ultraviolet (UV) component of polarized skylight for orientation and navigation, even though both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This puzzling phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations tried to resolve this paradox, they did so without quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We adopted a sky-polarimetric approach and built a polarized skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has its maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy in the UV. By this, we corroborated empirically the soundness of the earlier computational resolution of the UV-sky-pol paradox.
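The single-scattering Rayleigh model underlying such sky-polarimetric calculations is compact. The maximal degrees of polarization below are assumed round numbers for illustration, not measured values:

```python
import numpy as np

def rayleigh_dop(gamma_deg, delta_max):
    """Degree of linear polarization of singly scattered skylight in the
    standard Rayleigh model; gamma is the angular distance from the sun."""
    g = np.radians(gamma_deg)
    return delta_max * np.sin(g) ** 2 / (1.0 + np.cos(g) ** 2)

gammas = np.linspace(0.0, 180.0, 7)
# Assumed maximal degrees of polarization, for illustration only:
# clear sky favours longer wavelengths; an overcast sky favours the UV.
for label, dmax in [("blue, clear", 0.7), ("UV, clear", 0.5), ("UV, cloudy", 0.15)]:
    print(label, np.round(rayleigh_dop(gammas, dmax), 3))
# Whatever delta_max is, polarization peaks 90 degrees from the sun.
```

The angular pattern (maximum at 90 degrees from the sun) is wavelength-independent in this model; only the scale factor delta_max changes, which is why the angle of polarization remains a usable compass cue even when the degree of polarization is low.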
Tian, Tongde; Chen, Chuanliang; Yang, Feng; Tang, Jingwen; Pei, Junwen; Shi, Bian; Zhang, Ning; Zhang, Jianhua
2017-03-01
This study aimed to screen genetic markers applicable to the early diagnosis of colorectal cancer, to establish an apoptotic regulatory network model for colorectal cancer, and to analyze the current situation of traditional Chinese medicine (TCM) targets, thereby providing theoretical evidence for the early diagnosis and targeted therapy of colorectal cancer. Taking databases including CNKI, VIP, Wanfang Data, PubMed, and MEDLINE as the main sources of literature retrieval, studies of genetic markers applied to the early diagnosis of colorectal cancer were collected and subjected to comprehensive quantitative meta-analysis, thereby screening genetic markers useful for early diagnosis. KEGG analysis was employed to establish an apoptotic regulatory network model based on the screened genetic markers, and optimization was conducted on TCM targets. Through meta-analysis, seven genetic markers were screened out: WWOX, K-ras, COX-2, P53, APC, DCC, and PTEN, among which DCC had the highest diagnostic efficiency. The apoptotic regulatory network was built by KEGG analysis. TCM has been reported to have a regulatory function on gene loci in the apoptotic regulatory network. The apoptotic regulatory model of colorectal cancer established in this study provides theoretical evidence for the early clinical diagnosis and TCM-targeted therapy of colorectal cancer.
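The inverse-variance pooling at the core of such a meta-analysis can be sketched as follows. The per-study log odds ratios and variances are hypothetical, not values from the retrieved literature:

```python
import math

def pooled_log_or(log_ors, variances):
    """Fixed-effect inverse-variance pooling of per-study log odds ratios."""
    weights = [1.0 / v for v in variances]
    est = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))        # standard error of the pool
    return est, se

# Hypothetical per-study diagnostic log odds ratios for one marker:
log_ors = [1.10, 0.85, 1.30, 0.95]
variances = [0.04, 0.09, 0.06, 0.05]
est, se = pooled_log_or(log_ors, variances)
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"pooled OR = {math.exp(est):.2f}, 95% CI [{math.exp(lo):.2f}, {math.exp(hi):.2f}]")
```

Repeating this pooling per marker and ranking by diagnostic performance is one way markers such as DCC could be compared across studies.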
A Population Genetics Model of Marker-Assisted Selection
Luo, Z. W.; Thompson, R.; Woolliams, J. A.
1997-01-01
A deterministic two-locus model was developed to predict genetic response to marker-assisted selection (MAS) in one generation and over multiple generations. Formulas were derived to relate linkage disequilibrium in a population to the proportion of additive genetic variance used by MAS, and in turn to the extra improvement in genetic response over phenotypic selection. Predictions of the response were compared to those from an infinite-loci model, and the factors affecting the efficiency of MAS were examined. Theoretical analyses revealed a nonlinear relationship between selection intensity and genetic response in MAS. In addition to the heritability of the trait and the proportion of marker-associated genetic variance, the frequencies of the selectively favorable alleles at the two loci, one marker and one quantitative trait locus, were found to play an important role in determining both the short- and long-term efficiencies of MAS. The evolution of linkage disequilibrium, and thus of the genetic response over several generations, was predicted theoretically and examined by simulation. MAS dissipated the disequilibrium more quickly than drift alone. In some cases studied, the rate of dissipation was as large as that expected if the true recombination fraction were tripled and selection were absent. PMID:9215918
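Under recombination alone, the marker-QTL disequilibrium decays geometrically, which gives a baseline against which the faster dissipation under MAS can be compared. A minimal sketch, with the initial D and recombination fraction assumed for illustration:

```python
def ld_after(d0: float, r: float, generations: int) -> float:
    """Linkage disequilibrium after t generations of random mating with
    recombination fraction r and no selection: D_t = (1 - r)**t * D_0."""
    return d0 * (1.0 - r) ** generations

d0, r = 0.20, 0.05        # assumed initial D and marker-QTL recombination
for t in (0, 1, 5, 10):
    print(t, round(ld_after(d0, r, t), 4))
# The abstract reports MAS sometimes dissipating D about as fast as
# recombination alone would with r tripled:
print(round(ld_after(d0, 3 * r, 5), 4))
```

The comparison makes the abstract's point concrete: selection on the marker consumes the very disequilibrium that MAS exploits, limiting its long-term advantage.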
Preliminary dynamic tests of a flight-type ejector
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1992-01-01
A thrust-augmenting ejector was tested to provide experimental data to assist in the assessment of theoretical models that predict duct and ejector fluid-dynamic characteristics. Eleven full-scale thrust-augmenting ejector tests were conducted in which a rapid increase in the ejector nozzle pressure ratio was effected through a unique facility bypass/burst-disk subsystem. The present work examines two cases representative of the test performance window. In the first case, the primary nozzle pressure ratio (NPR) increased 36 percent from one unchoked primary flow condition (NPR = 1.29) to another (NPR = 1.75) over a 0.15-second interval. The second case involves choked primary flow conditions, where a 17 percent increase in primary nozzle flow rate (from NPR = 2.35 to NPR = 2.77) occurred over approximately 0.1 seconds. Although the real-time signal measurements support qualitative remarks on ejector performance, extracting the quantitative ejector dynamic response was impeded by excessive aerodynamic noise and thrust-stand dynamic (resonance) characteristics. It does appear, however, that a quasi-steady performance assumption is valid for this model with primary nozzle pressure increased on the order of 50 lbf/s. Transient signal treatment of the present data set is discussed, and initial interpretations of the results are compared with theoretical predictions for a similar Short Takeoff and Vertical Landing (STOVL) ejector model.
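The unchoked/choked distinction between the two cases follows from standard compressible-flow theory: a converging nozzle chokes once the nozzle pressure ratio exceeds the critical value, about 1.893 for air. A minimal check (assuming air, gamma = 1.4):

```python
def is_choked(npr: float, gamma: float = 1.4) -> bool:
    """True if the nozzle pressure ratio exceeds the critical ratio
    ((gamma + 1) / 2) ** (gamma / (gamma - 1)), ~1.893 for air."""
    return npr >= ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

# The four NPR values quoted in the abstract:
for npr in (1.29, 1.75, 2.35, 2.77):
    print(f"NPR = {npr}: {'choked' if is_choked(npr) else 'unchoked'}")
```

This reproduces the abstract's classification: the first case (1.29 to 1.75) stays unchoked, while the second (2.35 to 2.77) is choked throughout.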
Ghost features in Doppler-broadened spectra of rovibrational transitions in trapped HD+ ions
NASA Astrophysics Data System (ADS)
Patra, Sayan; Koelemeij, J. C. J.
2017-02-01
Doppler broadening plays an important role in laser rovibrational spectroscopy of trapped deuterated molecular hydrogen ions (HD+), even at the millikelvin temperatures achieved through sympathetic cooling by laser-cooled beryllium ions. Recently, Biesheuvel et al. (2016) presented a theoretical lineshape model for such transitions which considers not only line strengths and Doppler broadening, but also the finite sample size and population redistribution by blackbody radiation, which are important in view of the long storage and probe times achievable in ion traps. Here, we employ the rate-equation model developed by Biesheuvel et al. to theoretically study the Doppler-broadened hyperfine structure of the (v, L) : (0, 3) → (4, 2) rovibrational transition in HD+ at 1442 nm. We observe prominent yet hitherto unrecognized ghost features in the simulated spectrum, whose positions depend on the Doppler width, transition rates, and saturation levels of the hyperfine components addressed by the laser. We explain the origin and behavior of these features, and we provide a simple quantitative guideline for assessing whether ghost features may appear. As such ghost features may be common to saturated Doppler-broadened spectra of rotational and vibrational transitions in trapped ions whenever lines partly overlap, our work illustrates the necessity of using lineshape models that take into account all the relevant physics.
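A toy numerical illustration, not the Biesheuvel et al. rate-equation model, of how saturation of overlapping Doppler-broadened components can raise spurious bumps. Each Gaussian component is simply assumed to saturate as 1 - exp(-s * a * g); the splittings and line strengths are invented:

```python
import numpy as np

def gaussian(f, f0, sigma):
    return np.exp(-0.5 * ((f - f0) / sigma) ** 2)

def composite_line(f, centers, strengths, sigma, s):
    """Toy saturated lineshape: each component's contribution is assumed
    to saturate as 1 - exp(-s * a * g), capping strong lines near 1."""
    total = np.zeros_like(f)
    for f0, a in zip(centers, strengths):
        total += 1.0 - np.exp(-s * a * gaussian(f, f0, sigma))
    return total

f = np.linspace(-10.0, 10.0, 2001)   # detuning axis (arbitrary units)
centers = [-2.0, 0.0, 2.5]           # assumed hyperfine splittings
strengths = [1.0, 0.3, 0.8]          # assumed relative line strengths
weak = composite_line(f, centers, strengths, sigma=1.5, s=0.5)
strong = composite_line(f, centers, strengths, sigma=1.5, s=50.0)
# Deep saturation flattens the dominant components, so overlap regions
# and weak components emerge as spurious bumps ('ghost features').
print(weak.max(), strong.max())
```

In the weakly saturated case the spectrum is close to a weighted sum of Gaussians; in the deeply saturated case the dominant peaks clip near unity and the residual structure between them mimics extra lines.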
Shear flow of angular grains: acoustic effects and nonmonotonic rate dependence of volume.
Lieou, Charles K C; Elbanna, Ahmed E; Langer, J S; Carlson, J M
2014-09-01
Naturally occurring granular materials often consist of angular particles whose shape and frictional characteristics may have important implications for macroscopic flow rheology. In this paper, we provide a theoretical account of the peculiar phenomenon of autoacoustic compaction, a nonmonotonic variation of shear-band volume with shear rate in angular particles, recently observed in experiments. Our approach is based on the notion that the volume of a granular material is determined by an effective disorder temperature known as the compactivity. Noise sources in a driven granular material couple its various degrees of freedom and the environment, causing entropy to flow between them. The grain-scale dynamics is described by the shear-transformation-zone theory of granular flow, which accounts for irreversible plastic deformation in terms of localized flow defects whose density is governed by the state of configurational disorder. To model the effects of grain shape and frictional characteristics, we propose an Ising-like internal variable to account for nearest-neighbor grain interlocking and geometric frustration, and we interpret the effect of friction as an acoustic noise strength. We show quantitative agreement between experimental measurements and theoretical predictions, and we propose additional experiments that provide stringent tests of the new theoretical elements.
NASA Technical Reports Server (NTRS)
Palosz, B.; Grzanka, E.; Stelmakh, S.; Pielaszek, R.; Bismayer, U.; Weber, H. P.; Janik, J. F.; Palosz, W.; Curreri, Peter A. (Technical Monitor)
2002-01-01
The effect of the chemical state of the surface of nanoparticles on relaxation in the near-surface layer was examined using the concept of the apparent lattice parameter (alp) determined for different diffraction vectors Q. The apparent lattice parameter is a lattice parameter determined either from an individual Bragg reflection or from a selected region of the diffraction pattern. At low diffraction vectors the Bragg peak positions are affected mainly by the structure of the near-surface layer, while at high Q values only the interior of the nanograin contributes to the diffraction pattern. Following measurements on raw (as-prepared) powders, we investigated powders cleaned by annealing at 400 C under vacuum, and the same powders wetted with water. The alp-Q plots showed that the structure of the surface layer depends on the sample treatment. A semi-quantitative analysis based on the comparison of the experimental and theoretical alp-Q plots was performed. Theoretical alp-Q relations were obtained from diffraction patterns calculated for models of nanocrystals with a strained surface layer using the Debye functions.
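For a cubic lattice, the alp and the diffraction vector Q follow from the standard Bragg relations. The wavelength and peak positions below are hypothetical examples, not the paper's data:

```python
import math

def q_from_twotheta(two_theta_deg, wavelength):
    """Diffraction-vector magnitude Q = 4*pi*sin(theta)/lambda."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength

def alp_cubic(two_theta_deg, hkl, wavelength):
    """Apparent lattice parameter from one Bragg peak of a cubic phase:
    a = lambda * sqrt(h^2 + k^2 + l^2) / (2 * sin(theta))."""
    h, k, l = hkl
    theta = math.radians(two_theta_deg) / 2.0
    return wavelength * math.sqrt(h * h + k * k + l * l) / (2.0 * math.sin(theta))

lam = 0.7107  # Mo K-alpha wavelength in angstroms (example choice)
# Hypothetical peak positions; in a real alp-Q plot each reflection
# yields one (Q, alp) point, with low-Q points weighting the surface layer.
for hkl, tt in [((1, 1, 1), 11.4), ((2, 2, 0), 18.7), ((4, 2, 2), 32.8)]:
    print(hkl, round(q_from_twotheta(tt, lam), 3), round(alp_cubic(tt, hkl, lam), 4))
```

Plotting the alp values against Q (one point per reflection) is exactly the alp-Q curve the abstract describes: a Q-dependent apparent lattice parameter signals a strained surface layer.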
NASA Astrophysics Data System (ADS)
Flakus, Henryk T.; Michta, Anna
2004-11-01
This paper presents the results of an investigation of the polarized IR spectra of H1245 imidazole crystals and of the D1H245, D1245 and H1D245 deuterium-derivative imidazole crystals. The spectra were measured using polarized light, at room temperature and at 77 K, by a transmission method for two different crystalline faces. Theoretical analysis of the results concerned the linear dichroic, H/D isotopic, and temperature effects observed in the spectra of the hydrogen and deuterium bonds in imidazole crystals in the frequency ranges of the νN-H and νN-D bands. The basic crystal spectral properties can be satisfactorily interpreted quantitatively with a hydrogen-bond linear dimer model. Such a model explains not only the two-branch structure of the νN-H and νN-D bands in the crystalline spectra, but also some essential linear dichroic effects in these band frequency ranges for isotopically diluted crystals. Model calculations, performed within the limits of the strong-coupling model, allowed a quantitative interpretation and an understanding of the basic properties of the hydrogen-bond IR spectra of imidazole crystals, including the H/D isotopic, temperature, and dichroic effects. The results allowed verification of theoretical models proposed recently for the mechanisms generating the imidazole crystal spectra. In the scope of our studies, the mechanism of H/D isotopic self-organization processes taking place in the crystal hydrogen-bond lattices was also recognized. It was proved that for isotopically diluted crystalline samples of imidazole, a non-random distribution of protons and deuterons occurs exclusively in some restricted fragments (domains) of open chains of the hydrogen-bonded molecules. Nevertheless, these co-operative interactions between the hydrogen bonds do not extend to adjacent fragments of neighboring hydrogen-bond chains in the lattice.
Analysis of the isotopic self-organization effects in the spectra of imidazole crystals delivered crucial arguments for understanding the nature of the mechanisms generating the hydrogen-bond spectra.
Coubard, Olivier A.
2016-01-01
Since the seminal report by Shapiro that bilateral stimulation induces cognitive and emotional changes, 26 years of basic and clinical research have examined the effects of Eye Movement Desensitization and Reprocessing (EMDR) in anxiety disorders, particularly in post-traumatic stress disorder (PTSD). The present article aims at better understanding EMDR neural mechanism. I first review procedural aspects of EMDR protocol and theoretical hypothesis about EMDR effects, and develop the reasons why the scientific community is still divided about EMDR. I then slide from psychology to physiology describing eye movements/emotion interaction from the physiological viewpoint, and introduce theoretical and technical tools used in movement research to re-examine EMDR neural mechanism. Using a recent physiological model for the neuropsychological architecture of motor and cognitive control, the Threshold Interval Modulation with Early Release-Rate of rIse Deviation with Early Release (TIMER-RIDER)—model, I explore how attentional control and bilateral stimulation may participate to EMDR effects. These effects may be obtained by two processes acting in parallel: (i) activity level enhancement of attentional control component; and (ii) bilateral stimulation in any sensorimotor modality, both resulting in lower inhibition enabling dysfunctional information to be processed and anxiety to be reduced. The TIMER-RIDER model offers quantitative predictions about EMDR effects for future research about its underlying physiological mechanisms. PMID:27092064
Quantitative risk assessment of durable glass fibers.
Fayerweather, William E; Eastes, Walter; Cereghini, Francesco; Hadley, John G
2002-06-01
This article presents a quantitative risk assessment for the theoretical lifetime cancer risk from the manufacture and use of relatively durable synthetic glass fibers. More specifically, we estimate levels of exposure to respirable fibers or fiber-like structures of E-glass and C-glass that, assuming a working-lifetime exposure, pose a theoretical lifetime cancer risk of not more than 1 per 100,000. For comparability with other risk assessments, we define these levels as nonsignificant exposures. Nonsignificant exposure levels are estimated from (a) the Institute of Occupational Medicine (IOM) chronic rat inhalation bioassay of durable E-glass microfibers, and (b) the Research Consulting Company (RCC) chronic inhalation bioassay of durable refractory ceramic fibers (RCF). Best estimates of nonsignificant E-glass exposure exceed 0.05-0.13 fibers (or shards) per cubic centimeter (cm3) when calculated from the multistage nonthreshold model. Best estimates of nonsignificant C-glass exposure exceed 0.27-0.6 fibers/cm3. Estimates of nonsignificant exposure increase markedly for E- and C-glass when nonlinear models are applied, rapidly exceeding 1 fiber/cm3. Controlling durable fiber exposures to an 8-h time-weighted average of 0.05 fibers/cm3 will assure that the additional theoretical lifetime risk from working-lifetime exposures to these durable fibers or shards is kept below the 1 per 100,000 level. Measured airborne exposures to respirable, durable glass fibers (or shards) in glass fiber manufacturing and fabrication operations were compared with the nonsignificant exposure estimates described above.
Sampling results for B-sized respirable E-glass fibers at facilities that manufacture or fabricate small-diameter continuous-filament products, from operations that produce respirable E-glass shards by PERG (process to efficiently recycle glass), from milled fiber operations, and from respirable C-glass shards from Flakeglass operations indicate very low median exposures of 0, 0.0002, 0.007, 0.008, and 0.0025 fibers (or shards)/cm3, respectively, using the NIOSH 7400 method ("B" rules). Durable glass fiber exposures for various applications must be well characterized to ensure that they are kept below nonsignificant levels (e.g., 0.05 fibers/cm3) as defined in this risk assessment.
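The arithmetic of a one-stage (linearized multistage) model can be sketched as follows. The slope q1 below is hypothetical, chosen only so the example lands near the 0.05 fibers/cm3 level quoted above; it is not the slope fitted from the IOM or RCC bioassays:

```python
import math

def excess_risk(q1, dose):
    """One-stage (linearized multistage) model: extra lifetime cancer risk
    for a working-lifetime exposure at the given fiber concentration."""
    return 1.0 - math.exp(-q1 * dose)

def dose_for_risk(q1, target):
    """Invert the model: exposure producing the target extra risk."""
    return -math.log(1.0 - target) / q1

q1 = 2e-4        # hypothetical slope per (fiber/cm^3); NOT a fitted value
target = 1e-5    # the 1-per-100,000 'nonsignificant' risk criterion
d = dose_for_risk(q1, target)
print(f"exposure at target risk ~ {d:.3f} fibers/cm^3")
```

At such small risks the model is effectively linear (risk ~ q1 * dose), which is why halving the acceptable risk halves the nonsignificant exposure level, whereas the nonlinear models mentioned above relax the limit much faster.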
Lande, R
2014-05-01
Quantitative genetic models of the evolution of phenotypic plasticity are used to derive environmental tolerance curves for a population in a changing environment, providing a theoretical foundation for integrating physiological and community ecology with the evolutionary genetics of plasticity and norms of reaction. Plasticity is modelled for a labile quantitative character undergoing continuous reversible development and selection in a fluctuating environment. If there is no cost of plasticity, a labile character evolves an expected plasticity equal to the slope of the optimal phenotype as a function of the environment. This contrasts with previous theory for plasticity influenced by the environment at a critical stage of early development, which determines a constant adult phenotype on which selection acts; there, the expected plasticity is reduced by the environmental predictability over the discrete time lag between development and selection. With a cost of plasticity in a labile character, the expected plasticity depends on the cost and on the environmental variance and predictability averaged over the continuous developmental time lag. Environmental tolerance curves derived from this model confirm traditional assumptions in physiological ecology and provide new insights. Tolerance curve width increases with larger environmental variance, but can only evolve within a limited range. The strength of the trade-off between tolerance curve height and width depends on the cost of plasticity. Asymmetric tolerance curves caused by male sterility at high temperature are illustrated. A simple condition is given for a large transient increase in plasticity and tolerance curve width following a sudden change in the average environment. © 2014 The Author. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
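The height-width trade-off of a tolerance curve can be illustrated with a toy Gaussian form in which peak performance scales inversely with breadth. The functional form and cost constant are illustrative assumptions, not the model in the abstract:

```python
import math

def tolerance(eps, eps0, width, c=1.0):
    """Toy Gaussian tolerance curve over environments eps, centred on
    eps0, with a height-width trade-off: peak fitness = c / width."""
    return (c / width) * math.exp(-0.5 * ((eps - eps0) / width) ** 2)

for w in (0.5, 1.0, 2.0):
    row = [round(tolerance(e, 0.0, w), 3) for e in (-2, -1, 0, 1, 2)]
    print(f"width={w}: {row}")
# Wider curves sacrifice peak performance but retain fitness in
# environments far from the optimum, the classic generalist-specialist
# trade-off this literature formalizes.
```

Larger environmental variance favours the wider, flatter curve in this sketch, echoing the abstract's result that tolerance curve width increases with environmental variance.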
Integrated stoichiometric, thermodynamic and kinetic modelling of steady state metabolism
Fleming, R.M.T.; Thiele, I.; Provan, G.; Nasheuer, H.P.
2010-01-01
The quantitative analysis of biochemical reactions and metabolites is at the frontier of the biological sciences. The recent availability of high-throughput data sets in biology has paved the way for new modelling approaches at various levels of complexity, including the metabolome of a cell or an organism. Understanding the metabolism of single-cell and multicellular organisms will provide the knowledge needed for the rational design of growth conditions to produce commercially valuable reagents in biotechnology. Here, we demonstrate how equations representing steady-state mass conservation, energy conservation, the second law of thermodynamics, and reversible enzyme kinetics can be formulated as a single system of linear equalities and inequalities, in addition to linear equalities on exponential variables. Even though the feasible set is non-convex, the reformulation is exact and amenable to large-scale numerical analysis, a prerequisite for computationally feasible genome-scale modelling. Integrating flux, concentration, and kinetic variables in a unified constraint-based formulation is aimed at increasing the quantitative predictive capacity of flux balance analysis. Incorporation of experimental and theoretical bounds on thermodynamic and kinetic variables ensures that the predicted steady-state fluxes are both thermodynamically and biochemically feasible. The resulting in silico predictions are tested against fluxomic data for central metabolism in E. coli and compare favourably with in silico predictions by flux balance analysis. PMID:20230840
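Flux balance analysis itself, the baseline the integrated formulation extends, reduces to a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a hypothetical three-reaction network:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake v1 (-> A), conversion v2 (A -> B), export v3 (B ->).
# Rows of S are internal metabolites; S @ v = 0 enforces steady state.
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A
    [0.0,  1.0, -1.0],   # metabolite B
])
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # uptake capped at 10

# linprog minimizes, so maximize v3 by minimizing -v3.
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)   # steady state forces v1 = v2 = v3
```

The integrated formulation in the abstract adds thermodynamic and kinetic constraints on top of this stoichiometric core, which makes the feasible set non-convex and rules out thermodynamically infeasible flux distributions that plain FBA would accept.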