Sample records for quantitative theoretical model

  1. Quantitative force measurements using frequency modulation atomic force microscopy—theoretical foundations

    NASA Astrophysics Data System (ADS)

    Sader, John E.; Uchihashi, Takayuki; Higgins, Michael J.; Farrell, Alan; Nakayama, Yoshikazu; Jarvis, Suzanne P.

    2005-03-01

    Use of the atomic force microscope (AFM) in quantitative force measurements inherently requires a theoretical framework enabling conversion of the observed deflection properties of the cantilever to an interaction force. In this paper, the theoretical foundations of using frequency modulation atomic force microscopy (FM-AFM) in quantitative force measurements are examined and rigorously elucidated, with consideration being given to both 'conservative' and 'dissipative' interactions. This includes a detailed discussion of the underlying assumptions involved in such quantitative force measurements, the presentation of globally valid explicit formulae for evaluation of so-called 'conservative' and 'dissipative' forces, discussion of the origin of these forces, and analysis of the applicability of FM-AFM to quantitative force measurements in liquid.
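
    For orientation, the explicit inversion these authors analyze is the Sader-Jarvis formula (Sader and Jarvis, Appl. Phys. Lett. 85, 4643, 2004), which recovers the conservative force F(z) from the normalized frequency shift Ω(z) = Δf(z)/f0 of a cantilever with stiffness k and oscillation amplitude a. It is quoted here from memory as a math sketch; the paper itself governs the assumptions under which it holds:

    ```latex
    % Sader-Jarvis inversion of the conservative force from FM-AFM data,
    % with \Omega(z) = \Delta f(z)/f_0, stiffness k and amplitude a:
    F(z) = 2k \int_{z}^{\infty}
      \left[ \left( 1 + \frac{a^{1/2}}{8\sqrt{\pi\,(t-z)}} \right) \Omega(t)
      - \frac{a^{3/2}}{\sqrt{2\,(t-z)}}\,
        \frac{\mathrm{d}\Omega(t)}{\mathrm{d}t} \right] \mathrm{d}t
    ```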

  2. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges.

    PubMed

    Ernst, Marielle; Kriston, Levente; Romero, Javier M; Frölich, Andreas M; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik

    2016-01-01

    We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of the key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance status of participants and the evolution of their technical competency over time.

  3. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges

    PubMed Central

    Ernst, Marielle; Kriston, Levente; Romero, Javier M.; Frölich, Andreas M.; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik

    2016-01-01

    Purpose We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. Materials and Methods We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. Results In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Conclusion Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of the key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance status of participants and the evolution of their technical competency over time. PMID:26848840

  4. Quantitation in chiral capillary electrophoresis: theoretical and practical considerations.

    PubMed

    D'Hulst, A; Verbeke, N

    1994-06-01

    Capillary electrophoresis (CE) represents a decisive step forward in stereoselective analysis. The present paper deals with the theoretical aspects of the quantitation of peak separation in chiral CE. Because peak shape in CE is very different from that in high performance liquid chromatography (HPLC), the resolution factor Rs, commonly used to describe the extent of separation between enantiomers as well as unrelated compounds, is demonstrated to be of limited value for the assessment of chiral separations in CE. Instead, the combined use of a relative chiral separation factor (RCS) and the percent chiral separation (% CS) is advocated, and an array of examples is given to illustrate this. The practical aspects of method development using maltodextrins--which have previously been proposed as a major innovation in chiral selectors applicable in CE--are documented with the stereoselective analysis of coumarinic anticoagulant drugs. The possibilities of quantitation using CE were explored under two extreme conditions. Using ibuprofen, it has been demonstrated that enantiomeric excess determinations are possible down to a 1% level of optical contamination, and that stereoselective determinations remain possible with good precision near the detection limit by increasing the sample load through very long injection times. The theoretical aspects of this possibility are addressed in the discussion.

  5. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by students' weak understanding of research concepts. A mind map technique was used to investigate how students understand the central methodological concepts of 'empirical', 'theoretical', 'qualitative', and 'quantitative'. The main hypothesis was that some…

  6. Introduction to Theoretical Modelling

    NASA Astrophysics Data System (ADS)

    Davis, Matthew J.; Gardiner, Simon A.; Hanna, Thomas M.; Nygaard, Nicolai; Proukakis, Nick P.; Szymańska, Marzena H.

    2013-02-01

    We briefly overview commonly encountered theoretical notions arising in the modelling of quantum gases, intended to provide a unified background to the 'language' and diverse theoretical models presented elsewhere in this book, and aimed particularly at researchers from outside the quantum gases community.

  7. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers

    ERIC Educational Resources Information Center

    Dodd, Bucky J.

    2013-01-01

    Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…

  8. A Theoretical Trombone

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2014-01-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that…

  9. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
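
    The abstract does not give the model equations, so as a loudly hypothetical illustration of the general form such dynamic systems models take (in the van Geert tradition of coupled logistic growth), here is a minimal sketch; every component name, rate, and coupling coefficient below is invented:

    ```python
    # Hypothetical coupled-logistic sketch of interacting HRQOL components.
    # Not the authors' model: it only illustrates the dynamic-systems form.
    import numpy as np

    def step(x, r, K, C, dt=0.1):
        """One Euler step: logistic growth plus linear coupling between components."""
        return x + dt * (r * x * (1 - x / K) + C @ x)

    r = np.array([0.05, 0.04, 0.06])      # growth rates (invented)
    K = np.array([1.0, 1.0, 1.0])         # carrying capacities (invented)
    C = np.array([[0.00, 0.01, -0.02],    # couplings between, e.g., physical,
                  [0.02, 0.00,  0.01],    # mental and social components
                  [0.01, 0.01,  0.00]])

    x = np.array([0.4, 0.5, 0.6])         # starting point (first data wave)
    for _ in range(500):
        x = step(x, r, K, C)
    print("simulated endpoint:", np.round(x, 3))
    ```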

  10. A theoretical trombone

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2014-09-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that compare well to both the desired frequencies of the musical pitches and those actually played on a real trombone.

  11. Graph theoretical model of a sensorimotor connectome in zebrafish.

    PubMed

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
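
    At toy scale, the kind of comparison the authors report (the biological graph versus random and small-world graphs of the same size) can be reproduced with networkx; only the 2.45% density is taken from the abstract, and the node count below is deliberately reduced from the paper's 2,616 neurons:

    ```python
    # Density, clustering and path length for random vs. small-world graphs,
    # mirroring (at reduced scale) the zebrafish connectome comparisons.
    import networkx as nx

    n, density = 500, 0.0245   # density from the abstract; n reduced for speed
    graphs = {
        "random": nx.gnp_random_graph(n, density, seed=1),
        "small-world": nx.watts_strogatz_graph(n, k=12, p=0.1, seed=1),
    }
    for name, g in graphs.items():
        g = g.subgraph(max(nx.connected_components(g), key=len))  # largest component
        print(f"{name:12s} density={nx.density(g):.4f} "
              f"clustering={nx.average_clustering(g):.3f} "
              f"avg path={nx.average_shortest_path_length(g):.2f}")
    ```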

  12. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
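
    The fitting strategy described (simplex minimization of the misfit between theory and experiment) maps directly onto a Nelder-Mead optimizer; in this sketch the "theory" is a stand-in exponential rather than the paper's diffusion model, and all numbers are synthetic:

    ```python
    # Nelder-Mead (simplex) fit of a model curve to noisy outlet concentrations,
    # in the spirit of the dual-probe fitting program (model is a stand-in).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 40)                    # sampling times (min)
    data = 1.8 * (1 - np.exp(-t / 12.0)) + rng.normal(0, 0.05, t.size)

    def sse(p):
        amp, tau = p
        return np.sum((amp * (1 - np.exp(-t / tau)) - data) ** 2)

    fit = minimize(sse, x0=[1.0, 5.0], method="Nelder-Mead")
    print("fitted amplitude and time constant:", np.round(fit.x, 3))
    ```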

  13. Theoretical Analysis of an Iron Mineral-Based Magnetoreceptor Model in Birds

    PubMed Central

    Solov'yov, Ilia A.; Greiner, Walter

    2007-01-01

    Sensing the magnetic field has been established as an essential part of navigation and orientation of various animals for many years. Only recently has the first detailed receptor concept for magnetoreception been published based on histological and physical results. The considered mechanism involves two types of iron minerals (magnetite and maghemite) that were found in subcellular compartments within sensory dendrites of the upper beak of several bird species. But so far a quantitative evaluation of the proposed receptor is missing. In this article, we develop a theoretical model to quantitatively and qualitatively describe the magnetic field effects among particles containing iron minerals. The analysis of forces acting between these subcellular compartments shows a particular dependence on the orientation of the external magnetic field. The iron minerals in the beak are found in the form of crystalline maghemite platelets and assemblies of magnetite nanoparticles. We demonstrate that the pull or push to the magnetite assemblies, which are connected to the cell membrane, may reach a value of 0.2 pN—sufficient to excite specific mechanoreceptive membrane channels in the nerve cell. The theoretical analysis of the assumed magnetoreceptor system in the avian beak skin clearly shows that it might indeed be a sensitive biological magnetometer providing an essential part of the magnetic map for navigation. PMID:17496012

  14. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.
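
    As a schematic of the last step only (a fourth-power polynomial fitted to valve data), the sketch below runs on fabricated numbers; the dimension ratio, coefficients, and pressures are hypothetical, not the published model:

    ```python
    # Fourth-degree polynomial fit of closing pressure vs. a dimension ratio,
    # schematically echoing the polynomial superposition (data fabricated).
    import numpy as np

    rng = np.random.default_rng(2)
    ratio = np.linspace(0.1, 0.5, 25)              # e.g. thickness/width (made up)
    p_close = 3 + 40 * ratio**2 + 900 * ratio**4   # hypothetical ground truth (psi)
    p_close = p_close + rng.normal(0, 0.5, ratio.size)

    coeffs = np.polyfit(ratio, p_close, deg=4)     # least-squares polynomial fit
    print("fitted coefficients:", np.round(coeffs, 2))
    print("predicted closing pressure at ratio 0.3:",
          round(float(np.polyval(coeffs, 0.3)), 2), "psi")
    ```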

  15. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), together with the existing studies that could provide empirical support for these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components of these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  16. NMR relaxation induced by iron oxide particles: testing theoretical models.

    PubMed

    Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L

    2016-04-15

    Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast, and because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, was measured and compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model), with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, which constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.

  17. How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies

    PubMed Central

    Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A

    2013-01-01

    A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies traits associated with reproductive investment and growth evolved more slowly than traits related to maturation. In empirical observations age-at-maturation was changing faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, the rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026

  18. Quantitative reactive modeling and verification.

    PubMed

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  19. A preliminary theoretical line-blanketed model solar photosphere

    NASA Technical Reports Server (NTRS)

    Kurucz, R. L.

    1974-01-01

    In the theoretical approach to model-atmosphere construction, all opacities are computed theoretically and the temperature-pressure structure is determined by conservation of energy. Until recently, this has not been a very useful method for late-type stars, because the line opacity was both poorly known and difficult to calculate. However, methods have now been developed that are capable of representing the line opacity well enough for construction of realistic models. A preliminary theoretical solar model is presented that produces closer agreement with observation than has heretofore been possible. The qualitative advantages and shortcomings of this model are discussed and projected improvements are outlined.

  20. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants.

  1. Theoretical model for plasmonic photothermal response of gold nanostructures solutions

    NASA Astrophysics Data System (ADS)

    Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.

    2018-03-01

    Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is the crucial step in determining the elevation of the solution temperature. The nanostructures are assumed to be randomly and uniformly distributed in the solution. Our calculated temperature increases during laser illumination show reasonable qualitative and quantitative agreement with previous experiments on various systems. This approach can be a highly reliable tool to predict photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.

  2. Establishment of quantitative retention-activity model by optimized microemulsion liquid chromatography.

    PubMed

    Xu, Liyuan; Gao, Haoshi; Li, Liangxing; Li, Yinnong; Wang, Liuyun; Gao, Chongkai; Li, Ning

    2016-12-23

    The effective permeability coefficient is of theoretical and practical importance in evaluating the bioavailability of drug candidates. However, most methods currently used to measure this coefficient are expensive and time-consuming. In this paper, we addressed these problems by proposing a new measurement method based on microemulsion liquid chromatography. First, the parallel artificial membrane permeability assay (PAMPA) model was used to determine the effective permeability of drugs so that quantitative retention-activity relationships could be established, which were then used to optimize the microemulsion liquid chromatography. The most effective microemulsion system used a mobile phase of 6.0% (w/w) Brij35, 6.6% (w/w) butanol, 0.8% (w/w) octanol, and 86.6% (w/w) phosphate buffer (pH 7.4). Next, a support vector machine and back-propagation neural networks were employed to develop a quantitative retention-activity relationship model associated with the optimal microemulsion system, and used to improve the prediction ability. Finally, an adequate correlation between experimental and predicted values was computed to verify the performance of the optimal model. The results indicate that microemulsion liquid chromatography can serve as a possible alternative to the PAMPA method for determination of high-throughput permeability and simulation of biological processes.
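
    The model-building step, regressing effective permeability on chromatographic retention with a support vector machine, can be sketched as follows; the single descriptor and all data are invented for illustration, not taken from the paper:

    ```python
    # Support-vector regression of effective permeability on a retention
    # descriptor, sketching the QRAR step (all data here are synthetic).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    log_k = rng.uniform(-1, 2, size=(60, 1))                     # retention factors
    log_pe = 0.9 * log_k[:, 0] - 5.0 + rng.normal(0, 0.15, 60)   # permeability

    x_tr, x_te, y_tr, y_te = train_test_split(log_k, log_pe, random_state=0)
    model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(x_tr, y_tr)
    print("R^2 on held-out compounds:", round(model.score(x_te, y_te), 3))
    ```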

  3. The mathematics of cancer: integrating quantitative models.

    PubMed

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.

  4. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    PubMed

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression methodology. Predictive performance of the QSABR model was assessed by the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80%, respectively, of the variance in the activation energy barrier training data. Alternatively, a neural network model based on back-propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of Diels-Alder reactions turn out to be a meaningful alternative to transition state theory based computation.
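
    The validation statistic quoted above, a leave-one-out cross-validated Q2, can be computed as in this sketch; the 72-by-4 descriptor matrix here is a random placeholder for the Hartree-Fock quantities used in the paper:

    ```python
    # Leave-one-out Q^2 for a multiple linear regression, as used to validate
    # the QSABR model (descriptors and barriers are random placeholders).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(4)
    X = rng.normal(size=(72, 4))        # 72 reactions, 4 placeholder descriptors
    y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(0, 0.5, 72)

    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
    print("LOO cross-validated Q^2:", round(q2, 3))
    ```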

  5. Hybrid rocket engine, theoretical model and experiment

    NASA Astrophysics Data System (ADS)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with experimental data already available from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next, the paper focuses on the tribrid rocket motor concept, in which supplementary liquid fuel injection can improve thrust controllability. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper presents the input data of the model thoroughly, which ensures the reproducibility of the numerical results by independent researchers.
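
    For orientation, the quantity at the heart of such models is the solid-fuel regression rate. The sketch below integrates the textbook power law r_dot = a * Gox^n over a burn for a cylindrical port; this is generic hybrid-motor theory with illustrative coefficients, not the paper's specific computation model:

    ```python
    # Textbook hybrid-motor regression law r_dot = a * Gox**n integrated over
    # a 10 s burn for a cylindrical port (coefficients illustrative only).
    import math

    a, n = 2.0e-5, 0.62                  # regression coefficients (illustrative)
    mdot_ox = 0.5                        # oxidizer mass flow, kg/s
    r, L, rho = 0.02, 0.5, 930.0         # port radius (m), length (m), fuel density
    dt = 0.1
    for _ in range(int(10 / dt)):
        gox = mdot_ox / (math.pi * r**2)          # oxidizer mass flux, kg/(m^2 s)
        r += a * gox**n * dt                      # port opens as fuel regresses
    rdot = a * (mdot_ox / (math.pi * r**2)) ** n
    mdot_fuel = rho * 2 * math.pi * r * L * rdot  # fuel flow from the burning wall
    print(f"final port radius {r*1e3:.1f} mm, fuel flow {mdot_fuel*1e3:.1f} g/s")
    ```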

  6. QTest: Quantitative Testing of Theories of Binary Choice.

    PubMed

    Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.

  7. QTest: Quantitative Testing of Theories of Binary Choice

    PubMed Central

    Regenwetter, Michel; Davis-Stober, Clintin P.; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of “Random Cumulative Prospect Theory.” A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences. PMID:24999495

  8. Collective behavior in animal groups: theoretical models and empirical studies

    PubMed Central

    Giardina, Irene

    2008-01-01

    Collective phenomena in animal groups have attracted much attention in recent years, becoming one of the hottest topics in ethology. There are various reasons for this. On the one hand, animal grouping provides a paradigmatic example of self-organization, where collective behavior emerges in the absence of centralized control. The mechanism of group formation, where local rules for the individuals lead to a coherent global state, is very general and transcends the detailed nature of its components. In this respect, collective animal behavior is a subject of great interdisciplinary interest. On the other hand, there are several important issues related to the biological function of grouping and its evolutionary success. Research in this field boasts a number of theoretical models, but far fewer empirical results to compare with. For this reason, even if the general mechanisms through which self-organization is achieved are qualitatively well understood, a quantitative test of the models' assumptions is still lacking. New analyses of large groups, which require sophisticated technological procedures, can provide the necessary empirical data. PMID:19404431
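
    The canonical minimal model in this literature is the Vicsek model: each individual moves at constant speed and aligns its heading with neighbours within a radius, up to noise. The sketch below is a standard bare-bones implementation, offered as an example of local rules leading to a coherent global state rather than anything specific to this review:

    ```python
    # Minimal Vicsek model: constant speed, alignment with neighbours within
    # radius r plus angular noise; order parameter ~1 means a coherent flock.
    import numpy as np

    rng = np.random.default_rng(5)
    n, box, r, v, eta, steps = 200, 10.0, 1.0, 0.05, 0.3, 400
    pos = rng.uniform(0, box, (n, 2))
    theta = rng.uniform(-np.pi, np.pi, n)

    for _ in range(steps):
        d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
        nbr = d2 < r**2                            # neighbour mask (self included;
        s = (nbr * np.sin(theta)[None, :]).sum(1)  # distance wrapping omitted
        c = (nbr * np.cos(theta)[None, :]).sum(1)  # for brevity)
        theta = np.arctan2(s, c) + eta * rng.uniform(-np.pi, np.pi, n)
        pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % box

    order = np.hypot(np.cos(theta).sum(), np.sin(theta).sum()) / n
    print("polar order parameter (1 = fully aligned):", round(order, 3))
    ```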

  9. A Detection-Theoretic Model of Echo Inhibition

    ERIC Educational Resources Information Center

    Saberi, Kourosh; Petrosyan, Agavni

    2004-01-01

    A detection-theoretic analysis of the auditory localization of dual-impulse stimuli is described, and a model for the processing of spatial cues in the echo pulse is developed. Although for over 50 years "echo suppression" has been the topic of intense theoretical and empirical study within the hearing sciences, only a rudimentary understanding of…

  10. Theoretical Modeling and Electromagnetic Response of Complex Metamaterials

    DTIC Science & Technology

    2017-03-06

    AFRL-AFOSR-VA-TR-2017-0042. Final report, University of Texas at Austin (PI: Andrea Alu), covering work through November 2016, under the title "Theoretical Modeling and Electromagnetic Response of Complex Metamaterials". Only a fragment of the abstract survives the report-form residue: "...based on parity-time symmetric metasurfaces, and various advances in electromagnetic and acoustic theory and applications. Our findings have opened..."

  11. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    NASA Technical Reports Server (NTRS)

    Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.

    1982-01-01

    Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations leads to a claim of general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents cancel each other out despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.
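
    The numerical core described, integrating the Biot-Savart law over segments of current-carrying wire, looks like the following in miniature; a single circular loop stands in for the simulation's ~2700 wires and bands, and the result is checked against the analytic field at the loop centre:

    ```python
    # Biot-Savart sum over straight wire segments, the elementary operation
    # behind the theoretical magnetograms (one test loop instead of ~2700 wires).
    import numpy as np

    MU0 = 4e-7 * np.pi

    def b_field(point, nodes, current):
        """Sum mu0*I/(4 pi) * dl x r / |r|^3 over the polyline's segments."""
        b = np.zeros(3)
        for p0, p1 in zip(nodes[:-1], nodes[1:]):
            dl = p1 - p0
            rvec = point - 0.5 * (p0 + p1)   # field point minus segment midpoint
            b += (MU0 * current / (4 * np.pi)
                  * np.cross(dl, rvec) / np.linalg.norm(rvec) ** 3)
        return b

    R, I = 1.0, 1.0e4                        # loop radius (m), current (A)
    phi = np.linspace(0, 2 * np.pi, 401)
    loop = np.column_stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)])
    print("numeric Bz:", b_field(np.zeros(3), loop, I)[2])
    print("analytic mu0*I/(2R):", MU0 * I / (2 * R))
    ```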

  12. A theoretical quantitative genetic study of negative ecological interactions and extinction times in changing environments.

    PubMed

    Jones, Adam G

    2008-04-25

    Rapid human-induced changes in the environment at local, regional and global scales appear to be contributing to population declines and extinctions, resulting in an unprecedented biodiversity crisis. Although in the short term populations can respond ecologically to environmental alterations, in the face of persistent change populations must evolve or become extinct. Existing models of evolution and extinction in changing environments focus only on single species, even though the dynamics of extinction almost certainly depend upon the nature of species interactions. Here, I use a model of quantitative trait evolution in a two-species community to show that negative ecological interactions, such as predation and competition, can produce unexpected results regarding time to extinction. Under some circumstances, negative interactions can be expected to hasten the extinction of species declining in numbers. However, under other circumstances, negative interactions can actually increase times to extinction. This effect occurs across a wide range of parameter values and can be substantial, in some cases allowing a population to persist for 40 percent longer than it would in the absence of the species interaction. This theoretical study indicates that negative species interactions can have unexpected positive effects on times to extinction. Consequently, detailed studies of selection and demographics will be necessary to predict the consequences of species interactions in changing environments for any particular ecological community.
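
    To make the setting concrete, here is a single-species caricature of the moving-optimum quantitative-genetic framework this paper extends to two interacting species: the mean trait chases an optimum moving at a constant rate, the lag creates a fitness load, and extinction is declared when numbers drop below one. All parameter values are arbitrary:

    ```python
    # Single-species moving-optimum caricature (Lande-style recursion) of the
    # quantitative-genetic setting; the paper adds species interactions on top.
    import math

    h2, vp = 0.4, 1.0         # heritability, phenotypic variance (arbitrary)
    w2 = 25.0                 # width of stabilizing selection
    k = 0.08                  # rate of optimum movement per generation
    rmax, ncap = 0.3, 1000.0  # max growth rate, carrying capacity

    z, n, t = 0.0, 500.0, 0
    while n >= 1.0 and t < 2000:
        lag = k * t - z                            # distance to moving optimum
        z += h2 * vp * lag / (w2 + vp)             # response to selection
        load = lag**2 / (2 * (w2 + vp))            # fitness load from the lag
        n *= math.exp(rmax * (1 - n / ncap) - load)
        t += 1
    print(f"generation {t}: N = {n:.2f} (extinct if N < 1)")
    ```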

  13. Theoretical models of helicopter rotor noise

    NASA Technical Reports Server (NTRS)

    Hawkings, D. L.

    1978-01-01

    For low speed rotors, it is shown that unsteady load models are only partially successful in predicting experimental levels. A theoretical model is presented which leads to the concept of unsteady thickness noise. This gives better agreement with test results. For high speed rotors, it is argued that present models are incomplete and that other mechanisms are at work. Some possibilities are briefly discussed.

  14. An overview of quantitative approaches in Gestalt perception.

    PubMed

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory.

  15. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    PubMed

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

    For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.

  16. Theoretical Models, Assessment Frameworks and Test Construction.

    ERIC Educational Resources Information Center

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  17. Guidelines for a graph-theoretic implementation of structural equation modeling

    USGS Publications Warehouse

    Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William

    2012-01-01

    Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for

  18. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
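
    The calibration problem discussed, adjusting model parameters until the model's frequency response functions reproduce measured ones, can be illustrated on the simplest possible case: recovering the mass, damping, and stiffness of a one-degree-of-freedom oscillator from a noisy FRF magnitude. This is a generic toy, not the authors' human-vibration model:

    ```python
    # Calibrating a 1-DOF mass-spring-damper to a "measured" FRF magnitude,
    # a toy version of FRF-based model calibration (all values synthetic).
    import numpy as np
    from scipy.optimize import curve_fit

    def frf_mag(w, m, c, k):
        """|X/F| of a 1-DOF oscillator at angular frequency w."""
        return 1.0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

    w = np.linspace(1, 100, 300)                  # rad/s
    meas = frf_mag(w, 2.0, 8.0, 5000.0)           # true m, c, k
    meas = meas * (1 + np.random.default_rng(7).normal(0, 0.02, w.size))

    popt, _ = curve_fit(frf_mag, w, meas, p0=[1.5, 5.0, 4000.0])
    print("calibrated m, c, k:", np.round(popt, 2))
    ```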

  19. An Information Theoretic Analysis of Classification Sorting and Cognition by Ninth Grade Children within a Piagetian Setting.

    ERIC Educational Resources Information Center

    Dunlop, David Livingston

    The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…

  20. Clusters of DNA induced by ionizing radiation: formation of short DNA fragments. I. Theoretical modeling

    NASA Technical Reports Server (NTRS)

    Holley, W. R.; Chatterjee, A.

    1996-01-01

    We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber comprised of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between the primary track core, resulting from glancing collisions with 100 eV or less per event, and delta rays due to knock-on collisions involving energy transfers >100 eV. A Monte Carlo simulation incorporates damages due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of OH, H, e-(aq), etc.; (2) OH attack on sugar molecules leading to strand breaks; (3) OH attack on bases; (4) direct ionization of the sugar molecules leading to strand breaks; (5) direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 bp, and over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. The shapes of the spectra of DNA fragment lengths depend on the symmetries or approximate symmetries of the chromatin structure. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper (B. Rydberg, Radiat. Res. 145, 200-209, 1996) after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations. Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the

  1. Empathy and child neglect: a theoretical model.

    PubMed

    De Paul, Joaquín; Guibert, María

    2008-11-01

    To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model, parental behavior aimed at satisfying a child's need is considered a helping behavior and, as a consequence, child neglect is considered a specific type of non-helping behavior. The central hypothesis of the theoretical model presented here suggests that neglectful parents cannot develop the helping response set to care for their children either because the observation of a child's signal of need does not lead to the experience of emotions that motivate helping, or because the parents experience these emotions but specific cognitions modify the motivation to help. The present theoretical model suggests that different typologies of neglectful parents could be developed based on the different reasons that parents might fail to experience emotions that motivate helping behaviors. The model can be helpful in promoting new empirical studies on the etiology of different groups of neglectful families.

  2. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
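
    A stripped-down numerical illustration of the partial-volume effect at stake: blurring a hot lesion with a Gaussian PSF depresses its contrast recovery coefficient (CRC), and more so for smaller lesions. This stands in for intuition only; it is not the paper's OS-EM reconstruction framework:

    ```python
    # Gaussian-PSF blur lowers the contrast recovery coefficient of small hot
    # lesions (toy stand-in for the paper's OS-EM quantitation study).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    bg, hot, fwhm_px = 1.0, 4.0, 5.0              # background, lesion, PSF width
    sigma = fwhm_px / 2.355                       # FWHM -> Gaussian sigma
    yy, xx = np.mgrid[:128, :128]

    for radius in (3, 6, 12):                     # lesion radii in pixels
        img = np.full((128, 128), bg)
        lesion = (xx - 64) ** 2 + (yy - 64) ** 2 < radius**2
        img[lesion] = hot
        crc = (gaussian_filter(img, sigma)[lesion].mean() / bg - 1) / (hot / bg - 1)
        print(f"lesion radius {radius:2d} px -> CRC(mean) = {crc:.2f}")
    ```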

  3. A Quantitative Theoretical Framework For Protein-Induced Fluorescence Enhancement-Förster-Type Resonance Energy Transfer (PIFE-FRET).

    PubMed

    Lerner, Eitan; Ploetz, Evelyn; Hohlbein, Johannes; Cordes, Thorben; Weiss, Shimon

    2016-07-07

    Single-molecule, protein-induced fluorescence enhancement (PIFE) serves as a molecular ruler at molecular distances inaccessible to other spectroscopic rulers such as Förster-type resonance energy transfer (FRET) or photoinduced electron transfer. In order to provide two simultaneous measurements of two distances on different molecular length scales for the analysis of macromolecular complexes, we and others recently combined measurements of PIFE and FRET (PIFE-FRET) on the single molecule level. PIFE relies on steric hindrance of the fluorophore Cy3, which is covalently attached to a biomolecule of interest, to rotate out of an excited-state trans isomer to the cis isomer through a 90° intermediate. In this work, we provide a theoretical framework that accounts for relevant photophysical and kinetic parameters of PIFE-FRET, show how this framework allows the extraction of the fold-decrease in isomerization mobility from experimental data, and show how these results provide information on changes in the accessible volume of Cy3. The utility of this model is then demonstrated for experimental results on PIFE-FRET measurement of different protein-DNA interactions. The proposed model and extracted parameters could serve as a benchmark to allow quantitative comparison of PIFE effects in different biological systems.
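
    For context, the FRET arm of the combined ruler follows the textbook Förster relation E = 1 / (1 + (r/R0)^6); the snippet below simply evaluates that curve for an illustrative Förster radius (the R0 value is not from the paper):

    ```python
    # Textbook Foerster relation used by the FRET arm of PIFE-FRET:
    # E = 1 / (1 + (r/R0)**6), with R0 the Foerster radius of the dye pair.
    import numpy as np

    r0 = 6.0                                  # Foerster radius in nm (illustrative)
    r = np.array([3.0, 5.0, 6.0, 7.0, 9.0])   # donor-acceptor distances in nm
    for ri, ei in zip(r, 1.0 / (1.0 + (r / r0) ** 6)):
        print(f"r = {ri:.1f} nm -> FRET efficiency E = {ei:.2f}")
    ```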

  4. A Quantitative Approach to Assessing System Evolvability

    NASA Technical Reports Server (NTRS)

    Christian, John A., III

    2004-01-01

    When selecting a system from multiple candidates, the customer seeks the one that best meets his or her needs. Recently the desire for evolvable systems has become more important and engineers are striving to develop systems that accommodate this need. In response to this search for evolvability, we present a historical perspective on evolvability, propose a refined definition of evolvability, and develop a quantitative method for measuring this property. We address this quantitative methodology from both a theoretical and practical perspective. This quantitative model is then applied to the problem of evolving a lunar mission to a Mars mission as a case study.

  5. Testing a Theoretical Model of Immigration Transition and Physical Activity.

    PubMed

    Chang, Sun Ju; Im, Eun-Ok

    2015-01-01

    The purposes of the study were to develop a theoretical model to explain the relationships between immigration transition and midlife women's physical activity and to test the relationships among the major variables of the model. A theoretical model, which was developed based on transitions theory and the midlife women's attitudes toward physical activity theory, consists of 4 major variables, including length of stay in the United States, country of birth, level of acculturation, and midlife women's physical activity. To test the theoretical model, a secondary analysis of data from 127 Hispanic women and 123 non-Hispanic (NH) Asian women in a national Internet study was used. Among the major variables of the model, length of stay in the United States was negatively associated with physical activity in Hispanic women. Level of acculturation in NH Asian women was positively correlated with women's physical activity. Country of birth and level of acculturation were significant factors that influenced physical activity in both Hispanic and NH Asian women. The findings support the theoretical model that was developed to examine relationships between immigration transition and physical activity; they show that immigration transition can play an essential role in influencing health behaviors of immigrant populations in the United States. The theoretical model can be widely used in nursing practice and research that focus on immigrant women and their health behaviors. Health care providers need to consider the influences of immigration transition to promote immigrant women's physical activity.

  6. Expanding Panjabi's stability model to express movement: a theoretical model.

    PubMed

    Hoffman, J; Gabel, P

    2013-06-01

    Novel theoretical models of movement have historically inspired the creation of new methods for the application of human movement. The landmark theoretical model of spinal stability by Panjabi in 1992 led to the creation of an exercise approach to spinal stability. This approach, however, was later challenged, most significantly due to a lack of favourable clinical effect. The concepts explored in this paper address the deficiencies of Panjabi's model and then propose an evolution and expansion from a specific model of stability to a general one of movement. It is proposed that two body-wide symbiotic elements are present within all movement systems: stability and mobility. The justification for this is derived from the observable clinical environment. It is clinically recognised that these two elements are present and identifiable throughout the body in different joints and muscles, and the neural conduction system. In order to generalise the Panjabi model of stability to include and illustrate movement, a matching parallel mobility system with the same subsystems was conceptually created. In this expanded theoretical model, the new mobility system is placed beside the existing stability system and subsystems. The ability of both stability and mobility systems to work in harmony will subsequently determine the quality of movement. Conversely, malfunction of either system, or their subsystems, will deleteriously affect all other subsystems and consequently overall movement quality. For this reason, in the rehabilitation exercise environment, focus should be placed on the simultaneous involvement of both the stability and mobility systems. It is suggested that the individual's relevant functional harmonious movements should be challenged at the highest possible level without pain or discomfort. It is anticipated that this conceptual expansion of the theoretical model of stability to one with the symbiotic inclusion of mobility will provide new understandings.

  7. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
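
    One of the core equations emphasized in quantitative geomorphology texts of this kind is the hillslope diffusion equation, dz/dt = D d2z/dx2. The sketch below is a generic explicit finite-difference solution, not code from the book's website; the diffusivity, grid, and fault-scarp initial condition are illustrative.

    ```python
    import numpy as np

    def evolve_hillslope(z, D=0.01, dx=1.0, dt=10.0, n_steps=1000):
        """Explicit finite-difference solution of dz/dt = D * d2z/dx2.

        Stability of this explicit scheme requires D*dt/dx**2 <= 0.5.
        """
        z = z.copy()
        for _ in range(n_steps):
            curvature = (np.roll(z, -1) - 2 * z + np.roll(z, 1)) / dx**2
            z += D * dt * curvature
            z[0], z[-1] = 0.0, 0.0   # fixed base-level boundary conditions
        return z

    # A fault scarp relaxing by diffusion:
    x = np.arange(100.0)
    scarp = np.where(x < 50, 1.0, 0.0)
    print(evolve_hillslope(scarp)[45:55].round(3))
    ```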

  9. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: I, theoretical foundations.

    PubMed

    Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2015-05-01

    The Raman spectroscopic method has been quantitatively applied to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites probed by the Raman microprobe. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and the proposed Raman method was validated by confirming the agreement between RTE values obtained from different samples.

  10. Theoretical kinetic studies of models for binding myosin subfragment-1 to regulated actin: Hill model versus Geeves model.

    PubMed Central

    Chen, Y; Yan, B; Chalovich, J M; Brenner, B

    2001-01-01

    It was previously shown that a one-dimensional Ising model could successfully simulate the equilibrium binding of myosin S1 to regulated actin filaments (T. L. Hill, E. Eisenberg and L. Greene, Proc. Natl. Acad. Sci. U.S.A. 77:3186-3190, 1980). However, the time course of myosin S1 binding to regulated actin was thought to be incompatible with this model, and a three-state model was subsequently developed (D. F. McKillop and M. A. Geeves, Biophys. J. 65:693-701, 1993). A quantitative analysis of the predicted time course of myosin S1 binding to regulated actin, however, was never done for either model. Here we present the procedure for the theoretical evaluation of the time course of myosin S1 binding for both models and then show that 1) the Hill model can predict the "lag" in the binding of myosin S1 to regulated actin that is observed in the absence of Ca++ when S1 is in excess of actin, and 2) both models generate very similar families of binding curves when [S1]/[actin] is varied. This result shows that, just based on the equilibrium and pre-steady-state kinetic binding data alone, it is not possible to differentiate between the two models. Thus, the model of Hill et al. cannot be ruled out on the basis of existing pre-steady-state and equilibrium binding data. Physical mechanisms underlying the generation of the lag in the Hill model are discussed. PMID:11325734
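
    To illustrate how a lag can arise when binding requires prior activation of sites, the following sketch integrates a deliberately generic two-step scheme; it is neither the Hill nor the Geeves model, and all rate constants are invented for the demonstration.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Illustrative two-step scheme (NOT the Hill or Geeves model itself):
    #   blocked <-> open   (slow regulatory switching)
    #   open + S1 -> bound (pseudo-first-order, since S1 is in excess)
    def rhs(y, t, k1, k2, kb):
        blocked, open_, bound = y
        switch = k1 * blocked - k2 * open_
        binding = kb * open_              # pseudo-first-order in excess S1
        return [-switch, switch - binding, binding]

    t = np.linspace(0, 2, 200)
    y = odeint(rhs, [1.0, 0.0, 0.0], t, args=(5.0, 20.0, 50.0))
    # The bound fraction y[:, 2] rises with a visible lag, because sites must
    # first switch out of the blocked state before S1 can attach.
    print(y[::40, 2].round(3))
    ```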

  11. The design and testing of a caring teaching model based on the theoretical framework of caring in the Chinese Context: a mixed-method study.

    PubMed

    Guo, Yujie; Shen, Jie; Ye, Xuchun; Chen, Huali; Jiang, Anli

    2013-08-01

    This paper aims to report the design and test the effectiveness of an innovative caring teaching model based on the theoretical framework of caring in the Chinese context. Since the 1970s, caring has been a core value in nursing education. In a previous study, a theoretical framework of caring in the Chinese context was explored employing a grounded theory approach and was considered beneficial for caring education. A caring teaching model was designed theoretically, and a one-group pre- and post-test quasi-experimental study was administered to test its effectiveness. From Oct 2009 to Jul 2010, a cohort of grade-2 undergraduate nursing students (n=64) in a Chinese medical school was recruited to participate in the study. Data were gathered through quantitative and qualitative methods to evaluate the effectiveness of the caring teaching model. The caring teaching model created an esthetic situation and experiential learning style for teaching caring that was integrated within the curricula. Quantitative data from the quasi-experimental study showed that the post-test scores of each item were higher than those on the pre-test (p<0.01). Thematic analysis of 1220 narratives from students' caring journals and reports of participant class observation revealed two main thematic categories, which reflected, from the students' points of view, the development of student caring character and the impact that the caring teaching model had in this regard. The model could be used as an integrated approach to teach caring in nursing curricula. It would also be beneficial for nursing administrators in cultivating caring nurse practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Critical Quantitative Inquiry in Context

    ERIC Educational Resources Information Center

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  13. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for descriptor selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA proving superior owing to its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR model exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
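
    A minimal sketch of the descriptor-selection idea, assuming synthetic data in place of the real descriptor matrix: a small genetic algorithm evolves boolean descriptor masks whose fitness is the RMSEP of a PLS model on a held-out split. The population size, rates, and stand-in data are all invented, and the authors' GA and validation scheme are more elaborate.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the descriptor matrix (the paper uses 83 peptides x 423 descriptors).
    X = rng.normal(size=(83, 60))
    y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=83)   # 5 truly relevant descriptors
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

    def rmsep(mask):
        """Fitness of a descriptor subset: RMSEP of a PLS model on the validation split."""
        if mask.sum() < 2:
            return np.inf
        pls = PLSRegression(n_components=min(2, int(mask.sum()))).fit(X_tr[:, mask], y_tr)
        pred = pls.predict(X_va[:, mask]).ravel()
        return float(np.sqrt(np.mean((pred - y_va) ** 2)))

    # Minimal GA: truncation selection, uniform crossover, bit-flip mutation.
    pop = rng.random((30, X.shape[1])) < 0.1
    for gen in range(40):
        fit = np.array([rmsep(m) for m in pop])
        elite = pop[np.argsort(fit)[:10]]
        children = []
        while len(children) < len(pop) - len(elite):
            a, b = elite[rng.integers(10)], elite[rng.integers(10)]
            child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # uniform crossover
            child ^= rng.random(X.shape[1]) < 0.01                 # bit-flip mutation
            children.append(child)
        pop = np.vstack([elite, children])

    best = pop[np.argmin([rmsep(m) for m in pop])]
    print("selected descriptors:", np.flatnonzero(best), "RMSEP:", round(rmsep(best), 4))
    ```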

  14. Dynamics in Higher Education Politics: A Theoretical Model

    ERIC Educational Resources Information Center

    Kauko, Jaakko

    2013-01-01

    This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…

  15. Quantitative confirmation of diffusion-limited oxidation theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-01-01

    Diffusion-limited (heterogeneous) oxidation effects are often important for studies of polymer degradation. Such effects are common in polymers subjected to ionizing radiation at relatively high dose rate. To better understand the underlying oxidation processes and to aid in the planning of accelerated aging studies, it would be desirable to be able to monitor and quantitatively understand these effects. In this paper, we briefly review a theoretical diffusion approach which derives model profiles for oxygen-surrounded sheets of material by combining oxygen permeation rates with kinetically based oxygen consumption expressions. The theory leads to a simple governing expression involving the oxygen consumption and permeation rates together with two model parameters α and β. To test the theory, gamma-initiated oxidation of a sheet of commercially formulated EPDM rubber was performed under conditions which led to diffusion-limited oxidation. Profile shapes from the theoretical treatments are shown to accurately fit experimentally derived oxidation profiles. In addition, direct measurements on the same EPDM material of the oxygen consumption and permeation rates, together with values of α and β derived from the fitting procedure, allow us to quantitatively confirm for the first time the governing theoretical relationship. 17 refs., 3 figs.
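
    The governing balance described in the abstract is, in generic form, a steady-state reaction-diffusion equation. The saturating consumption kinetics shown below is one commonly used form for oxidation modeling of this kind; the abstract's α and β are lumped parameters built from the consumption and permeation rates, and the exact correspondence (and kinetic form) used in the paper may differ.

    ```latex
    D \, \frac{d^{2} C}{d x^{2}} = R(C), \qquad R(C) = \frac{k_{1} C}{1 + k_{2} C}
    ```

    Here C is the dissolved oxygen concentration across the sheet thickness, D the oxygen diffusivity, and k_1, k_2 illustrative kinetic constants.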

  16. Theoretical Foundation for Weld Modeling

    NASA Technical Reports Server (NTRS)

    Traugott, S.

    1986-01-01

    Differential equations describe physics of tungsten/inert-gas and plasma-arc welding in aluminum. Report collects and describes necessary theoretical foundation upon which numerical welding model is constructed for tungsten/inert-gas or plasma-arc welding in aluminum without keyhole. Governing partial differential equations for flow of heat, metal, and current given, together with boundary conditions relevant to welding process. Numerical estimates for relative importance of various phenomena and required properties of 2219 aluminum included.

  17. Hybrid quantum teleportation: A theoretical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria

    2014-12-04

    Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.

  18. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  19. EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.

    ERIC Educational Resources Information Center

    Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith

    2002-01-01

    Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis, environmental chemistry, and fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)

  20. A Theoretical Model for Predicting Fracture Strength and Critical Flaw Size of the ZrB2-ZrC Composites at High Temperatures

    NASA Astrophysics Data System (ADS)

    Wang, Ruzhuan; Li, Xiaobo; Wang, Jing; Jia, Bi; Li, Weiguo

    2018-06-01

    This work presents a new rational theoretical model for quantitatively predicting the fracture strength and critical flaw size of ZrB2-ZrC composites at different temperatures, based on a newly proposed temperature-dependent fracture surface energy model and the Griffith criterion. The fracture model takes into account the combined effects of temperature and damage terms (surface flaws and internal flaws) with no fitting parameters. The predictions of fracture strength and critical flaw size of the ZrB2-ZrC composites at high temperatures agree well with experimental data. The theoretical method is then used to propose improvements in material design. The proposed model can be used to predict the fracture strength, identify the critical flaw, and study the effects of microstructures on the fracture mechanism of the ZrB2-ZrC composites at high temperatures, and thus could become a convenient, practical and economical means for predicting fracture properties and guiding material design.

  21. A simple theoretical model for ⁶³Ni betavoltaic battery.

    PubMed

    Zuo, Guoping; Zhou, Jianliang; Ke, Guotu

    2013-12-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for ⁶³Ni beta particles. Results show that the energy deposition distribution exhibits an approximately exponential decay law. A simple theoretical model is developed for a ⁶³Ni betavoltaic battery based on these distribution characteristics. The correctness of the model is validated against two experiments from the literature. The theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from the experimental results owing to the influence of PN-junction defects and the simplification of the source. The theoretical model can be applied to ⁶³Ni and ¹⁴⁷Pm betavoltaic batteries. Copyright © 2013 Elsevier Ltd. All rights reserved.
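
    The approximately exponential deposition law reported in the abstract can be written generically as

    ```latex
    \frac{dE}{dx}(x) \;\propto\; e^{-x/\lambda}
    ```

    where x is the depth into the semiconductor and λ is an effective attenuation length for the ⁶³Ni beta spectrum; this symbol and parameterization are illustrative rather than the authors' notation.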

  1. Network-Theoretic Modeling of Fluid Flow

    DTIC Science & Technology

    2015-07-29

    Final Report, STIR: Network-Theoretic Modeling of Fluid Flow. ARO Grant W911NF-14-1-0386. Program manager: Dr. Samuel Stanton. Period of performance: August 1, 2014–April 30…

  2. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on approximately 1-μm tethered beads.
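
    For the simple symmetric geometries mentioned, the on-axis ingredient reduces to a textbook Biot-Savart result. The sketch below evaluates the on-axis field of a single circular current loop and the gradient force on a bead with saturated magnetic moment; the loop radius, current, and bead moment are invented stand-ins, not the instrument's actual magnet parameters.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

    def b_axial(z, radius=2.5e-3, current=1.0e3):
        """On-axis field of a circular current loop (textbook Biot-Savart result)."""
        return MU0 * current * radius**2 / (2.0 * (radius**2 + z**2) ** 1.5)

    def force_on_saturated_bead(z, m_sat=1.5e-14, dz=1e-6):
        """F = m_sat * dB/dz for a bead with saturated moment m_sat (A*m^2).

        Negative values mean the bead is pulled toward the loop.
        """
        return m_sat * (b_axial(z + dz) - b_axial(z - dz)) / (2 * dz)

    for z_mm in (0.5, 1.0, 2.0):
        z = z_mm * 1e-3
        print(f"z = {z_mm} mm: B = {b_axial(z)*1e3:.1f} mT, "
              f"F = {force_on_saturated_bead(z)*1e12:.2f} pN")
    ```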

  3. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  4. Model-theoretic framework for sensor data fusion

    NASA Astrophysics Data System (ADS)

    Zavoleas, Kyriakos P.; Kokar, Mieczyslaw M.

    1993-09-01

    The main goal of our research in sensor data fusion (SDF) is the development of a systematic approach (a methodology) to designing systems for interpreting sensory information and for reasoning about the situation based upon this information and upon available data bases and knowledge bases. To achieve such a goal, two kinds of subgoals have been set: (1) develop a theoretical framework in which rational design/implementation decisions can be made, and (2) design a prototype SDF system along the lines of the framework. Our initial design of the framework has been described in our previous papers. In this paper we concentrate on the model-theoretic aspects of this framework. We postulate that data are embedded in data models, and information processing mechanisms are embedded in model operators. The paper is devoted to analyzing the classes of model operators and their significance in SDF. We investigate transformation, abstraction, and fusion operators. A prototype SDF system, fusing data from range and intensity sensors, is presented, exemplifying the structures introduced. Our framework is justified by the fact that it provides modularity, traceability of information flow, and a basis for a specification language for SDF.

  5. Dependence of tropical cyclone development on coriolis parameter: A theoretical model

    NASA Astrophysics Data System (ADS)

    Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda

    2018-03-01

    A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.

  6. Model-Based Linkage Analysis of a Quantitative Trait.

    PubMed

    Song, Yeunjoo E; Song, Sunah; Schnell, Audrey H

    2017-01-01

    Linkage analysis is a family-based method used to examine whether any typed genetic markers cosegregate with a given trait, in this case a quantitative trait. If linkage exists, this is taken as evidence in support of a genetic basis for the trait. Historically, linkage analysis was performed using a binary disease trait, but it has been extended to include quantitative disease measures. Quantitative traits are desirable as they provide more information than binary traits. Linkage analysis can be performed using single-marker methods (one marker at a time) or multipoint methods (using multiple markers simultaneously). In model-based linkage analysis the genetic model for the trait of interest is specified. There are many software options for performing linkage analysis. Here, we use the program package Statistical Analysis for Genetic Epidemiology (S.A.G.E.). S.A.G.E. was chosen because it also includes programs to perform data cleaning procedures and to generate and test genetic models for a quantitative trait, in addition to performing linkage analysis. We demonstrate in detail the process of running the program LODLINK to perform single-marker analysis, and MLOD to perform multipoint analysis using output from SEGREG, where SEGREG was used to determine the best fitting statistical model for the trait.

  7. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

    An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80 2005), which was analyzed by quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. By the analysis we propose a generalized model definition for the quantum secret sharing schemes. In our model, there are more quantum access structures which can be realized by our generalized quantum secret sharing schemes than those of the previous one. In addition, we also analyse two kinds of important quantum access structures to illustrate the existence and rationality for the generalized quantum secret sharing schemes and consider the security of the scheme by simple examples.

  8. Graph theoretical modeling of baby brain networks.

    PubMed

    Zhao, Tengda; Xu, Yuehua; He, Yong

    2018-06-12

    The human brain undergoes explosive growth during the prenatal period and the first few postnatal years, establishing an early infrastructure for the later development of behaviors and cognition. Revealing the developmental rules during this early phase is essential in understanding the emergence of brain function and the origin of developmental disorders. Graph-theoretical network modeling in combination with multiple neuroimaging probes provides an important research framework to explore early development of the topological wiring and organizational paradigms of the brain. Here, we reviewed studies which employed neuroimaging and graph-theoretical modeling to investigate brain network development from approximately 20 gestational weeks to 2 years of age. Specifically, the structural and functional brain networks evolve toward highly efficient topological architectures in this early stage, with the structural network remaining ahead and paving the way for the development of the functional network. The brain network develops in a heterogeneous order, from primary to higher-order systems and from a tendency of network segregation to network integration in the prenatal and postnatal periods. The early brain network topologies show abilities in predicting certain cognitive and behavioral performance in later life, and their impairments are likely to continue into childhood and even adulthood. These macroscopic topological changes are found to be associated with possible microstructural maturation, such as axonal growth and myelination. Collectively, this review provides a detailed delineation of the early changes of the baby brain in the graph-theoretical modeling framework, which opens up a new avenue to understand the developmental principles of the connectome. Copyright © 2018. Published by Elsevier Inc.

  9. A review of game-theoretic models of road user behaviour.

    PubMed

    Elvik, Rune

    2014-01-01

    This paper reviews game-theoretic models that have been developed to explain road user behaviour in situations where road users interact with each other. The paper includes the following game-theoretic models: (1) a general model of the interaction between road users and their possible reaction to measures improving safety (behavioural adaptation); (2) choice of vehicle size as a Prisoners' dilemma game; (3) speed choice as a co-ordination game; (4) speed compliance as a game between drivers and the police; (5) merging into traffic from an acceleration lane as a mixed-strategy game; (6) choice of level of attention in following situations as an evolutionary game; (7) choice of departure time to avoid congestion as a variant of a Prisoners' dilemma game; (8) interaction between cyclists crossing the road and car drivers; (9) dipping headlights at night well ahead of the point when glare becomes noticeable; and (10) choice of evasive action in a situation when cars are on a collision course. The models reviewed are different in many respects, but a common feature of the models is that they can explain how informal norms of behaviour can develop among road users and be sustained even if these informal norms violate the formal regulations of the traffic code. Game-theoretic models are not applicable to every conceivable interaction between road users or to situations in which road users choose behaviour without interacting with other road users. Nevertheless, it is likely that game-theoretic models can be applied more widely than they have been until now. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, where the boundary conditions describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed

  11. A new simple local muscle recovery model and its theoretical and experimental validation.

    PubMed

    Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu

    2015-01-01

    This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted by using the recovery model, and individual recovery rates were calculated after fitting. Good fitting values (r² > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after fatiguing operation. The determined recovery rate may be useful to represent individual recovery attributes.
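
    A minimal sketch of fitting a recovery profile to extract an individual recovery rate, assuming a simple exponential recovery form and hypothetical grip-force data; the paper's actual model and measurements differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def recovery(t, f0, R):
        """Fraction of rested strength at time t after fatigue.

        Assumed exponential form (illustrative, not necessarily the paper's model):
        starts at f0 (< 1) and recovers toward 1 with individual rate R (1/min).
        """
        return f0 + (1.0 - f0) * (1.0 - np.exp(-R * t))

    # Hypothetical grip-force measurements (fraction of rested maximum) after a fatiguing task.
    t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 15.0])   # minutes after task
    f = np.array([0.62, 0.71, 0.78, 0.87, 0.94, 0.98])
    (f0, R), _ = curve_fit(recovery, t, f, p0=(0.5, 0.3))
    residuals = f - recovery(t, f0, R)
    r2 = 1 - np.sum(residuals**2) / np.sum((f - f.mean())**2)
    print(f"f0 = {f0:.2f}, recovery rate R = {R:.2f} 1/min, r^2 = {r2:.3f}")
    ```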

  12. Quantitative Theoretical and Conceptual Framework Use in Agricultural Education Research

    ERIC Educational Resources Information Center

    Kitchel, Tracy; Ball, Anna L.

    2014-01-01

    The purpose of this philosophical paper was to articulate the disciplinary tenets for consideration when using theory in agricultural education quantitative research. The paper clarified terminology around the concept of theory in social sciences and introduced inaccuracies of theory use in agricultural education quantitative research. Finally,…

  13. Theoretical model for optical properties of porphyrin

    NASA Astrophysics Data System (ADS)

    Phan, Anh D.; Nga, Do T.; Phan, The-Long; Thanh, Le T. M.; Anh, Chu T.; Bernad, Sophie; Viet, N. A.

    2014-12-01

    We propose a simple model to interpret the optical absorption spectra of porphyrin in different solvents. Our model successfully explains the decrease in the intensity of optical absorption at maxima of increased wavelengths. We also prove the dependence of the intensity and peak positions in the absorption spectra on the environment. The nature of the Soret band is supposed to derive from π plasmon. Our theoretical calculations are consistent with previous experimental studies.

  14. Propagation studies using a theoretical ionosphere model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M.K.

    1973-03-01

    The mid-latitude ionospheric and neutral atmospheric models are coupled with an advanced three-dimensional ray-tracing program to see what success would be obtained in predicting the wave propagation conditions and to study to what extent the use of theoretical ionospheric models is practical. The Penn State MK 1 ionospheric model, the Mitra-Rowe D-region model, and the Groves' neutral atmospheric model are used throughout this work to represent the real electron densities and collision frequencies. The Faraday rotation and differential Doppler velocities from satellites, the propagation modes for long-distance high-frequency propagation, the group delays for each mode, the ionospheric absorption, and the spatial loss are all predicted.

  15. Quantitative analysis of diffusion tensor orientation: theoretical framework.

    PubMed

    Wu, Yu-Chien; Field, Aaron S; Chung, Moo K; Badie, Benham; Alexander, Andrew L

    2004-11-01

    Diffusion-tensor MRI (DT-MRI) yields information about the magnitude, anisotropy, and orientation of water diffusion of brain tissues. Although white matter tractography and eigenvector color maps provide visually appealing displays of white matter tract organization, they do not easily lend themselves to quantitative and statistical analysis. In this study, a set of visual and quantitative tools for the investigation of tensor orientations in the human brain was developed. Visual tools included rose diagrams, which are spherical coordinate histograms of the major eigenvector directions, and 3D scatterplots of the major eigenvector angles. A scatter matrix of major eigenvector directions was used to describe the distribution of major eigenvectors in a defined anatomic region. A measure of eigenvector dispersion was developed to describe the degree of eigenvector coherence in the selected region. These tools were used to evaluate directional organization and the interhemispheric symmetry of DT-MRI data in five healthy human brains and two patients with infiltrative diseases of the white matter tracts. In normal anatomical white matter tracts, a high degree of directional coherence and interhemispheric symmetry was observed. The infiltrative diseases appeared to alter the eigenvector properties of affected white matter tracts, showing decreased eigenvector coherence and interhemispheric symmetry. This novel approach distills the rich, 3D information available from the diffusion tensor into a form that lends itself to quantitative analysis and statistical hypothesis testing. (c) 2004 Wiley-Liss, Inc.
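
    One standard directional-statistics construction for such a dispersion measure is sketched below with synthetic unit vectors: build the sign-invariant scatter matrix of the major eigenvectors and compare its leading eigenvalue against the coherent and isotropic limits. The specific definition used in the paper may differ.

    ```python
    import numpy as np

    def eigenvector_dispersion(vecs):
        """Dispersion of major-eigenvector directions in a region of interest.

        vecs: (n, 3) array of unit vectors (sign-ambiguous, as DT-MRI eigenvectors are).
        The scatter matrix T = mean(v v^T) has leading eigenvalue 1 for perfectly
        coherent directions and 1/3 for uniformly dispersed ones.
        """
        T = np.einsum('ni,nj->ij', vecs, vecs) / len(vecs)   # sign-invariant scatter matrix
        lam = np.sort(np.linalg.eigvalsh(T))[::-1]
        return 1.0 - lam[0]   # 0 = fully coherent, 2/3 = isotropic

    rng = np.random.default_rng(3)
    coherent = rng.normal([0, 0, 1], 0.05, size=(500, 3))
    coherent /= np.linalg.norm(coherent, axis=1, keepdims=True)
    random_dirs = rng.normal(size=(500, 3))
    random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)
    print("coherent ROI: ", round(eigenvector_dispersion(coherent), 3))
    print("isotropic ROI:", round(eigenvector_dispersion(random_dirs), 3))
    ```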

  16. Modeling with Young Students--Quantitative and Qualitative.

    ERIC Educational Resources Information Center

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  17. Development of Nomarski microscopy for quantitative determination of surface topography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, J. S.; Gordon, R. L.; Lessor, D. L.

    1979-01-01

    The use of Nomarski differential interference contrast (DIC) microscopy has been extended to provide nondestructive, quantitative analysis of a sample's surface topography. Theoretical modeling has determined the dependence of the image intensity on the microscope's optical components, the sample's optical properties, and the sample's surface orientation relative to the microscope. Results include expressions to allow the inversion of image intensity data to determine sample surface slopes. A commercial Nomarski system has been modified and characterized to allow the evaluation of the optical model. Data have been recorded with smooth, planar samples that verify the theoretical predictions.

  18. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  19. Propagation studies using a theoretical ionosphere model

    NASA Technical Reports Server (NTRS)

    Lee, M.

    1973-01-01

    The mid-latitude ionospheric and neutral atmospheric models are coupled with an advanced three dimensional ray tracing program to see what success would be obtained in predicting the wave propagation conditions and to study to what extent the use of theoretical ionospheric models is practical. The Penn State MK 1 ionospheric model, the Mitra-Rowe D region model, and the Groves' neutral atmospheric model are used throughout this work to represent the real electron densities and collision frequencies. The Faraday rotation and differential Doppler velocities from satellites, the propagation modes for long distance high frequency propagation, the group delays for each mode, the ionospheric absorption, and the spatial loss are all predicted.

  20. Quantitative model validation of manipulative robot systems

    NASA Astrophysics Data System (ADS)

    Kartowisastro, Iman Herwidiana

    This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the common visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, which explain the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, where all links are assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. A conventional feedback control system is used in developing the model. The system behavior under parameter changes is investigated, as some parameters are redundant; this work is important so that the most important parameters to be distorted can be selected, and it leads to a new term, the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and better understanding of the model properties. After several improvements in the model, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of

  1. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    PubMed Central

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary apparently and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable for II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467

  2. A theoretical model to predict both horizontal displacement and vertical displacement for electromagnetic induction-based deep displacement sensors.

    PubMed

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary apparently and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable for II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.
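
    The equivalent-loop idea reduces the sensor to pairs of current loops whose mutual inductance can be integrated numerically. A minimal sketch using the standard Neumann double line integral for two coaxial circular loops is given below; the loop radii and separations are hypothetical, and this brute-force quadrature is only a stand-in for the NIELA integration scheme.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

    def mutual_inductance(r1, r2, d, n=400):
        """Neumann double line integral for two coaxial circular loops.

        M = (mu0 / 4 pi) * oint oint (dl1 . dl2) / |x1 - x2|,
        evaluated by midpoint quadrature; d is the axial separation (d > 0).
        """
        phi = (np.arange(n) + 0.5) * 2 * np.pi / n
        # Points and tangential line elements on each loop.
        p1 = np.stack([r1 * np.cos(phi), r1 * np.sin(phi), np.zeros(n)], axis=1)
        p2 = np.stack([r2 * np.cos(phi), r2 * np.sin(phi), np.full(n, d)], axis=1)
        dl1 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1) * r1 * 2 * np.pi / n
        dl2 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1) * r2 * 2 * np.pi / n
        dist = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
        return MU0 / (4 * np.pi) * np.sum((dl1 @ dl2.T) / dist)

    # Mutual inductance falls off as the axial separation grows (values hypothetical):
    for d in (0.05, 0.10, 0.20):
        print(f"d = {d:.2f} m: M = {mutual_inductance(0.10, 0.08, d)*1e9:.2f} nH")
    ```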

  3. Testing a theoretical model of clinical nurses' intent to stay.

    PubMed

    Cowden, Tracy L; Cummings, Greta G

    2015-01-01

    Published theoretical models of nurses' intent to stay (ITS) report inconsistent outcomes, and not all hypothesized models have been adequately tested. Research has focused on cognitive rather than emotional determinants of nurses' ITS. The aim of this study was to empirically verify a complex theoretical model of nurses' ITS that includes both affective and cognitive determinants and to explore the influence of relational leadership on staff nurses' ITS. The study used a correlational, mixed-method, nonexperimental design. A subsample of the Quality Work Environment Study survey data 2009 (n = 415 nurses) was used to test our theoretical model of clinical nurses' ITS as a structural equation model. The model explained 63% of variance in ITS. Organizational commitment, empowerment, and desire to stay were the model concepts with the strongest effects on nurses' ITS. Leadership practices indirectly influenced ITS. How nurses evaluate and respond to their work environment is both an emotional and a rational process. Health care organizations need to be cognizant of the influence that nurses' feelings and views of their work setting have on their intention decisions and integrate that knowledge into the development of retention strategies. Leadership practices play an important role in staff nurses' perceptions of the workplace. Identifying the mechanisms by which leadership influences staff nurses' intentions to stay presents additional focus areas for developing retention strategies.

  4. What Are We Doing When We Translate from Quantitative Models?

    PubMed Central

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533
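
    A canonical example of a behavior principle stated as an equation, in the sense discussed here, is the generalized matching law (chosen for illustration; the article itself surveys several quantitative models):

    ```latex
    \log\!\left(\frac{B_1}{B_2}\right) = s \,\log\!\left(\frac{R_1}{R_2}\right) + \log b
    ```

    where B_1 and B_2 are response rates on two alternatives, R_1 and R_2 the corresponding reinforcement rates, s the sensitivity, and b the bias. The slope and intercept carry quantitative content that a purely narrative statement of "matching" omits, which is precisely the article's point.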

  5. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  6. Theoretical models for coronary vascular biomechanics: Progress & challenges

    PubMed Central

    Waters, Sarah L.; Alastruey, Jordi; Beard, Daniel A.; Bovendeerd, Peter H.M.; Davies, Peter F.; Jayaraman, Girija; Jensen, Oliver E.; Lee, Jack; Parker, Kim H.; Popel, Aleksander S.; Secomb, Timothy W.; Siebes, Maria; Sherwin, Spencer J.; Shipley, Rebecca J.; Smith, Nicolas P.; van de Vosse, Frans N.

    2013-01-01

    A key aim of the cardiac Physiome Project is to develop theoretical models to simulate the functional behaviour of the heart under physiological and pathophysiological conditions. Heart function is critically dependent on the delivery of an adequate blood supply to the myocardium via the coronary vasculature. Key to this critical function of the coronary vasculature is system dynamics that emerge via the interactions of the numerous constituent components at a range of spatial and temporal scales. Here, we focus on several components for which theoretical approaches can be applied, including vascular structure and mechanics, blood flow and mass transport, flow regulation, angiogenesis and vascular remodelling, and vascular cellular mechanics. For each component, we summarise the current state of the art in model development, and discuss areas requiring further research. We highlight the major challenges associated with integrating the component models to develop a computational tool that can ultimately be used to simulate the responses of the coronary vascular system to changing demands and to diseases and therapies. PMID:21040741

  7. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  8. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

    We present theoretical models for protostellar binary and multiple systems based on the high-resolution numerical simulation with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of the binary and multiple star formation with high spatial resolutions. These observations should be compared with theoretical models with high spatial resolutions. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the model for MC27, we performed numerical simulations for gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the model for L1551 NE, we performed numerical simulations of gas accretion onto protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has been also observed by ALMA in the circumbinary disk of L1551 NE.

  9. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
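
    A minimal sketch of the simulation ingredient named in the abstract: Euler integration of Kuramoto phase oscillators coupled through a connectivity matrix. A random symmetric matrix stands in for the empirical structural connectome, and the frequencies, coupling, and step size are invented.

    ```python
    import numpy as np

    def simulate_kuramoto(K, omega, coupling=0.5, dt=0.01, n_steps=5000, seed=0):
        """Euler integration of d(theta_i)/dt = omega_i + c * sum_j K_ij sin(theta_j - theta_i)."""
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0, 2 * np.pi, size=len(omega))
        history = np.empty((n_steps, len(omega)))
        for t in range(n_steps):
            phase_diff = theta[None, :] - theta[:, None]   # element (i, j) = theta_j - theta_i
            theta = theta + dt * (omega + coupling * np.sum(K * np.sin(phase_diff), axis=1))
            history[t] = theta
        return history

    # Stand-in for a 66-region structural connectivity matrix (the paper uses empirical data).
    rng = np.random.default_rng(42)
    K = rng.random((66, 66)); K = (K + K.T) / 2; np.fill_diagonal(K, 0)
    omega = rng.normal(1.0, 0.1, size=66)   # intrinsic frequencies (arbitrary units)
    phases = simulate_kuramoto(K, omega)
    # "Functional connectivity" between regions: correlation of sin(theta) time courses.
    fc = np.corrcoef(np.sin(phases[1000:]).T)
    print("mean simulated FC:", round(fc[np.triu_indices(66, 1)].mean(), 3))
    ```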

  10. New Theoretical Model of Nerve Conduction in Unmyelinated Nerves

    PubMed Central

    Akaishi, Tetsuya

    2017-01-01

    Nerve conduction in unmyelinated fibers has long been described with the equivalent circuit model and cable theory. However, without a change in the ionic concentration gradient across the membrane, there would be no generation or propagation of the action potential. Based on this concept, we employ a new conductive model focusing on the distribution of voltage-gated sodium channels and the Coulomb force between electrolytes. In this model, propagation of nerve conduction is suggested to begin well before the action potential is generated at each channel. We show theoretically that propagation of the action potential, enabled by the increasing Coulomb force produced by inflowing sodium ions, from one sodium channel to the next would be inversely proportional to the density of sodium channels on the axon membrane. Because the longitudinal number of sodium channels is proportional to the square root of the channel density, the conduction velocity of unmyelinated nerves is theoretically shown to be proportional to the square root of the channel density. In addition, from the viewpoint of an equilibrium between channel importation and degradation, the channel density is suggested to be proportional to the axonal diameter. On this simple basis, the conduction velocity in unmyelinated nerves is theoretically shown to be proportional to the square root of the axonal diameter. The new model would also give a more accurate and intuitive picture of the phenomena in unmyelinated nerves than the conventional electric circuit model and cable theory alone. PMID:29081751
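
    The scaling argument of the abstract can be written compactly; the symbols below (rho for channel density, d for axonal diameter, v for conduction velocity) are shorthand introduced here, not the paper's notation:

    ```latex
    v \propto \sqrt{\rho}, \qquad \rho \propto d
    \quad\Longrightarrow\quad v \propto \sqrt{d}
    ```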

  11. Growth of wormlike micelles in nonionic surfactant solutions: Quantitative theory vs. experiment.

    PubMed

    Danov, Krassimir D; Kralchevsky, Peter A; Stoyanov, Simeon D; Cook, Joanne L; Stott, Ian P; Pelan, Eddie G

    2018-06-01

    Despite the considerable advances of the molecular-thermodynamic theory of micelle growth, agreement between theory and experiment has been achieved only in isolated cases. A general theory that can provide a self-consistent quantitative description of the growth of wormlike micelles in mixed surfactant solutions, including the experimentally observed high peaks in viscosity and aggregation number, is still missing. As a step toward such a theory, here we consider the simplest system: nonionic wormlike surfactant micelles from polyoxyethylene alkyl ethers, CiEj. Our goal is to construct a molecular-thermodynamic model that agrees with the available experimental data. To this end, we systematized data for the micelle mean mass aggregation number, from which the micelle growth parameter was determined at various temperatures. None of the available models gives a quantitative description of these data. We constructed a new model based on theoretical expressions for the interfacial-tension, headgroup-steric, and chain-conformation components of the micelle free energy, along with appropriate expressions for the model parameters, including their temperature and curvature dependencies. Special attention was paid to the surfactant chain-conformation free energy, for which a new, more general formula was derived. As a result, relatively simple theoretical expressions are obtained. All parameters that enter these expressions are known, which facilitates the theoretical modeling of micelle growth for various nonionic surfactants in excellent agreement with experiment. The constructed model can serve as a basis to be further upgraded toward a quantitative description of micelle growth in more complicated systems, including binary and ternary mixtures of nonionic, ionic, and zwitterionic surfactants, which determine the viscosity and stability of various formulations in personal-care and household detergency.

  12. Theoretical Approaches in Evolutionary Ecology: Environmental Feedback as a Unifying Perspective.

    PubMed

    Lion, Sébastien

    2018-01-01

    Evolutionary biology and ecology have a strong theoretical underpinning, and this has fostered a variety of modeling approaches. A major challenge of this theoretical work has been to unravel the tangled feedback loop between ecology and evolution. This has prompted the development of two main classes of models. While quantitative genetics models jointly consider the ecological and evolutionary dynamics of a focal population, a separation of timescales between ecology and evolution is assumed by evolutionary game theory, adaptive dynamics, and inclusive fitness theory. As a result, theoretical evolutionary ecology tends to be divided among different schools of thought, with different toolboxes and motivations. My aim in this synthesis is to highlight the connections between these different approaches and clarify the current state of theory in evolutionary ecology. Central to this approach is to make explicit the dependence on environmental dynamics of the population and evolutionary dynamics, thereby materializing the eco-evolutionary feedback loop. This perspective sheds light on the interplay between environmental feedback and the timescales of ecological and evolutionary processes. I conclude by discussing some potential extensions and challenges to our current theoretical understanding of eco-evolutionary dynamics.

  13. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
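
    To make the heuristic's process predictions concrete, here is a minimal sketch of the priority heuristic for two-outcome gain gambles; the published version also rounds aspiration levels to prominent numbers, which is omitted here, so this is an illustration rather than the authors' exact specification:

    ```python
    def priority_heuristic(g1, g2):
        """Choose between two gain gambles, each (min_out, p_min, max_out).
        Reasons are examined in order: minimum gain, probability of the
        minimum gain, maximum gain (after Brandstaetter et al., 2006)."""
        min1, pmin1, max1 = g1
        min2, pmin2, max2 = g2
        aspiration = 0.1 * max(max1, max2)      # 1/10 of the maximum gain
        if abs(min1 - min2) >= aspiration:      # reason 1: minimum gains
            return g1 if min1 > min2 else g2
        if abs(pmin1 - pmin2) >= 0.1:           # reason 2: p of the minimum
            return g1 if pmin1 < pmin2 else g2  # lower p of minimum is better
        return g1 if max1 > max2 else g2        # reason 3: maximum gains

    A = (0, 0.2, 4000)      # 4000 with p = .8, otherwise 0
    B = (3000, 1.0, 3000)   # 3000 for sure (min = max)
    print(priority_heuristic(A, B))  # picks the sure 3000 for this classic item
    ```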

  14. Theoretical modeling of critical temperature increase in metamaterial superconductors

    NASA Astrophysics Data System (ADS)

    Smolyaninov, Igor I.; Smolyaninova, Vera N.

    2016-05-01

    Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modeling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modeling and experimental results in both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2-, and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach ~250 K.
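
    For readers who want to experiment, a minimal sketch of the Maxwell-Garnett mixing formula used in this kind of modeling follows; the permittivity values are illustrative assumptions, not those of the Al-Al2O3 system:

    ```python
    import numpy as np

    def maxwell_garnett(eps_i, eps_m, f):
        """Effective permittivity of spherical inclusions (eps_i, volume
        fraction f) in a host matrix (eps_m), Maxwell-Garnett approximation."""
        return eps_m * (eps_i * (1 + 2*f) + 2*eps_m * (1 - f)) \
                     / (eps_i * (1 - f) + eps_m * (2 + f))

    # Illustrative values: a metal-like inclusion (negative permittivity)
    # in a dielectric host with eps ~ 3 (roughly alumina-like)
    eps_metal, eps_diel = -10 + 0.5j, 3.0
    for f in np.linspace(0.05, 0.6, 12):
        eps_eff = maxwell_garnett(eps_metal, eps_diel, f)
        print(f"f = {f:.2f}  eps_eff = {eps_eff:.3f}")
    # An epsilon-near-zero (ENZ) composite corresponds to Re(eps_eff) ~ 0
    ```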

  15. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link each Basic Event to its data source. The best way to organize large amounts of data on a computer is with a database, but a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data is used in the model, including stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
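
    A minimal sketch of the metadata-key linkage the abstract describes, with hypothetical event names, keys, and failure rates, might look like this in code form:

    ```python
    # Hypothetical data sources, keyed by a unique metadata field
    data_sources = {
        "DS-042": {"source": "handbook value", "failure_rate": 2.1e-6},
        "DS-107": {"source": "test experience, Bayesian update",
                   "failure_rate": 8.3e-7},
    }

    # Basic Events in the model carry the same key plus local manipulations
    basic_events = [
        {"id": "BE-PUMP-FTR",  "metadata_key": "DS-042", "stress_factor": 1.4},
        {"id": "BE-VALVE-FTO", "metadata_key": "DS-107", "stress_factor": 1.0},
    ]

    for be in basic_events:
        ds = data_sources[be["metadata_key"]]
        rate = ds["failure_rate"] * be["stress_factor"]   # apply use stressor
        print(f"{be['id']} -> {ds['source']}: {rate:.2e}/h")
    ```

    The same join is what a lookup on the metadata column accomplishes in an Excel-based database.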

  16. On the feasibility of quantitative ultrasonic determination of fracture toughness: A literature review

    NASA Technical Reports Server (NTRS)

    Fu, L. S.

    1980-01-01

    The three main topics covered are: (1) fracture toughness and microstructure; (2) quantitative ultrasonics and microstructure; and (3) scattering and related mathematical methods. Literature in these areas is reviewed to give insight into the search for a theoretical foundation for the quantitative ultrasonic measurement of fracture toughness. The literature review shows that fracture toughness is inherently related to the microstructure; in particular, it depends upon the spacing of inclusions or second-phase particles and the aspect ratio of second-phase particles. There are indications that ultrasonic velocity and attenuation measurements can be used to determine fracture toughness. This leads to a review of the mathematical models available for solving boundary value problems related to the microstructural factors that govern fracture toughness and wave motion. A framework for the theoretical study of the quantitative determination of fracture toughness is described, and suggestions for future research are proposed.

  17. A theoretical model of job retention for home health care nurses.

    PubMed

    Ellenbecker, Carol Hall

    2004-08-01

    Predicted severe nursing shortages and an increasing demand for home health care services have made the retention of experienced, qualified nursing staff a priority for health care organizations. The purpose of this paper is to describe a theoretical model of job retention for home health care nurses. The theoretical model is an integration of the findings of empirical research related to intent to stay and retention, components of Neal's theory of home health care nursing practice, and findings from earlier work to develop an instrument to measure home health care nurses' job satisfaction. The theoretical model identifies antecedents to the job satisfaction of home health care nurses. The antecedents are intrinsic and extrinsic job characteristics. The model also proposes that job satisfaction is directly related to retention and indirectly related to retention through intent to stay. Individual nurse characteristics are indirectly related to retention through intent to stay. The individual characteristic of tenure is indirectly related to retention through autonomy, as an intrinsic characteristic of job satisfaction, and intent to stay. The proposed model can be used to guide research that explores gaps in knowledge about intent to stay and retention among home health care nurses.

  18. Quantitative genetic models of sexual conflict based on interacting phenotypes.

    PubMed

    Moore, Allen J; Pizzari, Tommaso

    2005-05-01

    Evolutionary conflict arises between reproductive partners when alternative reproductive opportunities are available. Sexual conflict can generate sexually antagonistic selection, which mediates sexual selection and intersexual coevolution. However, despite intense interest, the evolutionary implications of sexual conflict remain unresolved. We propose a novel theoretical approach to study the evolution of sexually antagonistic phenotypes based on quantitative genetics and the measurement of social selection arising from male-female interactions. We consider the phenotype of one sex both as a genetically influenced evolving trait and as the (evolving) social environment in which the phenotype of the opposite sex evolves. Several important points emerge from our analysis, including the relationship between direct selection on one sex and indirect effects through selection on the opposite sex. We suggest that the proposed approach may be a valuable tool to complement other theoretical approaches currently used to study sexual conflict. Most importantly, our approach highlights areas where additional empirical data can help clarify the role of sexual conflict in the evolutionary process.

  19. Establishment and validation for the theoretical model of the vehicle airbag

    NASA Astrophysics Data System (ADS)

    Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao

    2015-05-01

    The current design and optimization of the occupant restraint system (ORS) are based on numerous actual tests and mathematical simulations. These two methods are effective and accurate but overly time-consuming and complex for the concept design phase of the ORS, so a fast and direct design and optimization method is needed in that phase. Since the airbag system is a crucial part of the ORS, in this paper a theoretical model of the vehicle airbag is established in order to clarify the interaction between occupants and airbags, and a fast design and optimization method for airbags in the concept design phase is then built on the proposed theoretical model. First, the theoretical expression of the simplified mechanical relationship between the airbag's design parameters and the occupant response is developed from classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to describe the relationship between the airbag's design parameters and the occupant response. Using MATLAB, an iterative algorithm over discretized variables is applied to solve the proposed theoretical model for random inputs within a certain scope. Validation against MADYMO simulations confirms the validity and accuracy of the theoretical model for two principal design parameters, the inflated gas mass and the vent diameter, within their regular ranges. This research contributes to a deeper understanding of the occupant-airbag interaction, yields a fast design and optimization method for the airbag's principal parameters in the concept design phase, and provides ranges of initial airbag design parameters for the subsequent CAE simulations and actual tests.
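
    To illustrate the two relations the abstract builds on, here is a back-of-the-envelope sketch combining the ideal gas law and the momentum theorem; every numerical value is an assumption chosen for illustration, not a parameter from the paper:

    ```python
    R, T = 287.0, 400.0    # specific gas constant of air (J/kg/K), gas temp (K)
    V = 0.06               # deployed bag volume, m^3 (assumed)
    m_gas = 0.08           # inflated gas mass, kg -- a principal design parameter

    P = m_gas * R * T / V              # ideal gas law: absolute bag pressure, Pa
    P_gauge = P - 101_325.0            # pressure above ambient
    A_contact = 0.09                   # occupant contact area, m^2 (assumed)
    F = P_gauge * A_contact            # restraining force on the occupant, N

    # Momentum theorem: F * t = m_occ * dv -> time to stop a 75 kg occupant
    m_occ, dv = 75.0, 30 / 3.6         # mass (kg) and speed change (30 km/h)
    print(f"force {F/1000:.1f} kN, stopping time {m_occ * dv / F * 1000:.0f} ms")
    ```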

  20. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole down to an individual enclosure or workstation. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  1. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  2. Simple theoretical models for composite rotor blades

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

    The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  3. Theoretical foundations for a quantitative approach to paleogenetics. I, II.

    NASA Technical Reports Server (NTRS)

    Holmquist, R.

    1972-01-01

    It is shown that neglecting the phenomena of multiple hits, back mutation, and chance coincidence can introduce errors larger than 100% in the calculated value of the average number of nucleotide base differences to be expected between two homologous polynucleotides. Mathematical formulas are derived to correct quantitatively for these effects. It is pointed out that the effects materially change the quantitative aspects of phylogenies, such as the branch lengths of the trees. A number of problems are solved without approximation.
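
    The abstract does not reproduce the paper's formulas, but the flavor of such a correction can be shown with the standard Jukes-Cantor expression relating an observed proportion of differing sites to the corrected number of substitutions per site; this is a textbook illustration, not Holmquist's own derivation:

    ```python
    import math

    def jukes_cantor_distance(p):
        """Correct an observed fraction p of differing nucleotide sites for
        multiple hits and back mutation (Jukes-Cantor one-parameter model)."""
        if p >= 0.75:
            raise ValueError("observed difference too large to correct")
        return -0.75 * math.log(1 - 4 * p / 3)

    for p in (0.05, 0.20, 0.45):
        print(f"observed {p:.2f} -> corrected {jukes_cantor_distance(p):.3f}")
    ```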

  4. Theoretical modeling of critical temperature increase in metamaterial superconductors

    NASA Astrophysics Data System (ADS)

    Smolyaninov, Igor; Smolyaninova, Vera

    Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modelling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modelling and experimental results in both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2-, and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach 250 K. This work was supported in part by NSF Grant DMR-1104676 and the School of Emerging Technologies at Towson University.

  5. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computations performed therewith.
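
    A minimal sketch of Dempster's rule of combination over a two-hypothesis diagnostic frame follows; the mass assignments are illustrative, not values produced by the parity-equation models:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments (dicts mapping
        frozenset hypotheses -> mass) by Dempster's rule of combination."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                 # mass on empty intersection
        if conflict >= 1.0:
            raise ValueError("total conflict: sources are incompatible")
        return {h: w / (1.0 - conflict) for h, w in combined.items()}

    FAULT, OK = frozenset({"fault"}), frozenset({"ok"})
    EITHER = FAULT | OK                             # ignorance: whole frame

    model_a = {FAULT: 0.6, EITHER: 0.4}             # evidence from model A
    model_b = {FAULT: 0.5, OK: 0.2, EITHER: 0.3}    # evidence from model B
    print(dempster_combine(model_a, model_b))
    ```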

  6. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses

    ERIC Educational Resources Information Center

    Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis

    2016-01-01

    The current study focuses on creating a theoretical model of the development of information competence among students enrolled in elective courses. In order to examine specific features of this theoretical model, we performed an analysis of…

  7. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    PubMed

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for the placement of study variables, but moving from the very abstract concepts of a nursing model to the less abstract concepts of the study variables is difficult. Just as writing teachers use the five-paragraph essay to help beginning writers construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. The guide can be used with any nursing conceptual model; Neuman's model was chosen here as the exemplar.

  8. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers

    PubMed Central

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-01

    The Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of an OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed, and tested. The results reveal that absorption of more than 90% can be achieved across the whole 6-18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686

  9. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers.

    PubMed

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-11

    The Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of an OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed, and tested. The results reveal that absorption of more than 90% can be achieved across the whole 6-18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts.

  10. Vaporization dynamics of volatile perfluorocarbon droplets: A theoretical model and in vitro validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doinikov, Alexander A., E-mail: doinikov@bsu.by; Bouakaz, Ayache; Sheeran, Paul S.

    2014-10-15

    Purpose: Perfluorocarbon (PFC) microdroplets, called phase-change contrast agents (PCCAs), are a promising tool in ultrasound imaging and therapy. Interest in PCCAs is motivated by the fact that they can be triggered to transition from the liquid state to the gas state by an externally applied acoustic pulse. This property opens up new approaches to applications in ultrasound medicine. Insight into the physics of vaporization of PFC droplets is vital for effective use of PCCAs and for anticipating bioeffects. PCCAs composed of volatile PFCs (with low boiling point) exhibit complex dynamic behavior: after vaporization by a short acoustic pulse, a PFC droplet turns into a vapor bubble which undergoes overexpansion and damped radial oscillation until settling to a final diameter. This behavior has not been well described theoretically so far. The purpose of our study is to develop an improved theoretical model that describes the vaporization dynamics of volatile PFC droplets and to validate this model by comparison with in vitro experimental data. Methods: The derivation of the model is based on applying the mathematical methods of fluid dynamics and thermodynamics to the process of the acoustic vaporization of PFC droplets. The approach used corrects shortcomings of the existing models. The validation of the model is carried out by comparing simulated results with in vitro experimental data acquired by ultrahigh-speed video microscopy for octafluoropropane (OFP) and decafluorobutane (DFB) microdroplets of different sizes. Results: The developed theory allows one to simulate the growth of a vapor bubble inside a PFC droplet until the liquid PFC is completely converted into vapor, and the subsequent overexpansion and damped oscillations of the vapor bubble, including the influence of an externally applied acoustic pulse. To evaluate quantitatively the difference between simulated and experimental results, the L2-norm errors were calculated for all cases where

  11. College Students Solving Chemistry Problems: A Theoretical Model of Expertise

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Glynn, Shawn M.

    2009-01-01

    A model of expertise in chemistry problem solving was tested on undergraduate science majors enrolled in a chemistry course. The model was based on Anderson's "Adaptive Control of Thought-Rational" (ACT-R) theory. The model shows how conceptualization, self-efficacy, and strategy interact and contribute to the successful solution of quantitative,…

  12. Self-Assembled Magnetic Surface Swimmers: Theoretical Model

    NASA Astrophysics Data System (ADS)

    Aranson, Igor; Belkin, Maxim; Snezhko, Alexey

    2009-03-01

    The mechanisms of self-propulsion of living microorganisms are a fascinating phenomenon attracting enormous attention in the physics community. A new type of self-assembled micro-swimmer, the magnetic snake, is an excellent tool for modeling locomotion in a simple table-top experiment. The snakes self-assemble from a dispersion of magnetic microparticles suspended on the liquid-air interface and subjected to an alternating magnetic field. Formation and dynamics of these swimmers are captured in the framework of a theoretical model coupling a paradigm equation for the amplitude of the surface waves, a conservation law for the density of particles, and the Navier-Stokes equation for the hydrodynamic flows. The results of continuum modeling are supported by hybrid molecular dynamics simulations of magnetic particles floating on the surface of a fluid.

  13. Spectrum analysis of radar life signal in the three kinds of theoretical models

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Ma, J. F.; Wang, D.

    2017-02-01

    In a single-frequency continuous-wave radar life-detection system based on the Doppler effect, the theoretical model of the radar life signal is conventionally expressed as a real function, and it predicts a phenomenon that cannot be confirmed by experiment: when the phase generated by the distance between the measured object and the radar measuring head is an integer multiple of π, the main frequency spectrum of the life signal (respiration and heartbeat) is absent from the radar life signal, whereas when this phase is an odd multiple of π/2, the main frequency spectrum of the respiration and heartbeat frequencies is strongest. In this paper, we take the Doppler effect as the basic theory and use three different mathematical expressions, a real function, a complex exponential function, and a Bessel-function expansion, to establish theoretical models of the radar life signal. Simulation analysis reveals that the Bessel-expansion model solves the problem of the real-function form. Compared with the complex-exponential model, the number of derived spectral lines is greatly reduced in the Bessel-expansion model, which is more consistent with the actual situation.

  14. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    PubMed

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins, but the description is only qualitative under biologically relevant conditions because the mean-field models utilized neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on the analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  15. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
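
    A minimal sketch of the composite-score step, forming a single quantitative component from three correlated measures with principal components analysis, follows; the data are synthetic and the shared-variance structure is an assumption for illustration:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    n = 141  # sample size reported in the abstract

    # Synthetic subitizing, counting, and magnitude-comparison scores
    scores = rng.normal(size=(n, 3))
    scores[:, 1] += 0.6 * scores[:, 0]   # induce shared variance
    scores[:, 2] += 0.6 * scores[:, 0]

    X = StandardScaler().fit_transform(scores)
    pca = PCA(n_components=1)
    quantitative_pathway = pca.fit_transform(X).ravel()
    print("variance explained:", round(float(pca.explained_variance_ratio_[0]), 2))
    ```

    The resulting component score can then enter a regression on the numerical outcomes alongside linguistic and working memory predictors.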

  16. Healing from Childhood Sexual Abuse: A Theoretical Model

    ERIC Educational Resources Information Center

    Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner

    2011-01-01

    Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…

  17. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both in the past and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  18. Mathematical modelling and quantitative methods.

    PubMed

    Edler, L; Poirier, K; Dourson, M; Kleiner, J; Mileson, B; Nordmann, H; Renwick, A; Slob, W; Walton, K; Würtzen, G

    2002-01-01

    The present review reports on the mathematical methods and statistical techniques presently available for hazard characterisation. The state of the art of mathematical modelling and quantitative methods used currently for regulatory decision-making in Europe and additional potential methods for risk assessment of chemicals in food and diet are described. Existing practices of JECFA, FDA, EPA, etc., are examined for their similarities and differences. A framework is established for the development of new and improved quantitative methodologies. Areas for refinement, improvement and increase of efficiency of each method are identified in a gap analysis. Based on this critical evaluation, needs for future research are defined. It is concluded from our work that mathematical modelling of the dose-response relationship would improve the risk assessment process. An adequate characterisation of the dose-response relationship by mathematical modelling clearly requires the use of a sufficient number of dose groups to achieve a range of different response levels. This need not necessarily lead to an increase in the total number of animals in the study if an appropriate design is used. Chemical-specific data relating to the mode or mechanism of action and/or the toxicokinetics of the chemical should be used for dose-response characterisation whenever possible. It is concluded that a single method of hazard characterisation would not be suitable for all kinds of risk assessments, and that a range of different approaches is necessary so that the method used is the most appropriate for the data available and for the risk characterisation issue. Future refinements to dose-response characterisation should incorporate more clearly the extent of uncertainty and variability in the resulting output.
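
    As one concrete instance of the dose-response modelling the review advocates, a sigmoidal (Hill) curve can be fitted across several dose groups; the model choice and the data below are illustrative assumptions, not a method prescribed by the review:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, emax, ed50, n):
        """Hill (sigmoidal) dose-response curve."""
        return emax * dose**n / (ed50**n + dose**n)

    # Hypothetical response fractions observed at six dose groups
    dose = np.array([0.5, 1, 2, 4, 8, 16])
    resp = np.array([0.04, 0.11, 0.26, 0.50, 0.74, 0.88])
    params, _ = curve_fit(hill, dose, resp, p0=[1.0, 4.0, 1.5])
    print("Emax, ED50, n =", np.round(params, 3))
    ```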

  19. Application of information-theoretic measures to quantitative analysis of immunofluorescent microscope imaging.

    PubMed

    Shutin, Dmitriy; Zlobinskaya, Olga

    2010-02-01

    The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
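
    For the Gaussian case mentioned above, the closed forms are short enough to state in code; the intensity parameters below are hypothetical:

    ```python
    import math

    def hellinger_gaussian(mu1, s1, mu2, s2):
        """Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)."""
        bc = math.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
             math.exp(-((mu1 - mu2) ** 2) / (4 * (s1**2 + s2**2)))
        return math.sqrt(1 - bc)

    def kl_gaussian(mu1, s1, mu2, s2):
        """KL divergence D( N(mu1,s1^2) || N(mu2,s2^2) )."""
        return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

    # Two hypothetical fluorescence-intensity distributions
    print(hellinger_gaussian(100.0, 15.0, 130.0, 20.0))
    print(kl_gaussian(100.0, 15.0, 130.0, 20.0))
    ```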

  20. Towards a theoretical model on medicines as a health need.

    PubMed

    Vargas-Peláez, Claudia Marcela; Soares, Luciano; Rover, Marina Raijche Mattozo; Blatt, Carine Raquel; Mantel-Teeuwisse, Aukje; Rossi Buenaventura, Francisco Augusto; Restrepo, Luis Guillermo; Latorre, María Cristina; López, José Julián; Bürgin, María Teresa; Silva, Consuelo; Leite, Silvana Nair; Mareni Rocha, Farias

    2017-04-01

    Medicines are considered one of the main tools of western medicine for resolving health problems, and they currently represent an important share of countries' healthcare budgets. In the Latin American region, access to essential medicines is still a challenge, although countries have established some measures in recent years to guarantee equitable access to medicines. A theoretical model is proposed for analysing the social, political, and economic factors that modulate the role of medicines as a health need and their influence on the accessibility of, and access to, medicines. The model was built on a narrative review about health needs and followed the conceptual modelling methodology for theory-building. It considers the elements (stakeholders, policies) that modulate the perception of medicines as a health need from two perspectives, health and market, at three levels: international, national, and local. The perception of medicines as a health need is described according to Bradshaw's categories: felt need, normative need, comparative need, and expressed need. When these categories coincide for a medicine, patients get access to the medicines they perceive as a need; when the categories do not coincide, barriers to access to medicines are created. Our theoretical model, which holds a broader view of access to medicines, emphasises how the power structures, interests, interdependencies, values, and principles of the stakeholders can influence the perception of medicines as a health need and the access to medicines in Latin American countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A Theoretical Model for the Practice of Residential Treatment.

    ERIC Educational Resources Information Center

    Miskimins, R. W.

    1990-01-01

    Presents theoretical model describing practice of psychiatric residential treatment for children and adolescents. Emphasis is on 40 practice principles, guiding concepts which dictate specific treatment techniques and administrative procedures for Southern Oregon Adolescent Study and Treatment Center. Groups principles into six clusters: program…

  2. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    PubMed

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, is not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure the difference between a measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based studies and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Multinational Corporations, Democracy and Child Mortality: A Quantitative, Cross-National Analysis of Developing Countries

    ERIC Educational Resources Information Center

    Shandra, John M.; Nobles, Jenna E.; London, Bruce; Williamson, John B.

    2005-01-01

    This study presents quantitative, sociological models designed to account for cross-national variation in child mortality. We consider variables linked to five different theoretical perspectives that include the economic modernization, social modernization, political modernization, ecological-evolutionary, and dependency perspectives. The study is…

  4. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    PubMed

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
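
    The difference between the two calculation models reduces to which residues are counted; a minimal sketch, with a hypothetical sequence, is:

    ```python
    def dye_binding_residues(seq, model="M2"):
        """Count residues assumed to bind Coomassie G-250 under each model:
        M1 counts Arg and Lys; M2 also counts His (1-letter codes)."""
        residues = {"M1": "RK", "M2": "RKH"}[model]
        return sum(seq.count(r) for r in residues)

    seq = "MKTAYIAKQRHISLRKHGLDNYRGYSLGN"   # hypothetical fragment
    print("M1 (Arg+Lys):     ", dye_binding_residues(seq, "M1"))
    print("M2 (Arg+Lys+His): ", dye_binding_residues(seq, "M2"))
    ```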

  5. Theoretical models for supercritical fluid extraction.

    PubMed

    Huang, Zhen; Shi, Xiao-Han; Jiang, Wei-Juan

    2012-08-10

    For the proper design of supercritical fluid extraction processes, it is essential to have a sound knowledge of the mass transfer mechanism of the extraction process and its appropriate mathematical representation. In this paper, the advances and applications of kinetic models for describing supercritical fluid extraction from various solid matrices are presented. The theoretical models overviewed here include the hot ball diffusion, broken and intact cell, shrinking core, and some relatively simple models. Mathematical representations of these models are interpreted in detail, along with their assumptions, parameter identifications, and application examples. Extraction of an analyte solute from a solid matrix by means of a supercritical fluid includes the dissolution of the analyte from the solid, diffusion of the analyte in the matrix, and its transport to the bulk supercritical fluid. Mechanisms involved in a mass transfer model are discussed in terms of external mass transfer resistance, internal mass transfer resistance, solute-solid interactions, and axial dispersion. The correlations of the external mass transfer coefficient and the axial dispersion coefficient with certain dimensionless numbers are also discussed. Among these models, the broken and intact cell model seems to be the most relevant mathematical model, as it provides a realistic description of the plant material structure for better understanding the mass-transfer kinetics, and it has therefore been widely employed for modeling supercritical fluid extraction of natural materials. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Culture and Developmental Trajectories: A Discussion on Contemporary Theoretical Models

    ERIC Educational Resources Information Center

    de Carvalho, Rafael Vera Cruz; Seidl-de-Moura, Maria Lucia; Martins, Gabriela Dal Forno; Vieira, Mauro Luís

    2014-01-01

    This paper aims to describe, compare and discuss the theoretical models proposed by Patricia Greenfield, Çigdem Kagitçibasi and Heidi Keller. Their models have the common goal of understanding the developmental trajectories of self based on dimensions of autonomy and relatedness that are structured according to specific cultural and environmental…

  7. A theoretical model of water and trade

    NASA Astrophysics Data System (ADS)

    Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie

    2016-03-01

    Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second tradeoff examines the possibility that certain types of agricultural investments can offset water use. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth tradeoff concerns how variability in water supplies influences farmer decision-making. We show conditions under which trade liberalization affects water use. Two policy scenarios to reduce water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water-use decision-making. Finally, we outline conditions under which the riskiness of water availability affects water use. These theoretical model results generate hypotheses that can be tested empirically in future work.

  8. Category-theoretic models of algebraic computer systems

    NASA Astrophysics Data System (ADS)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems is proposed that provides a theoretical basis for mapping tasks onto these systems' architecture. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring, whose operations are implemented in conventional arithmetic processors, to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  9. Theoretical modeling and experimental analysis of solar still integrated with evacuated tubes

    NASA Astrophysics Data System (ADS)

    Panchal, Hitesh; Awasthi, Anuradha

    2017-06-01

    In this research work, theoretical modeling of a single-slope, single-basin solar still integrated with evacuated tubes has been performed based on energy balance equations. Major variables such as the water temperature, the inner glass cover temperature, and the distillate output have been computed from the theoretical model. The experimental setup was built from locally available materials and installed at Gujarat Power Engineering and Research Institute, Mehsana, Gujarat, India (23.5880°N, 72.3693°E) with a 0.04 m water depth over a six-month interval. The series of experiments showed a considerable increase in the average distillate output of the solar still when integrated with evacuated tubes, not only during the daytime but also at night. In all experimental cases, the coefficient of correlation (r) and the root-mean-square percentage deviation (e) between the theoretical model and the experimental study showed good agreement, with 0.97 < r < 0.98 and 10.22% < e < 38.4%, respectively.
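
    The two agreement statistics reported above are straightforward to compute; a minimal sketch with hypothetical hourly outputs follows (the percentage-deviation convention varies between papers, so the normalization here is an assumption):

    ```python
    import numpy as np

    def agreement(theory, experiment):
        """Coefficient of correlation r and root-mean-square
        percentage deviation e between two series."""
        theory, experiment = np.asarray(theory), np.asarray(experiment)
        r = np.corrcoef(theory, experiment)[0, 1]
        e = np.sqrt(np.mean(((theory - experiment) / theory * 100) ** 2))
        return r, e

    theory = [0.10, 0.22, 0.35, 0.41, 0.33, 0.19]      # L/m^2, hypothetical
    experiment = [0.11, 0.20, 0.31, 0.44, 0.30, 0.21]
    r, e = agreement(theory, experiment)
    print(f"r = {r:.3f}, e = {e:.1f}%")
    ```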

  10. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    NASA Astrophysics Data System (ADS)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition); the framework provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium, which is used to develop the model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.
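
    The turbid-medium analogy can be made concrete with a Beer-Lambert-type interception probability along a particle trajectory segment; the symbols and values below are assumptions made here for illustration, not the paper's notation:

    ```python
    import math

    def encounter_probability(G, leaf_area_density, path_length):
        """Probability that a particle traversing path_length (m) through
        foliage of given leaf area density (m^2/m^3) encounters a canopy
        element; G is a dimensionless projection coefficient."""
        return 1.0 - math.exp(-G * leaf_area_density * path_length)

    print(encounter_probability(G=0.5, leaf_area_density=1.5, path_length=0.2))
    ```

    An encountered particle then deposits with some efficiency supplied by a sub-model, which is where imperfect deposition enters the framework.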

  11. A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…

  12. Theoretical study of gas hydrate decomposition kinetics--model development.

    PubMed

    Windmeier, Christoph; Oellrich, Lothar R

    2013-10-10

    In order to provide an estimate of the order of magnitude of intrinsic gas hydrate dissolution and dissociation kinetics, the "Consecutive Desorption and Melting Model" (CDM) is developed by applying only theoretical considerations. The process of gas hydrate decomposition is assumed to comprise two consecutive and repetitive quasi-chemical reaction steps: desorption of the guest molecule, followed by local solid-body melting. The individual kinetic steps are modeled according to the "Statistical Rate Theory of Interfacial Transport" and the Wilson-Frenkel approach. All required model parameters that would otherwise be missing are linked directly to geometric considerations and a thermodynamic gas hydrate equilibrium model.

  13. A Game Theoretic Model of Thermonuclear Cyberwar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soper, Braden C.

    In this paper we propose a formal game-theoretic model of thermonuclear cyberwar based on ideas found in [1] and [2]. Our intention is that such a game will act as a first step toward building more complete formal models of Cross-Domain Deterrence (CDD). We believe the proposed thermonuclear cyberwar game is an ideal place to start on such an endeavor because the game can be fashioned in a way that is closely related to the classical models of nuclear deterrence [4–6], but with obvious modifications that help to elucidate the complexities introduced by a second domain. We start with the classical bimatrix nuclear deterrence game based on the game of chicken, but introduce uncertainty via a left-of-launch cyber capability that one or both players may possess.
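
    The classical starting point can be written down directly; a minimal sketch of a chicken-type bimatrix game and its pure-strategy Nash equilibria follows, with illustrative payoffs (the left-of-launch cyber uncertainty layered on top in the paper is not modeled here):

    ```python
    import numpy as np

    # Game of chicken: strategies 0 = back down, 1 = escalate
    A = np.array([[3, 1],    # row player's payoffs
                  [4, 0]])
    B = A.T                  # symmetric game: column player's payoffs

    def pure_nash(A, B):
        """Enumerate pure-strategy Nash equilibria of a bimatrix game."""
        return [(i, j)
                for i in range(A.shape[0]) for j in range(A.shape[1])
                if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

    print(pure_nash(A, B))   # [(0, 1), (1, 0)]: exactly one side backs down
    ```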

  14. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    PubMed

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article aims to describe a new theoretical model that simplifies and aids visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists, but it is an area lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack visual aids to assist clinicians. The concept of the proposed model is that new stimuli, either intrinsic or extrinsic to the participant, can be added or exchanged with other stimuli in order to gradually progress an exercise while remaining safe and effective. The proposed model supports the core skills of physiotherapists by assisting clinical reasoning, exercise prescription, and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by clinicians of any skill level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Theoretical models for application in school health education research.

    PubMed

    Parcel, G S

    1984-01-01

    Theoretical models that may be useful for research studies in school health education are reviewed. Selected, well-defined theories include social learning theory, problem-behavior theory, the theory of reasoned action, communications theory, coping theory, social competence, and social and family theories. Also reviewed are multiple-theory models, including models of health-related behavior, the PRECEDE Framework, social-psychological approaches, and the Activated Health Education Model. Two major reviews of teaching models are also discussed. The paper concludes with a brief outline of the general applications of theory to the field of school health education, including applications to basic research, the development and design of interventions, program evaluation, and program utilization.

  16. Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder S.; Kinney, Justin B.

    2016-03-01

    A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
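
    To make the mutual-information point above concrete, here is a hedged sketch (synthetic data; binary sequence features and an assumed monotonic readout, neither taken from the paper) showing that mutual information between model predictions and measurements is invariant under monotonic reparameterizations of the model output, which is exactly the freedom that gives rise to diffeomorphic modes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "massively parallel" data: binary sequence features x, with the
# measured activity an unknown monotonic function of x @ theta plus noise.
n, L = 20000, 8
x = rng.integers(0, 2, size=(n, L)).astype(float)
theta_true = rng.normal(size=L)
y = np.tanh(x @ theta_true) + 0.1 * rng.normal(size=n)   # measurements

def mutual_information(u, v, bins=30):
    """Plug-in MI estimate (nats) from a 2-D histogram of rank-transformed
    data; rank transforming makes the estimate exactly invariant under
    strictly monotonic reparameterizations of either argument."""
    u = np.argsort(np.argsort(u))
    v = np.argsort(np.argsort(v))
    pxy, _, _ = np.histogram2d(u, v, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

pred = x @ theta_true   # linear model prediction before the nonlinearity

# MI is unchanged by any monotonic transform of the prediction, so parameter
# directions that only reparameterize the output ("diffeomorphic modes") are
# left unconstrained by an MI objective.
for f in (lambda z: z, np.tanh, lambda z: z**3 + 2.0 * z):
    print(round(mutual_information(f(pred), y), 3))
# The three printed values are identical.
```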

  17. Reality-Theoretical Models-Mathematics: A Ternary Perspective on Physics Lessons in Upper-Secondary School

    ERIC Educational Resources Information Center

    Hansson, Lena; Hansson, Örjan; Juter, Kristina; Redfors, Andreas

    2015-01-01

    This article discusses the role of mathematics during physics lessons in upper-secondary school. Mathematics is an inherent part of theoretical models in physics and makes powerful predictions of natural phenomena possible. Ability to use both theoretical models and mathematics is central in physics. This paper takes as a starting point that the…

  18. Quantitative genetic methods depending on the nature of the phenotypic trait.

    PubMed

    de Villemereuil, Pierre

    2018-01-24

    A consequence of the assumptions of the infinitesimal model, one of the most important theoretical foundations of quantitative genetics, is that phenotypic traits are predicted to be most often normally distributed (so-called Gaussian traits). But phenotypic traits, especially those of interest in evolutionary biology, may follow very diverse distributions. Here, I show how quantitative genetics tools have been extended to account for a wider diversity of phenotypic traits, first using the threshold model and more recently using generalized linear mixed models. I explore the assumptions behind these models and how they can be used to study the genetics of non-Gaussian complex traits. I also comment on three recent methodological advances in quantitative genetics that widen our ability to study new kinds of traits: the use of "modular" hierarchical modeling (e.g., to study survival in the context of capture-recapture approaches for wild populations); the use of aster models to study a set of traits with conditional relationships (e.g., life-history traits); and, finally, the study of high-dimensional traits, such as gene expression. © 2018 New York Academy of Sciences.
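
    As a pointer to what these extensions look like formally, a generalized linear mixed model for a non-Gaussian trait can be written as below (generic notation, not the paper's; the threshold model is essentially the special case of a probit link on a binary trait):

```latex
% GLMM skeleton for a non-Gaussian phenotype y: a link function g connects
% the expected trait value to fixed effects (X\beta) and additive genetic
% random effects a ~ N(0, \sigma_A^2 A), with A the relatedness matrix.
g\!\left( \mathbb{E}[\,y \mid a\,] \right) = X\beta + Z a,
\qquad a \sim \mathcal{N}\!\left(0, \sigma_A^2 A\right)
```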

  19. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system, which we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
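
    A minimal sketch of the general idea of an asynchronous adaptive time step follows (the scheduling scheme, error rule, and toy dynamics are illustrative assumptions, not the paper's algorithm): each cell integrates its own ODE with a locally chosen step, and only the cell with the earliest pending update is advanced.

```python
import heapq
import math

# Each "cell" integrates dy/dt = -k*y with its own adaptive step (illustrative
# dynamics; the paper's cell models are more general). A priority queue always
# advances the cell with the earliest pending update time -- asynchronous
# stepping, so fast cells take many small steps and slow cells few large ones.
def simulate(ks, t_end=5.0, tol=1e-4):
    cells = [{"y": 1.0, "t": 0.0, "k": k} for k in ks]
    heap = [(0.0, i) for i in range(len(cells))]
    heapq.heapify(heap)
    while heap:
        t, i = heapq.heappop(heap)
        if t >= t_end - 1e-12:
            continue                          # this cell is done
        c = cells[i]
        # Local step-size heuristic: explicit Euler's leading error term for
        # dy/dt = -k*y is ~ 0.5 * h^2 * k^2 * |y|; solve for h at tolerance.
        h = math.sqrt(2.0 * tol / max(c["k"] ** 2 * abs(c["y"]), 1e-12))
        h = min(h, 0.5 / c["k"], t_end - t)   # stability and end-time caps
        c["y"] += h * (-c["k"] * c["y"])      # explicit Euler update
        c["t"] = t + h
        heapq.heappush(heap, (c["t"], i))
    return cells

for c in simulate(ks=[0.1, 1.0, 10.0]):
    print(f"k={c['k']:<4} y(t_end)={c['y']:.4f} exact={math.exp(-c['k'] * 5.0):.4f}")
```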

  20. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  1. Decision support models for solid waste management: Review and game-theoretic approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr; Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence; Aravossis, Konstantinos

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.

  2. Toward a Theoretical Model of Employee Turnover: A Human Resource Development Perspective

    ERIC Educational Resources Information Center

    Peterson, Shari L.

    2004-01-01

    This article sets forth the Organizational Model of Employee Persistence, influenced by traditional turnover models and a student attrition model. The model was developed to clarify the impact of organizational practices on employee turnover from a human resource development (HRD) perspective and provide a theoretical foundation for research on…

  3. A comparative study of theoretical graph models for characterizing structural networks of human brain.

    PubMed

    Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang

    2013-01-01

    Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. First, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are compared with those of seven popular theoretical graph models. In addition, we compare the topological properties of the two graph models, namely the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that show the highest similarity to the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform best in characterizing the structural human brain network.
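
    A hedged sketch of this comparison workflow follows (STICKY and SF-GD generators are not available in standard libraries, so three widely available theoretical graph models stand in; the brain-network data themselves are of course not reproduced): candidate models are generated and their global properties tabulated for comparison against a measured network.

```python
import networkx as nx

# Candidate theoretical graph models (stand-ins for illustration; the paper
# compares seven models, including STICKY and SF-GD, against DTI networks).
n, deg = 200, 10
candidates = {
    "Erdos-Renyi":     nx.gnp_random_graph(n, deg / (n - 1), seed=0),
    "Watts-Strogatz":  nx.watts_strogatz_graph(n, deg, 0.1, seed=0),
    "Barabasi-Albert": nx.barabasi_albert_graph(n, deg // 2, seed=0),
}

def global_properties(G):
    """Global graph properties: average clustering and characteristic path
    length (computed on the giant component in case G is disconnected)."""
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_clustering(G), nx.average_shortest_path_length(giant)

# In the paper, the same properties measured on real structural brain networks
# are compared against each candidate model; here we just tabulate the models.
for name, G in candidates.items():
    cc, spl = global_properties(G)
    print(f"{name:<16} clustering={cc:.3f} avg. path length={spl:.3f}")
```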

  4. Drift mobility of photo-electrons in organic molecular crystals: Quantitative comparison between theory and experiment

    NASA Astrophysics Data System (ADS)

    Reineker, P.; Kenkre, V. M.; Kühne, R.

    1981-08-01

    A quantitative comparison is given between a simple theoretical prediction for the drift mobility of photo-electrons in organic molecular crystals, calculated within the model of coupled band-like and hopping motion, and the experiments in naphthalene of Schein et al. and Karl et al.

  5. Quantitative trait nucleotide analysis using Bayesian model selection.

    PubMed

    Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D

    2005-10-01

    Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
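
    A hedged sketch of the general flavor of Bayesian model selection and averaging over candidate variants follows (using a BIC approximation to the model evidence rather than the paper's actual machinery, and synthetic data; SOLAR is the authors' program and is not reproduced here): each variant's posterior probability of effect is obtained by averaging over the models that include it.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 5 variants typed in 400 individuals; only variant 2 truly
# affects the quantitative trait. (Toy data, not the GAW12 simulation.)
n, p = 400, 5
G = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele counts
y = 0.5 * G[:, 2] + rng.normal(size=n)

def bic(X, y):
    """BIC of an ordinary least-squares fit with Gaussian residuals."""
    X1 = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(((y - X1 @ beta) ** 2).sum())
    return len(y) * np.log(rss / len(y)) + X1.shape[1] * np.log(len(y))

# Enumerate all 2^p variant subsets; approximate model evidence by exp(-BIC/2).
models = list(itertools.product([0, 1], repeat=p))
log_ev = np.array([-0.5 * bic(G[:, [j for j in range(p) if m[j]]], y)
                   for m in models])
weights = np.exp(log_ev - log_ev.max())
weights /= weights.sum()

# Posterior probability of effect for each variant = summed weight of the
# models that include it (Bayesian model averaging).
for j in range(p):
    post = sum(w for m, w in zip(models, weights) if m[j])
    print(f"variant {j}: P(effect) = {post:.3f}")
```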

  6. Nursing management of sensory overload in psychiatry – Theoretical densification and modification of the framework model

    PubMed

    Scheydt, Stefan; Needham, Ian; Behrens, Johann

    2017-01-01

    Background: Within the scope of a research project on sensory overload and stimulus regulation, a theoretical framework model of the nursing care of patients with sensory overload in psychiatry was developed. In a second step, this theoretical model was to be theoretically condensed and, if necessary, modified. Aim: Empirical verification as well as modification, enhancement and theoretical condensation of the framework model of the nursing care of patients with sensory overload in psychiatry. Method: Analysis of 8 expert interviews by summarizing and structuring content analysis methods based on Meuser and Nagel (2009) as well as Mayring (2010). Results: The developed framework model (Scheydt et al., 2016b) could be empirically verified, theoretically condensed and extended by one category (perception modulation). Thus, four categories of the nursing care of patients with sensory overload in inpatient psychiatry can be described: removal from stimuli, modulation of environmental factors, perception modulation, and support of self-help and coping. Conclusions: Based on the methodological approach, a relatively well-saturated, credible conceptualization of a theoretical model describing the nursing care of patients with sensory overload in inpatient psychiatry could be worked out. In further steps, these measures have to be further developed, implemented and evaluated with regard to their efficacy.

  7. Experimental and theoretical analysis of integrated circuit (IC) chips on flexible substrates subjected to bending

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Yuan, Jianghong; Zhang, Yingchao; Huang, Yonggang; Feng, Xue

    2017-10-01

    The interfacial failure of integrated circuit (IC) chips integrated on flexible substrates under bending deformation has been studied theoretically and experimentally. A compressive buckling test is used to impose the bending deformation onto the interface between the IC chip and the flexible substrate quantitatively, after which the failed interface is investigated using scanning electron microscopy. A theoretical model is established based on beam theory and a bi-layer interface model, from which an analytical expression for the critical curvature associated with interfacial failure is obtained. The relationships between the critical curvature, the material, and the geometric parameters of the device are discussed in detail, providing guidance for the future optimization of flexible circuits based on IC chips.

  8. Anticipatory Cognitive Systems: a Theoretical Model

    NASA Astrophysics Data System (ADS)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive-level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation on a macroscopic level is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.

  9. A theoretical model of co-worker responses to work reintegration processes.

    PubMed

    Dunstan, Debra A; Maceachen, Ellen

    2014-06-01

    Emerging research has shown that co-workers have a significant influence on the return-to-work outcomes of partially fit ill or injured employees. By drawing on theoretical findings from the human resource and wider behavioral sciences literatures, our goal was to formulate a theoretical model of the influences on and outcomes of co-worker responses within work reintegration. From a search of 15 databases covering the social sciences, business and medicine, we identified articles containing models of the factors that influence co-workers' responses to disability accommodations, and of the nature and impact of co-workers' behaviors on employee outcomes. To meet our goal, we combined the identified models to form a comprehensive model of the relevant factors and relationships. Internal consistency and external validity were assessed. The combined model illustrates four key findings: (1) co-workers' behaviors towards an accommodated employee are influenced by attributes of that employee, the illness or injury, the co-workers themselves, and the work environment; (2) the influences-behavior relationship is mediated by perceptions of the fairness of the accommodation; (3) co-workers' behaviors affect all work reintegration outcomes; and (4) co-workers' behaviors can vary from support to antagonism and are moderated by the type of support required, the social intensity of the job, and the level of antagonism. Theoretical models from the wider literature are useful for understanding the impact of co-workers on the work reintegration process. To achieve optimal outcomes, co-workers need to perceive the arrangements as fair. Perceptions of fairness might be supported by co-workers' collaborative engagement in the planning, monitoring and review of work reintegration activities.

  10. Lack of quantitative training among early-career ecologists: a survey of the problem and potential solutions

    PubMed Central

    Ezard, Thomas H.G.; Jørgensen, Peter S.; Zimmerman, Naupaka; Chamberlain, Scott; Salguero-Gómez, Roberto; Curran, Timothy J.; Poisot, Timothée

    2014-01-01

    Proficiency in mathematics and statistics is essential to modern ecological science, yet few studies have assessed the level of quantitative training received by ecologists. To do so, we conducted an online survey. The 937 respondents were mostly early-career scientists who studied biology as undergraduates. We found a clear self-perceived lack of quantitative training: 75% were not satisfied with their understanding of mathematical models; 75% felt that the level of mathematics was “too low” in their ecology classes; 90% wanted more mathematics classes for ecologists; and 95% more statistics classes. Respondents thought that 30% of classes in ecology-related degrees should be focused on quantitative disciplines, which is likely higher than for most existing programs. The main suggestion to improve quantitative training was to relate theoretical and statistical modeling to applied ecological problems. Improving quantitative training will require dedicated, quantitative classes for ecology-related degrees that contain good mathematical and statistical practice. PMID:24688862

  11. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    PubMed

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  12. The neural mediators of kindness-based meditation: a theoretical model

    PubMed Central

    Mascaro, Jennifer S.; Darcher, Alana; Negi, Lobsang T.; Raison, Charles L.

    2015-01-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work. PMID:25729374

  13. Performance Theories for Sentence Coding: Some Quantitative Models

    ERIC Educational Resources Information Center

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  14. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy

    PubMed Central

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-01-01

    Local surface charge density of lipid membranes influences membrane–protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values. PMID:27561322

  15. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy

    NASA Astrophysics Data System (ADS)

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-08-01

    Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.

  16. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy.

    PubMed

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-08-26

    Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.

  17. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
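
    A hedged sketch of the general approach described above follows (synthetic voxel data and a logistic-regression combiner are assumptions for illustration; the paper's actual predictive models and registered histopathology labels are not reproduced): several quantitative MR parameters are combined into a single voxel-wise composite biomarker score and compared against the best single parameter by AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic voxels: columns loosely mimic T2, ADC, Ktrans, kep (arbitrary units).
n = 5000
labels = rng.integers(0, 2, size=n)                  # 1 = cancer voxel (toy labels)
X = rng.normal(size=(n, 4))
X[labels == 1] += np.array([-0.3, -0.8, 0.5, 0.4])   # ADC most informative here

# Single best parameter (ADC, column 1) vs. a multiparametric composite score.
auc_adc = roc_auc_score(labels, -X[:, 1])            # lower ADC -> more suspicious
model = LogisticRegression().fit(X, labels)
cbs = model.predict_proba(X)[:, 1]                   # voxel-wise composite score
auc_cbs = roc_auc_score(labels, cbs)

# In practice one would evaluate on held-out data and bootstrap confidence
# intervals, as the authors do; this in-sample comparison is only illustrative.
print(f"AUC, ADC alone: {auc_adc:.3f}")
print(f"AUC, composite biomarker score: {auc_cbs:.3f}")
```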

  18. Information-theoretic model comparison unifies saliency metrics

    PubMed Central

    Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias

    2015-01-01

    Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
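
    A hedged sketch of the evaluation idea described above follows (tiny arrays and a uniform baseline are illustrative assumptions, not the paper's jointly optimized setup): framing a saliency map as a probability distribution over pixels and scoring observed fixations by their information gain over a baseline model.

```python
import numpy as np

def information_gain(saliency, fixations, baseline=None):
    """Average log-gain (bits per fixation) of a probabilistic saliency model
    over a baseline distribution, evaluated at the fixated pixels."""
    p = saliency / saliency.sum()                    # model as a distribution
    if baseline is None:
        baseline = np.full_like(p, 1.0 / p.size)     # uniform baseline
    q = baseline / baseline.sum()
    rows, cols = fixations[:, 0], fixations[:, 1]
    return float(np.mean(np.log2(p[rows, cols]) - np.log2(q[rows, cols])))

# Toy example: a 4x4 "image" where the model concentrates mass top-left.
saliency = np.array([[8., 4., 1., 1.],
                     [4., 2., 1., 1.],
                     [1., 1., 1., 1.],
                     [1., 1., 1., 1.]])
fixations = np.array([[0, 0], [0, 1], [1, 0]])       # observed gaze positions

print(f"{information_gain(saliency, fixations):.3f} bits/fixation")
```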

  19. Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children

    ERIC Educational Resources Information Center

    Courtright, John A.; Courtright, Illene C.

    1976-01-01

    A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language disordered children (5 to 10 years old). (Author/SBH)

  20. Quantitative estimation of minimum offset for multichannel surface-wave survey with actively exciting source

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2006-01-01

    Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface-wave surveys in near-surface applications. © 2005 Elsevier B.V. All rights reserved.

  1. Direct in situ measurement of coupled magnetostructural evolution in a ferromagnetic shape memory alloy and its theoretical modeling

    DOE PAGES

    Pramanick, Abhijit; Shapiro, Steve M.; Glavic, Artur; ...

    2015-10-14

    In this study, ferromagnetic shape memory alloys (FSMAs) have shown great potential as active components in next-generation smart devices due to their exceptionally large magnetic-field-induced strains and fast response times. During application of magnetic fields in FSMAs, as is common in several magnetoelastic smart materials, there occurs simultaneous rotation of magnetic moments and reorientation of twin variants; resolving these, although critical for the design of new materials and devices, has been difficult to achieve quantitatively with current characterization methods. At the same time, theoretical modeling of these phenomena has also faced limitations due to uncertainties in the values of physical properties such as the magnetocrystalline anisotropy energy (MCA), especially for off-stoichiometric FSMA compositions. Here, in situ polarized neutron diffraction is used to measure directly the extents of both magnetic moment rotation and crystallographic twin reorientation in an FSMA single crystal during the application of magnetic fields. Additionally, high-resolution neutron scattering measurements and first-principles calculations based on fully relativistic density functional theory are used to determine accurately the MCA for the compositionally disordered alloy Ni2Mn1.14Ga0.86. The results from these state-of-the-art experiments and calculations are self-consistently described within a phenomenological framework, which provides quantitative insights into the energetics of magnetostructural coupling in FSMAs. Based on the current model, the energy for magnetoelastic twin-boundary propagation in the studied alloy is estimated to be ~150 kJ/m³.

  2. A Theoretical Model for Thin Film Ferroelectric Coupled Microstripline Phase Shifters

    NASA Technical Reports Server (NTRS)

    Romanofsky, R. R.; Quereshi, A. H.

    2000-01-01

    Novel microwave phase shifters consisting of coupled microstriplines on thin ferroelectric films have been demonstrated recently. A theoretical model useful for predicting the propagation characteristics (insertion phase shift, dielectric loss, impedance, and bandwidth) is presented here. The model is based on a variational solution for line capacitance and coupled strip transmission line theory.

  3. Quantitative assessment model for gastric cancer screening

    PubMed Central

    Chen, Kun; Yu, Wei-Ping; Song, Liang; Zhu, Yi-Min

    2005-01-01

    AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out on 66 patients and 198 normal people, and the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills and crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model for each subset was established, and we obtained a mathematical expression of attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering the sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, a statistical test showed that the identification outcomes of the two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, economic and can be used to determine individual and population risks of gastric cancer. PMID:15655813
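
    A minimal sketch of this kind of weighted attribute-degree screening score follows (the weights, factor coding, squashing function, and threshold below are placeholders, not the study's fitted fuzzy-mathematics model): each subject's risk and protective factors are combined through weight coefficients into a score that is compared against a decision threshold.

```python
import math

# Placeholder factor weights (NOT the study's fitted coefficients): positive
# values indicate risk factors, negative values protective factors.
WEIGHTS = {
    "heavy_manual_work":   0.30,
    "salted_seafood_diet": 0.25,
    "mother_gastric_dx":   0.20,
    "spouse_alive":       -0.10,
    "uses_refrigerator":  -0.15,
}

def attribute_degree(subject):
    """Map a subject's 0/1 factor profile to a [0, 1] score via a weighted sum
    squashed by a logistic function (an illustrative stand-in for the paper's
    fuzzy-subset construction of the attribute degree)."""
    score = sum(w * subject.get(f, 0) for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-score))

def screen(subject, threshold=0.55):   # placeholder threshold
    return "refer for screening" if attribute_degree(subject) >= threshold else "low risk"

subject = {"heavy_manual_work": 1, "salted_seafood_diet": 1, "uses_refrigerator": 1}
print(round(attribute_degree(subject), 3), "->", screen(subject))
```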

  4. TEST OF A THEORETICAL COMMUTER EXPOSURE MODEL TO VEHICLE EXHAUST IN TRAFFIC

    EPA Science Inventory

    A theoretical model of commuter exposure is presented as a box or cell model with the automobile passenger compartment representing the microenvironment exposed to CO concentrations resulting from vehicle exhaust leaks and emissions from traffic. Equations which describe this sit...

  5. An alternative theoretical model for an anomalous hollow beam.

    PubMed

    Cai, Yangjian; Wang, Zhaoying; Lin, Qiang

    2008-09-15

    An alternative and convenient theoretical model is proposed to describe a flexible anomalous hollow beam of elliptical symmetry with an elliptical solid core, as observed experimentally in recent work (Phys. Rev. Lett. 94, 134802 (2005)). In this model, the electric field of the anomalous hollow beam is expressed as a finite sum of elliptical Gaussian modes. Flat-topped beams, dark hollow beams and Gaussian beams are special cases of our model. Analytical propagation formulae for coherent and partially coherent anomalous hollow beams passing through astigmatic ABCD optical systems are derived. Some numerical examples are calculated to show the propagation and focusing properties of coherent and partially coherent anomalous hollow beams.
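
    For orientation, a field of the kind described above can be written schematically as a finite sum of elliptical Gaussian modes; the generic form below illustrates the construction (the coefficients and exact mode parameterization used in the paper are not reproduced here):

```latex
% Schematic finite sum of elliptical Gaussian modes: c_n are (possibly
% negative) expansion coefficients and w_{x,n}, w_{y,n} the mode widths
% along the two transverse axes.
E(x, y) = \sum_{n=1}^{N} c_n
          \exp\!\left( -\frac{x^2}{w_{x,n}^{2}} - \frac{y^2}{w_{y,n}^{2}} \right)
```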

  6. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  7. A theoretical model for smoking prevention studies in preteen children.

    PubMed

    McGahee, T W; Kemp, V; Tingen, M

    2000-01-01

    The age of the onset of smoking is on a continual decline, with the prime age of tobacco use initiation being 12-14 years. A weakness of the limited research conducted on smoking prevention programs designed for preteen children (ages 10-12) is the lack of a well-defined theoretical basis. A theoretical perspective is needed in order to make a meaningful transition from empirical analysis to application of knowledge. Bandura's Social Cognitive Theory (1977, 1986), the Theory of Reasoned Action (Ajzen & Fishbein, 1980), and other literature linking various concepts to smoking behaviors in preteens were used to develop a model that may be useful for smoking prevention studies in preteen children.

  8. Simple control-theoretic models of human steering activity in visually guided vehicle control

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1991-01-01

    A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
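
    For background, one well-known control system design principle that such models draw on is McRuer's crossover model (cited here as general background, not necessarily the specific model of the paper), in which the combined human-vehicle open-loop dynamics near the crossover frequency behave as an integrator with a time delay:

```latex
% McRuer crossover model: combined human/vehicle open-loop transfer function
% near the crossover frequency \omega_c, with effective time delay \tau_e.
Y_p(s)\, Y_c(s) \approx \frac{\omega_c\, e^{-\tau_e s}}{s}
```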

  9. Quantitative assessments of mantle flow models against seismic observations: Influence of uncertainties in mineralogical parameters

    NASA Astrophysics Data System (ADS)

    Schuberth, Bernhard S. A.

    2017-04-01

    One of the major challenges in studies of Earth's deep mantle is to bridge the gap between geophysical hypotheses and observations. The biggest dataset available to investigate the nature of mantle flow is recordings of seismic waveforms. On the other hand, numerical models of mantle convection can be simulated on a routine basis nowadays for Earth-like parameters, and modern thermodynamic mineralogical models allow us to translate the predicted temperature field to seismic structures. The great benefit of the mineralogical models is that they provide the full non-linear relation between temperature and seismic velocities and thus ensure a consistent conversion in terms of magnitudes. This opens the possibility for quantitative assessments of the theoretical predictions. The often-adopted comparison between geodynamic and seismic models is unsuitable in this respect owing to the effects of damping, limited resolving power and non-uniqueness inherent to tomographic inversions. The most relevant issue, however, is related to wavefield effects that reduce the magnitude of seismic signals (e.g., traveltimes of waves), a phenomenon called wavefront healing. Over the past couple of years, we have developed an approach that takes the next step towards a quantitative assessment of geodynamic models and that enables us to test the underlying geophysical hypotheses directly against seismic observations. It is based solely on forward modelling and warrants a physically correct treatment of the seismic wave equation without theoretical approximations. Fully synthetic 3-D seismic wavefields are computed using a spectral element method for 3-D seismic structures derived from mantle flow models. This way, synthetic seismograms are generated independent of any seismic observations. Furthermore, through the wavefield simulations, it is possible to relate the magnitude of lateral temperature variations in the dynamic flow simulations directly to body-wave traveltime residuals. The

  10. Quantitative model of diffuse speckle contrast analysis for flow measurement.

    PubMed

    Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua

    2017-07-01

    Diffuse speckle contrast analysis (DSCA) is a noninvasive optical technique capable of monitoring deep tissue blood flow. However, a detailed study of the speckle contrast model for DSCA has yet to be presented. We deduced the theoretical relationship between speckle contrast and exposure time and further simplified it to a linear approximation model. The feasibility of this linear model was validated with liquid phantoms, which demonstrated that the slope of the linear approximation can rapidly determine the Brownian diffusion coefficient of turbid media at multiple distances using multiexposure speckle imaging. Furthermore, we have theoretically quantified the influence of optical properties on the measurements of the Brownian diffusion coefficient, a consequence of the fact that the slope of the linear approximation equals the inverse of the speckle correlation time.
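
    A hedged sketch of how such a linear approximation can be used in practice follows (synthetic data; the relation 1/K² ≈ T/τc used below is the standard long-exposure limit of speckle contrast theory and is an assumption here, not quoted from the paper): fitting the slope of 1/K² against exposure time yields the inverse correlation time, from which a flow index follows.

```python
import numpy as np

# Synthetic multi-exposure speckle data. In the long-exposure limit the
# squared speckle contrast behaves as K^2 ~ tau_c / T, i.e. 1/K^2 ~ T / tau_c,
# so the slope of 1/K^2 versus exposure time T estimates 1/tau_c.
tau_c = 1e-4                                     # true correlation time [s]
T = np.array([0.5, 1, 2, 5, 10, 20]) * 1e-3      # exposure times [s]
rng = np.random.default_rng(3)
K2 = tau_c / T * (1 + 0.02 * rng.normal(size=T.size))   # noisy measurements

slope, intercept = np.polyfit(T, 1.0 / K2, deg=1)
print(f"estimated 1/tau_c = {slope:.3e} s^-1 (true {1 / tau_c:.3e} s^-1)")
# The recovered 1/tau_c scales with the Brownian diffusion coefficient of the
# scatterers, so the fitted slope serves as a relative flow/diffusion index.
```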

  11. A utility-theoretic model for QALYs and willingness to pay.

    PubMed

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALYs) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY model, an additive separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY weights depend solely on aspects of health-related quality of life, wealth-standardized QALY weights might vary with the wealth level in the presented extension of the original QALY model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY gain as well as on the disease burden when health influences the marginal utility of wealth. Further research seems indicated on this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.
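
    To fix ideas, the additive separable representation mentioned above can be written as follows; the second line shows the kind of wealth-dependent period weight the paper's extension allows (the notation is ours, not the paper's):

```latex
% Original QALY model: lifetime utility is an additively separable sum of
% period utilities that depend on health h_t only (q is the QALY weight).
U = \sum_{t=1}^{T} q(h_t)

% Extension with preferences over health and money: the period weight may
% also vary with wealth w_t, which can make QALY measurement inconsistent.
U = \sum_{t=1}^{T} q(h_t, w_t)
```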

  12. Investigating Differences in Gas-Phase Conformations of 25-Hydroxyvitamin D3 Sodiated Epimers using Ion Mobility-Mass Spectrometry and Theoretical Modeling

    NASA Astrophysics Data System (ADS)

    Chouinard, Christopher D.; Cruzeiro, Vinícius Wilian D.; Beekman, Christopher R.; Roitberg, Adrian E.; Yost, Richard A.

    2017-08-01

    Drift tube ion mobility coupled with mass spectrometry was used to investigate the gas-phase structure of 25-hydroxyvitamin D3 (25OHD3) and D2 (25OHD2) epimers, and to evaluate its potential in rapid separation of these compounds. Experimental results revealed two distinct drift species for the 25OHD3 sodiated monomer, whereas only one of these conformations was observed for its epimer (epi25OHD3). The unique species allowed 25OHD3 to be readily distinguished, and the same pattern was observed for 25OHD2 epimers. Theoretical modeling of 25OHD3 epimers identified energetically stable gas-phase structures, indicating that both compounds may adopt a compact "closed" conformation, but that 25OHD3 may also adopt a slightly less energetically favorable "open" conformation that is not accessible to its epimer. Calculated theoretical collision cross-sections for these structures agreed with experimental results to <2%. Experimentation indicated that additional energy in the ESI source (i.e., increased temperature, spray voltage) affected the ratio of 25OHD3 conformations, with the less energetically favorable "open" conformation increasing in relative intensity. Finally, LC-IM-MS results yielded linear quantitation of 25OHD3, in the presence of the epimer interference, at biologically relevant concentrations. This study demonstrates that ion mobility can be used in tandem with theoretical modeling to determine structural differences that contribute to drift separation. These separation capabilities provide potential for rapid (<60 ms) identification of 25OHD3 and 25OHD2 in mixtures with their epimers.

  13. Theoretical modeling of the catch-slip bond transition in biological adhesion

    NASA Astrophysics Data System (ADS)

    Gunnerson, Kim; Pereverzev, Yuriy; Prezhdo, Oleg

    2006-05-01

    The mechanism by which leukocytes leave the blood stream and enter inflamed tissue is called extravasation. This process is facilitated by the ability of selectin proteins, produced by the endothelial cells of blood vessels, to form transient bonds with the leukocytes. In the case of P-selectin, the protein bonds with P-selectin glycoprotein ligands (PSGL-1) produced by the leukocyte. Recent atomic force microscopy and flow chamber analyses of the binding of P-selectin to PSGL-1 provide evidence for an unusual biphasic catch-bond/slip-bond behavior in response to the strength of exerted force. This biphasic process is not well-understood. There are several theoretical models for describing this phenomenon. These models use different profiles for potential energy landscapes and how they change under forces. We are exploring these changes using molecular dynamics. We will present a simple theoretical model as well as share some of our early MD results for describing this phenomenon.
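
    One simple theoretical description of the catch-slip transition, associated with these authors' earlier work, is the two-pathway dissociation model sketched below (quoted as general background; the specific energy-landscape profiles discussed in the record are not reproduced): the bond can dissociate through a catch pathway that is suppressed by force and a slip pathway that is accelerated by it.

```latex
% Two-pathway model of a catch-slip bond: the off-rate is the sum of a
% catch-pathway term (suppressed by force f) and a slip-pathway term
% (accelerated by f); x_c and x_s are the respective barrier displacements.
k_{\mathrm{off}}(f) = k_c^{0}\, e^{-f x_c / k_B T} + k_s^{0}\, e^{\,f x_s / k_B T}
```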

  14. Experimental and Theoretical Basis for a Closed-Form Spectral BRDF Model

    DTIC Science & Technology

    2015-09-17

    Doctoral dissertation by Samuel D. Butler, Major, USAF (AFIT-ENP-DS-15-S-021), presented to the faculty of the Graduate School of Engineering and Management, Air Force Institute of Technology, Air University; committee chaired by Michael A. Marciniak, PhD.

  15. Redesigning Orientation in an Intensive Care Unit Using 2 Theoretical Models.

    PubMed

    Kozub, Elizabeth; Hibanada-Laserna, Maribel; Harget, Gwen; Ecoff, Laurie

    2015-01-01

    To accommodate a higher demand for critical care nurses, an orientation program in a surgical intensive care unit was revised and streamlined. Two theoretical models served as a foundation for the revision and resulted in clear clinical benchmarks for orientation progress evaluation. The purpose of the project was to integrate theoretical frameworks into practice to improve the unit orientation program. Performance improvement methods served as a framework for the revision, and outcomes were measured before and after implementation. The revised orientation program increased 1- and 2-year nurse retention and decreased turnover. Critical care knowledge increased after orientation for both the preintervention and postintervention groups. Incorporating a theoretical basis for orientation has been shown to be successful in increasing the number of nurses completing orientation and improving retention, turnover rates, and knowledge gained.

  16. Molecular dynamics simulations of theoretical cellulose nanotube models.

    PubMed

    Uto, Takuya; Kodama, Yuta; Miyata, Tatsuhiko; Yui, Toshifumi

    2018-06-15

    Nanotubes are remarkable nanoscale architectures for a wide range of potential applications. In the present paper, we report a molecular dynamics (MD) study of theoretical cellulose nanotube (CelNT) models to evaluate their dynamic behavior in solution (either chloroform or benzene). Based on the one-quarter chain staggering relationship, we constructed six CelNT models by combining the two chain polarities (parallel (P) and antiparallel (AP)) and three symmetry operations (helical right (HR), helical left (HL), and rotation (R)) to generate a circular arrangement of molecular chains. Among the four models that retained the tubular form (P-HR, P-HL, P-R, and AP-R), the P-R and AP-R models have the lowest steric energies in benzene and chloroform, respectively. The structural features of the CelNT models were characterized in terms of the hydroxymethyl group conformation and intermolecular hydrogen bonds. Solvent structuring occurred more clearly with benzene than chloroform, suggesting that the CelNT models may disperse in benzene. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. An Emerging Theoretical Model of Music Therapy Student Development.

    PubMed

    Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E

    2017-07-01

    Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  18. How trees allocate carbon for optimal growth: insight from a game-theoretic model.

    PubMed

    Fu, Liyong; Sun, Lidan; Han, Hao; Jiang, Libo; Zhu, Sheng; Ye, Meixia; Tang, Shouzheng; Huang, Minren; Wu, Rongling

    2017-02-01

    How trees allocate photosynthetic products to primary height growth and secondary radial growth reflects their capacity to best use environmental resources. Despite substantial efforts to explore the tree height-diameter relationship empirically and through theoretical modeling, our understanding of the biological mechanisms that govern this phenomenon is still limited. By thinking of stem woody biomass production as an ecological system of apical and lateral growth components, we implement game theory to model and discern how these two components cooperate symbiotically with each other or compete for resources to determine the size of a tree stem. The resulting allometry game theory is further embedded within a genetic mapping and association paradigm, allowing the genetic loci mediating the carbon allocation of stemwood growth to be characterized and mapped throughout the genome. Allometry game theory was validated by analyzing mapping data of stem height and diameter growth over perennial seasons in a poplar tree. Several key quantitative trait loci were found to interpret the process and pattern of stemwood growth through regulating the ecological interactions of stem apical and lateral growth. The application of allometry game theory enables the prediction of situations in which cooperation, competition or altruism is the optimal decision for a tree to make full use of the environmental resources available to it. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with the decreasing residual life ratio T, and the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
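
    For reference, in classical stress-strength interference theory with Gaussian strength and stress (the baseline on which the paper's "improved" version builds; the improved form itself is not reproduced here), the reliability degree is:

```latex
% Stress-strength interference with independent Gaussian strength
% S ~ N(\mu_S, \sigma_S^2) and stress s ~ N(\mu_s, \sigma_s^2);
% \Phi is the standard normal CDF.
R = P(S > s) = \Phi\!\left( \frac{\mu_S - \mu_s}{\sqrt{\sigma_S^{2} + \sigma_s^{2}}} \right)
```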

  20. A Quantitative Study of the Relationship between Leadership Practice and Strategic Intentions to Use Cloud Computing

    ERIC Educational Resources Information Center

    Castillo, Alan F.

    2014-01-01

    The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…

  1. The Theoretical Basis of the Effective School Improvement Model (ESI)

    ERIC Educational Resources Information Center

    Scheerens, Jaap; Demeuse, Marc

    2005-01-01

    This article describes the process of theoretical reflection that preceded the development and empirical verification of a model of "effective school improvement". The focus is on basic mechanisms that could be seen as underlying "getting things in motion" and change in education systems. Four mechanisms are distinguished:…

  2. Path Analysis Tests of Theoretical Models of Children's Memory Performance

    ERIC Educational Resources Information Center

    DeMarie, Darlene; Miller, Patricia H.; Ferron, John; Cunningham, Walter R.

    2004-01-01

    Path analysis was used to test theoretical models of relations among variables known to predict differences in children's memory--strategies, capacity, and metamemory. Children in kindergarten to fourth grade (chronological ages 5 to 11) performed different memory tasks. Several strategies (i.e., sorting, clustering, rehearsal, and self-testing)…

  3. A Game-Theoretic Model of Grounding for Referential Communication Tasks

    ERIC Educational Resources Information Center

    Thompson, William

    2009-01-01

    Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…

  4. E-Learning Systems Support of Collaborative Agreements: A Theoretical Model

    ERIC Educational Resources Information Center

    Aguirre, Sandra; Quemada, Juan

    2012-01-01

    This paper introduces a theoretical model for developing integrated degree programmes through e-learning systems as stipulated by a collaboration agreement signed by two universities. We have analysed several collaboration agreements between universities at the national, European, and transatlantic level as well as various e-learning frameworks. A…

  5. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  6. A quantitative systems physiology model of renal function and blood pressure regulation: Model description.

    PubMed

    Hallow, K M; Gebremichael, Y

    2017-06-01

    Renal function plays a central role in cardiovascular, kidney, and multiple other diseases, and many existing and novel therapies act through renal mechanisms. Even with decades of accumulated knowledge of renal physiology, pathophysiology, and pharmacology, the dynamics of renal function remain difficult to understand and predict, often resulting in unexpected or counterintuitive therapy responses. Quantitative systems pharmacology modeling of renal function integrates this accumulated knowledge into a quantitative framework, allowing evaluation of competing hypotheses, identification of knowledge gaps, and generation of new experimentally testable hypotheses. Here we present a model of renal physiology and control mechanisms involved in maintaining sodium and water homeostasis. This model represents the core renal physiological processes involved in many research questions in drug development. The model runs in R and the code is made available. In a companion article, we present a case study using the model to explore mechanisms and pharmacology of salt-sensitive hypertension.

  7. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    PubMed

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care.

  8. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model

    PubMed Central

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care. PMID:27242567

  9. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    NASA Astrophysics Data System (ADS)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-10-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported by this semantic family.

  10. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

    An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  11. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
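
    The two-factor structure can be made concrete with a small simulation: each message-length estimate decomposes into a heavy-tailed between-image component and an approximately Gaussian within-image component, whose respective weights can be recovered as variance components. Distributions and scales below are assumptions, not the paper's fitted values.

    ```python
    import numpy as np

    # Two-factor error model: the estimate for image i, embedding j is
    #   p_hat[i, j] = p + b[i] + e[i, j],
    # with a heavy-tailed between-image bias b and Gaussian within-image
    # noise e (illustrative Student-t / normal choices and scales).
    rng = np.random.default_rng(4)
    n_images, n_embeddings, p = 500, 20, 0.25

    b = 0.02 * rng.standard_t(df=2, size=n_images)        # between-image, heavy tails
    e = 0.01 * rng.normal(size=(n_images, n_embeddings))  # within-image
    p_hat = p + b[:, None] + e

    within = p_hat.var(axis=1, ddof=1).mean()             # b[i] cancels within an image
    between = p_hat.mean(axis=1).var(ddof=1) - within / n_embeddings
    print(f"within-image var ~ {within:.2e}, between-image var ~ {between:.2e}")
    ```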

  12. FDTD-based quantitative analysis of terahertz wave detection for multilayered structures.

    PubMed

    Tu, Wanli; Zhong, Shuncong; Shen, Yaochun; Zhou, Qing; Yao, Ligang

    2014-10-01

    Experimental investigations have shown that terahertz pulsed imaging (TPI) is able to quantitatively characterize a range of multilayered media (e.g., biological tissues, pharmaceutical tablet coatings, layered polymer composites, etc.). Advanced modeling of the interaction of terahertz radiation with a multilayered medium is required to enable the wide application of terahertz technology in a number of emerging fields, including nondestructive testing. Indeed, there have already been many theoretical analyses performed on the propagation of terahertz radiation in various multilayered media. However, to date, most of these studies used 1D or 2D models, and the dispersive nature of the dielectric layers was not considered or was simplified. In the present work, the theoretical framework of using terahertz waves for the quantitative characterization of multilayered media was established. A 3D model based on the finite difference time domain (FDTD) method is proposed. A batch of pharmaceutical tablets with a single coating layer of different coating thicknesses and different refractive indices was modeled. The reflected terahertz wave from such a sample was computed using the FDTD method, assuming that the incident terahertz wave is broadband, covering a frequency range up to 3.5 THz. The simulated results for all of the pharmaceutical-coated tablets considered were found to be in good agreement with the experimental results obtained using a commercial TPI system. In addition, we studied a three-layered medium to mimic the occurrence of defects in the sample.
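
    A heavily simplified sketch of the update scheme: one-dimensional, non-dispersive, and in normalized units, whereas the paper's model is 3D and dispersive. The layer geometry and permittivities are hypothetical; the point is how echo timing at a probe encodes the optical thickness of each layer.

    ```python
    import numpy as np

    # 1D FDTD sketch of a pulse reflecting off a coated-tablet-like layer stack.
    nz, nt = 2000, 6000
    c, dz = 1.0, 1.0
    dt = 0.5 * dz / c                              # Courant-stable time step

    eps = np.ones(nz)                              # relative permittivity profile
    eps[1200:1280] = 2.25                          # hypothetical coating, n = 1.5
    eps[1280:1900] = 4.0                           # hypothetical core,    n = 2.0

    ez, hy = np.zeros(nz), np.zeros(nz - 1)
    mur = (c * dt - dz) / (c * dt + dz)            # 1st-order absorbing left edge
    probe = np.zeros(nt)

    for n in range(nt):
        hy += dt / dz * (ez[1:] - ez[:-1])         # update H from curl E
        ez_old_1 = ez[1]
        ez[1:-1] += dt / (dz * eps[1:-1]) * (hy[1:] - hy[:-1])  # update E from curl H
        ez[0] = ez_old_1 + mur * (ez[1] - ez[0])   # Mur boundary absorbs the left-going pulse
        ez[100] += np.exp(-((n - 120) / 30.0) ** 2)  # soft Gaussian source
        probe[n] = ez[300]                         # record between source and stack

    # echoes arriving after the outgoing pulse encode the layer interfaces
    print(f"peak |reflection| at probe: {np.abs(probe[3000:]).max():.3f}")
    ```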

  13. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  14. A Transformative Model for Undergraduate Quantitative Biology Education

    PubMed Central

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  15. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire

  16. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
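
    The time-step sensitivity has a textbook analogue that can be shown in a few lines: an overdamped vertex relaxing in a linear force field under the forward-Euler update used by many vertex-model codes. The threshold below is a stand-in for a cell-rearrangement cue; all values are illustrative, and this is not a full vertex model.

    ```python
    # One vertex relaxing in an overdamped force field F = -k (x - x0),
    # integrated with explicit Euler.  Large time steps distort when (or
    # whether) the "rearrangement threshold" is crossed, and dt > 2/k is
    # outright unstable -- echoing the paper's findings.
    def relax(dt, k=1.0, x0=1.0, x_init=2.0, t_end=10.0):
        x, t, crossed = x_init, 0.0, None
        while t < t_end:
            x += dt * (-k * (x - x0))        # forward-Euler position update
            t += dt
            if crossed is None and abs(x - x0) < 0.1:   # threshold, like a T1-swap cue
                crossed = t
        return x, crossed

    for dt in (0.01, 0.5, 1.9, 2.1):
        x_final, t_cross = relax(dt)
        print(f"dt={dt}: final x={x_final:.4f}, threshold crossed at t={t_cross}")
    ```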

  17. Control Theoretic Modeling and Generated Flow Patterns of a Fish-Tail Robot

    NASA Astrophysics Data System (ADS)

    Massey, Brian; Morgansen, Kristi; Dabiri, Dana

    2003-11-01

    Many real-world engineering problems involve understanding and manipulating fluid flows. One of the challenges to further progress in the area of active flow control is the lack of appropriate models that are amenable to control-theoretic studies and algorithm design and also incorporate reasonably realistic fluid dynamic effects. We focus here on modeling and model-verification of bio-inspired actuators (fish-fin type structures) used to control fluid dynamic artifacts that will affect speed, agility, and stealth of Underwater Autonomous Vehicles (UAVs). Vehicles using fish-tail type systems are more maneuverable, can turn in much shorter and more constrained spaces, have lower drag, are quieter and potentially more efficient than those using propellers. We will present control-theoretic models for a simple prototype coupled fluid and mechanical actuator where fluid effects are crudely modeled by assuming only lift, drag, and added mass, while neglecting boundary effects. These models will be tested with different control input parameters on an experimental fish-tail robot with the resulting flow captured with DPIV. Relations between the model, the control function choices, the obtained thrust and drag, and the corresponding flow patterns will be presented and discussed.

  18. Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test

    ERIC Educational Resources Information Center

    Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.

    2006-01-01

    A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…

  19. Proposal for a quantitative index of flood disasters.

    PubMed

    Feng, Lihua; Luo, Gaoyuan

    2010-07-01

    Drawing on the calculation of wind scales and earthquake magnitudes, this paper develops a new quantitative method for measuring flood magnitude and disaster intensity. Flood magnitude is the quantitative index describing the scale of a flood; flood disaster intensity is the quantitative index describing the losses it causes. Both indices rest on clearly defined concepts and are simple to apply, which gives them considerable theoretical and practical value.

  20. Quantitative Systems Pharmacology: A Case for Disease Models.

    PubMed

    Musante, C J; Ramanujan, S; Schmidt, B J; Ghobrial, O G; Lu, J; Heatherington, A C

    2017-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model-informed drug discovery and development, supporting program decisions from exploratory research through late-stage clinical trials. In this commentary, we discuss the unique value of disease-scale "platform" QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies.

  1. Organizational Learning and Product Design Management: Towards a Theoretical Model.

    ERIC Educational Resources Information Center

    Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael

    2003-01-01

    Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…

  2. Theoretical modeling and experimental analyses of laminated wood composite poles

    Treesearch

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles have adequate strength and stiffness properties and are a promising substitute for solid wood poles. Theoretical models are needed to facilitate the manufacture and future installation and maintenance of this novel...

  3. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
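
    A hedged sketch of the sub-model idea on synthetic data; the concentration ranges, blend limits, and matrix-effect construction are all assumptions, not ChemCam's actual calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Train separate PLS regressions on low- and high-concentration subsets,
    # use a full-range model for a first-pass estimate, and blend the
    # sub-model predictions in the overlap zone.
    rng = np.random.default_rng(0)
    n_samples, n_channels = 200, 50
    X = rng.normal(size=(n_samples, n_channels))
    y = 20 * rng.random(n_samples)            # element concentration, wt%
    X[:, 0] += y * (1 + 0.05 * X[:, 1])       # emission line with a matrix effect

    full = PLSRegression(n_components=5).fit(X, y)
    low  = PLSRegression(n_components=5).fit(X[y < 10], y[y < 10])
    high = PLSRegression(n_components=5).fit(X[y >= 10], y[y >= 10])

    def blended_predict(x):
        x = x.reshape(1, -1)
        first_pass = full.predict(x).item()   # decides which sub-model to trust
        if first_pass < 8:
            return low.predict(x).item()
        if first_pass > 12:
            return high.predict(x).item()
        w = (first_pass - 8) / 4              # linear blend across the overlap
        return (1 - w) * low.predict(x).item() + w * high.predict(x).item()

    print(f"blended estimate: {blended_predict(X[0]):.2f} wt% (true {y[0]:.2f})")
    ```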

  4. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that the proposed integrative framework can learn the relationships between biochemical reactants qualitatively and can make the model replicate the behaviour of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
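
    The quantitative stage can be illustrated with simulated annealing of the kinetic rates of a hypothetical two-step mass-action chain A -> B -> C, assuming the qualitative stage has already proposed this topology; the cooling schedule and proposal scale are arbitrary choices, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 50)

    def simulate(k1, k2, a0=1.0):
        # closed-form mass-action solution for A -k1-> B -k2-> C
        a = a0 * np.exp(-k1 * t)
        b = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
        return a, b

    target_a, target_b = simulate(0.8, 0.3)      # stand-in "experimental" data

    def cost(k):
        a, b = simulate(*k)
        return np.sum((a - target_a) ** 2 + (b - target_b) ** 2)

    k, temp = np.array([0.1, 1.0]), 1.0
    for step in range(5000):
        cand = np.abs(k + rng.normal(scale=0.05, size=2))   # keep rates positive
        d = cost(cand) - cost(k)
        if d < 0 or rng.random() < np.exp(-d / temp):       # Metropolis acceptance
            k = cand
        temp *= 0.999                                       # geometric cooling
    print("recovered rates:", k.round(3))                   # ideally near [0.8, 0.3]
    ```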

  5. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.

    2008-08-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
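
    A minimal sketch of the dispersion fit, assuming the Kelvin-Voigt viscoelastic law commonly used with crawling-wave data (the abstract does not spell out the exact model); the "measurements" are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rho = 1000.0                                   # tissue density, kg/m^3

    def voigt_speed(omega, mu, eta):
        """Shear wave speed dispersion for a Kelvin-Voigt material."""
        m = np.sqrt(mu**2 + (omega * eta) ** 2)
        return np.sqrt(2.0 * m**2 / (rho * (mu + m)))

    # hypothetical dispersive speed measurements from crawling-wave data
    omega = 2 * np.pi * np.array([100, 150, 200, 250, 300, 350, 400.0])
    c_meas = voigt_speed(omega, 4e3, 2.0) \
             + np.random.default_rng(2).normal(0, 0.02, omega.size)

    # nonlinear least squares for frequency-independent mu and eta
    (mu_fit, eta_fit), _ = curve_fit(voigt_speed, omega, c_meas, p0=(1e3, 1.0))
    print(f"shear modulus ~ {mu_fit:.0f} Pa, viscosity ~ {eta_fit:.2f} Pa*s")
    ```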

  6. Information-Theoretic Perspectives on Geophysical Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey

    2016-04-01

    To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox' (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar

  7. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of the osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally the future development of the Morven framework for modelling dynamic biological systems is discussed. PMID:25864377

  8. Theoretical modeling of time-dependent skin temperature and heat losses during whole-body cryotherapy: A pilot study.

    PubMed

    Polidori, G; Marreiro, A; Pron, H; Lestriez, P; Boyer, F C; Quinart, H; Tourbah, A; Taïar, R

    2016-11-01

    This article establishes the basis of a theoretical model, and the constitutive law it implies, for the skin temperature and thermolysis heat losses experienced by a subject during a session of whole-body cryotherapy (WBC). The study focuses on the few minutes during which the human body is subjected to a thermal shock. The relationship between skin temperature and thermolysis heat losses during this period is still unknown and has not yet been studied at the scale of the whole human body. The analytical approach is based on the hypothesis that the skin thermal shock during a WBC session can be modelled thermally as the sum of radiative and free-convective heat transfer contributions. Validating this approach and deriving temporal evolution laws for both skin temperature and dissipated thermal power during the thermal shock open many avenues for large-scale studies aimed at proposing individualized cryotherapy protocols, as well as protocols intended for target populations. Furthermore, this study shows quantitatively the substantial imbalance between human metabolism and thermolysis during WBC, the explanation of which remains an open question.
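
    A back-of-envelope version of the two-term thermolysis model, with illustrative emissivity, surface area, and convection coefficient (the paper's fitted values are not reproduced here):

    ```python
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

    def heat_loss(t_skin_c, t_air_c, area=1.8, emissivity=0.98, h_conv=4.0):
        ts, ta = t_skin_c + 273.15, t_air_c + 273.15
        q_rad = emissivity * SIGMA * area * (ts**4 - ta**4)   # radiative term
        q_conv = h_conv * area * (ts - ta)                    # free-convection term
        return q_rad + q_conv

    # skin at 10 C in a -110 C whole-body cryotherapy chamber
    print(f"{heat_loss(10.0, -110.0):.0f} W dissipated")      # ~1.4 kW
    ```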

  9. Application of a theoretical model to evaluate COPD disease management.

    PubMed

    Lemmens, Karin M M; Nieboer, Anna P; Rutten-Van Mölken, Maureen P M H; van Schayck, Constant P; Asin, Javier D; Dirven, Jos A M; Huijsman, Robbert

    2010-03-26

    Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. A quasi-experimental study was performed with 12-month follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structural follow-up and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life, measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Implementation of the programme was associated with significant improvements in dyspnoea (p < 0.001) and patient experiences (p < 0.001). No significant improvement was found in mean quality of life scores. Improvements were found in several intermediate outcomes, including investment beliefs (p < 0.05), disease-specific knowledge (p < 0.01; p < 0.001) and medication compliance (p < 0.01). Overall, process improvement was established. The model showed associations between significantly improved intermediate outcomes and improvements in quality of life and dyspnoea. The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.

  10. Application of a theoretical model to evaluate COPD disease management

    PubMed Central

    2010-01-01

    Background Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. Methods A quasi-experimental study was performed with 12-month follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structural follow-up and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life, measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Results Implementation of the programme was associated with significant improvements in dyspnoea (p < 0.001) and patient experiences (p < 0.001). No significant improvement was found in mean quality of life scores. Improvements were found in several intermediate outcomes, including investment beliefs (p < 0.05), disease-specific knowledge (p < 0.01; p < 0.001) and medication compliance (p < 0.01). Overall, process improvement was established. The model showed associations between significantly improved intermediate outcomes and improvements in quality of life and dyspnoea. Conclusions The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.

  11. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-06-01

    The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). Predictive Performance Equation combines elements of earlier models of learning and memory, including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We implemented two other previously published computational models of the spacing effect and compared them to PPE using the theoretic and applied criteria as guides.

  12. [Theoretical model study about the application risk of high risk medical equipment].

    PubMed

    Shang, Changhao; Yang, Fenghui

    2014-11-01

    This paper develops a theoretical model for monitoring the application risk of high-risk medical equipment at the site of use. The site is regarded as a system containing several sub-systems, each of which consists of a set of risk-estimation indicators. After each indicator is quantized, the quantized values are multiplied by their corresponding weights and the products are accumulated, yielding a risk estimate for each sub-system. In the same way, the sub-system risk estimates are multiplied by their corresponding weights and accumulated; the cumulative sum is the status indicator of the high-risk medical equipment at the site of use, reflecting its application risk. The resulting model can dynamically and specifically monitor the application risk of high-risk medical equipment in the field.
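
    The two-stage weighted aggregation reads naturally as a few lines of code; the subsystem names, indicator values, and weights below are hypothetical.

    ```python
    # Indicator values are assumed to be already quantized to a 0-1 scale;
    # each entry maps an indicator to (value, weight), with weights summing
    # to 1 within a subsystem.
    subsystems = {
        "power_supply": {"weight": 0.3,
                         "indicators": {"voltage_stability": (0.9, 0.6),
                                        "grounding": (0.8, 0.4)}},
        "operator":     {"weight": 0.4,
                         "indicators": {"training_level": (0.7, 0.5),
                                        "sop_compliance": (0.9, 0.5)}},
        "environment":  {"weight": 0.3,
                         "indicators": {"temperature": (0.95, 1.0)}},
    }

    status = 0.0
    for name, sub in subsystems.items():
        # stage 1: indicator value x weight, accumulated per subsystem
        sub_risk = sum(v * w for v, w in sub["indicators"].values())
        # stage 2: subsystem estimate x subsystem weight, accumulated
        status += sub["weight"] * sub_risk
    print(f"applying-site status indicator: {status:.3f}")
    ```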

  13. A control theoretic model of driver steering behavior

    NASA Technical Reports Server (NTRS)

    Donges, E.

    1977-01-01

    A quantitative description of driver steering behavior such as a mathematical model is presented. The steering task is divided into two levels: (1) the guidance level involving the perception of the instantaneous and future course of the forcing function provided by the forward view of the road, and the response to it in an anticipatory open-loop control mode; (2) the stabilization level whereby any occurring deviations from the forcing function are compensated for in a closed-loop control mode. This concept of the duality of the driver's steering activity led to a newly developed two-level model of driver steering behavior. Its parameters are identified on the basis of data measured in driving simulator experiments. The parameter estimates of both levels of the model show significant dependence on the experimental situation, which can be characterized by variables such as vehicle speed and desired path curvature.

  14. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.

    PubMed

    Pang, Wei; Coghill, George M

    2015-05-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of the osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally the future development of the Morven framework for modelling dynamic biological systems is discussed.

  15. Indirect scaling methods for testing quantitative emotion theories.

    PubMed

    Junge, Martin; Reisenzein, Rainer

    2013-01-01

    Two studies investigated the utility of indirect scaling methods, based on graded pair comparisons, for the testing of quantitative emotion theories. In Study 1, we measured the intensity of relief and disappointment caused by lottery outcomes, and in Study 2, the intensity of disgust evoked by pictures, using both direct intensity ratings and graded pair comparisons. The stimuli were systematically constructed to reflect variables expected to influence the intensity of the emotions according to theoretical models of relief/disappointment and disgust, respectively. Two probabilistic scaling methods were used to estimate scale values from the pair comparison judgements: Additive functional measurement (AFM) and maximum likelihood difference scaling (MLDS). The emotion models were fitted to the direct and indirect intensity measurements using nonlinear regression (Study 1) and analysis of variance (Study 2). Both studies found substantially improved fits of the emotion models for the indirectly determined emotion intensities, with their advantage being evident particularly at the level of individual participants. The results suggest that indirect scaling methods yield more precise measurements of emotion intensity than rating scales and thereby provide stronger tests of emotion theories in general and quantitative emotion theories in particular.

  16. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for
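
    For concreteness, a small sketch of the multistage dose-response form underlying the LMS and its linear low-dose bound; the polynomial coefficients and the upper bound q1* are illustrative numbers, not fitted values from any bioassay.

    ```python
    import numpy as np

    # Multistage dose-response: P(d) = 1 - exp(-(q0 + q1*d + q2*d^2)).
    # The LMS low-dose risk estimate is driven by q1*, an upper confidence
    # bound on q1 (taken here as a given number for illustration).
    def multistage(d, q0=0.01, q1=0.002, q2=0.0005):
        return 1.0 - np.exp(-(q0 + q1 * d + q2 * d**2))

    def extra_risk(d, **q):
        p0 = multistage(0.0, **q)
        return (multistage(d, **q) - p0) / (1.0 - p0)

    q1_star = 0.004                      # hypothetical 95% upper bound on q1
    for d in (0.001, 0.01, 0.1):
        # at low dose, extra risk ~ q1*d, and the LMS bound is q1_star*d
        print(f"d={d}: extra risk={extra_risk(d):.2e}, LMS bound={q1_star * d:.2e}")
    ```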

  17. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    PubMed

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for the multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from the UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model together with the ICA resolution of the spectral profiles of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrate that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in cases of spectral overlap and the absence or inaccessibility of reference materials.
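
    A hedged sketch of the reference-free workflow on synthetic Gaussian "spectra": ICA unmixes the calibration set, an affine map ties component scores to known concentrations, and new mixtures are then quantified without reference solutions. The band shapes, wavelengths, and noise level are all assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(3)
    wl = np.linspace(200, 400, 500)                       # wavelength axis, nm
    pure = np.vstack([np.exp(-((wl - c) / 25) ** 2) for c in (260, 300)])

    conc = rng.random((40, 2))                            # calibration concentrations
    spectra = conc @ pure + rng.normal(0, 0.005, (40, wl.size))

    ica = FastICA(n_components=2, random_state=0)
    scores = ica.fit_transform(spectra)                   # ICA mixing scores

    # affine regression (intercept column) from scores to concentrations
    A = np.column_stack([scores, np.ones(len(scores))])
    coef, *_ = np.linalg.lstsq(A, conc, rcond=None)

    unknown = np.array([[0.3, 0.7]]) @ pure               # "unknown" mixture
    s_u = ica.transform(unknown)
    print((np.column_stack([s_u, np.ones(1)]) @ coef).round(3))   # ~ [0.30, 0.70]
    ```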

  18. Stellar granulation as seen in disk-integrated intensity. II. Theoretical scaling relations compared with observations

    NASA Astrophysics Data System (ADS)

    Samadi, R.; Belkacem, K.; Ludwig, H.-G.; Caffau, E.; Campante, T. L.; Davies, G. R.; Kallinger, T.; Lund, M. N.; Mosser, B.; Baglin, A.; Mathur, S.; Garcia, R. A.

    2013-11-01

    Context. A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (τ_eff) and the root-mean-square (rms) brightness fluctuations (σ) associated with the granulation scale as a function of the peak frequency (ν_max) of the solar-like oscillations. Aims: We aim at providing a theoretical background to the observed scaling relations based on a model developed in Paper I. Methods: We computed for each 3D model the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity on the basis of the theoretical model published in Paper I. For each PDS we derived the associated characteristic time (τ_eff) and the rms brightness fluctuations (σ) and compared these theoretical values with the theoretical scaling relations derived from the theoretical model and the measurements made on a large set of Kepler targets. Results: We derive theoretical scaling relations for τ_eff and σ, which show the same dependence on ν_max as the observed scaling relations. In addition, we show that these quantities also scale as a function of the turbulent Mach number (ℳ_a) estimated at the photosphere. The theoretical scaling relations for τ_eff and σ match the observations well on a global scale. Quantitatively, the remaining discrepancies with the observations are found to be much smaller than previous theoretical calculations made for red giants. Conclusions: Our modelling provides additional theoretical support for the observed variations of σ and τ_eff with ν_max. It also highlights the important role of ℳ_a in controlling the properties of the stellar granulation. However, the observations made with Kepler on a wide variety of stars cannot confirm the dependence of our scaling relations on ℳ_a. Measurements of the granulation background and

  19. Information-theoretic approach to interactive learning

    NASA Astrophysics Data System (ADS)

    Still, S.

    2009-01-01

    The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.

  20. Field-theoretic approach to fluctuation effects in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buice, Michael A.; Cowan, Jack D.

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.

  1. Symbolic interactionism as a theoretical perspective for multiple method research.

    PubMed

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  2. Theoretical Assessment of the Impact of Climatic Factors in a Vibrio Cholerae Model.

    PubMed

    Kolaye, G; Damakoa, I; Bowong, S; Houe, R; Békollè, D

    2018-05-04

    A mathematical model for Vibrio cholerae (V. cholerae) in a closed environment is considered, with the aim of investigating the impact of climatic factors, which exert a direct influence on the bacterial metabolism and on the bacterial reservoir capacity. We first propose a V. cholerae mathematical model in a closed environment. A sensitivity analysis using the eFAST method was performed to identify the most important parameters of the model. We then extend this model by taking into account the climatic factors that influence the bacterial reservoir capacity. We present the theoretical analysis of the model; more precisely, we compute equilibria and study their stability. The stability of equilibria was investigated using the theory of periodic cooperative systems with a concave nonlinearity. Theoretical results are supported by numerical simulations, which further suggest the necessity of implementing sanitation campaigns in aquatic environments, using suitable products against the bacteria during the growth periods of the aquatic reservoirs.
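
    A toy version of the climate-forced reservoir (not the authors' model): logistic bacterial growth with a periodically varying carrying capacity standing in for seasonal temperature and rainfall effects; all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    r = 0.5                                    # per-day growth rate (illustrative)

    def K(t):                                  # seasonal reservoir capacity
        return 1e6 * (1.0 + 0.8 * np.sin(2 * np.pi * t / 365.0))

    def rhs(t, b):                             # logistic growth toward K(t)
        return r * b * (1.0 - b / K(t))

    sol = solve_ivp(rhs, (0, 2 * 365), [1e3], max_step=1.0)
    print(f"bacterial load after 2 years: {sol.y[0, -1]:.3e} (arbitrary units)")
    ```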

  3. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  4. Quantitative modeling of reservoir-triggered seismicity

    NASA Astrophysics Data System (ADS)

    Hainzl, S.; Catalli, F.; Dahm, T.; Heinicke, J.; Woith, H.

    2017-12-01

    Reservoir-triggered seismicity can occur in response to crustal stresses induced by the weight of the impounded water volume, its poroelastic response, and fluid diffusion. Several cases of high correlation have been found in the past decades. However, crustal stresses may also be altered by many other processes, such as continuous tectonic stressing and coseismic stress changes. Because reservoir-induced stresses decay quickly with distance, even tidal or rainfall-triggered stresses might be of similar size at depth. To account for simultaneous stress sources in a physically meaningful way, we apply a seismicity model based on calculated stress changes in the crust and laboratory-derived friction laws. Based on the observed seismicity, the model parameters can be determined by the maximum likelihood method. The model leads to quantitative predictions of the variations of seismicity rate in space and time, which can be used for hypothesis testing and forecasting. For case studies in Talala (India), Val d'Agri (Italy) and Novy Kostel (Czech Republic), we show the comparison of predicted and observed seismicity, demonstrating the potential and limitations of the approach.
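
    One standard ingredient of such stress-based, friction-law seismicity models is Dieterich's (1994) rate-and-state response to a sudden stress step, for which a closed form exists. The sketch below evaluates it with hypothetical parameter values; it illustrates the model class, not the authors' calibrated implementation.

```python
import numpy as np

# Closed-form seismicity-rate response to a sudden stress step dS
# under rate-and-state friction (Dieterich, 1994):
#   R(t) = r / ((exp(-dS / (A*sigma)) - 1) * exp(-t / ta) + 1)
# All values below are hypothetical, for illustration only.
r = 1.0      # background seismicity rate (events/day)
Asig = 0.01  # constitutive parameter A times normal stress (MPa)
ta = 100.0   # aftershock relaxation time (days)
dS = 0.05    # coseismic stress step (MPa)

t = np.linspace(0.0, 365.0, 8)
R = r / ((np.exp(-dS / Asig) - 1.0) * np.exp(-t / ta) + 1.0)
print(np.round(R, 3))  # elevated rate decaying back toward r
```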

  5. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
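
    A hedged sketch of the blending idea, with synthetic stand-in spectra rather than ChemCam data: train PLS regressions on low- and high-concentration training subsets plus a full-range model that routes each unknown spectrum, then blend the sub-model predictions near the split. The split point, blend width, and component counts are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Stand-in data: 200 "spectra" of 50 channels; composition driven by
# the first 5 channels plus noise. Not real LIBS data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)

full = PLSRegression(n_components=5).fit(X, y)   # full-range "router" model
split = np.median(y)
lo = y < split
sub_lo = PLSRegression(n_components=5).fit(X[lo], y[lo])
sub_hi = PLSRegression(n_components=5).fit(X[~lo], y[~lo])

def predict_blended(x, width=1.0):
    """Route by the full-range estimate; blend sub-models near the split."""
    y0 = full.predict(x.reshape(1, -1))[0, 0]
    t = np.clip((y0 - split) / width + 0.5, 0.0, 1.0)  # 0 -> low, 1 -> high
    return ((1 - t) * sub_lo.predict(x.reshape(1, -1))[0, 0]
            + t * sub_hi.predict(x.reshape(1, -1))[0, 0])

print(round(predict_blended(X[0]), 3))
```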

  6. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  7. Coupling biology and oceanography in models.

    PubMed

    Fennel, W; Neumann, T

    2001-08-01

    The dynamics of marine ecosystems, i.e. the changes of observable chemical-biological quantities in space and time, are driven by biological and physical processes. Predictions of future developments of marine systems need a theoretical framework, i.e. models, solidly based on research and understanding of the different processes involved. The natural way to describe marine systems theoretically seems to be the embedding of chemical-biological models into circulation models. However, while circulation models are relatively advanced, the quantitative theoretical description of chemical-biological processes lags behind. This paper discusses some of the approaches and problems in the development of consistent theories and indicates the beneficial potential of the coupling of marine biology and oceanography in models.

  8. Theoretical analysis of Lumry-Eyring models in differential scanning calorimetry

    PubMed Central

    Sanchez-Ruiz, Jose M.

    1992-01-01

    A theoretical analysis of several protein denaturation models (Lumry-Eyring models) that include a rate-limited step leading to an irreversibly denatured state of the protein (the final state) has been carried out. The differential scanning calorimetry transitions predicted for these models can be broadly classified into four groups: situations A, B, C, and C′. (A) The transition is calorimetrically irreversible but the rate-limited, irreversible step takes place with significant rate only at temperatures slightly above those corresponding to the transition. Equilibrium thermodynamics analysis is permissible. (B) The transition is distorted by the occurrence of the rate-limited step; nevertheless, it contains thermodynamic information about the reversible unfolding of the protein, which could be obtained upon the appropriate data treatment. (C) The heat absorption is entirely determined by the kinetics of formation of the final state and no thermodynamic information can be extracted from the calorimetric transition; the rate-determining step is the irreversible process itself. (C′) same as C, but, in this case, the rate-determining step is a previous step in the unfolding pathway. It is shown that ligand and protein concentration effects on transitions corresponding to situation C (strongly rate-limited transitions) are similar to those predicted by equilibrium thermodynamics for simple reversible unfolding models. It has been widely held in recent literature that experimentally observed ligand and protein concentration effects support the applicability of equilibrium thermodynamics to irreversible protein denaturation. The theoretical analysis reported here disfavors this claim. PMID:19431826
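
    For strongly rate-limited transitions (situation C above), the apparent excess heat capacity follows directly from first-order irreversible kinetics under a linear temperature scan: dx/dT = (k(T)/v)(1 - x) and Cp,app = ΔH dx/dT. The sketch below integrates this with invented Arrhenius parameters; it illustrates the limiting case, not the paper's full analysis.

```python
import numpy as np

# "Situation C" numerics: irreversible N -> F with Arrhenius rate k(T)
# under a linear scan at rate v; apparent excess heat capacity is
# dH * dx/dT with dx/dT = (k/v) * (1 - x). Parameters are hypothetical.
R = 8.314                  # gas constant, J/(mol K)
Ea, Tstar = 3.0e5, 330.0   # activation energy; T where k = 1 s^-1
dH, v = 4.0e5, 1.0 / 60.0  # transition enthalpy (J/mol); scan rate (K/s)

def k(T):
    return np.exp(-(Ea / R) * (1.0 / T - 1.0 / Tstar))

T = np.linspace(300.0, 360.0, 6001)
x = np.zeros_like(T)
for i in range(1, T.size):              # Euler integration in temperature
    dT = T[i] - T[i - 1]
    x[i] = min(1.0, x[i - 1] + dT * (k(T[i - 1]) / v) * (1.0 - x[i - 1]))
Cp_app = dH * np.gradient(x, T)         # apparent excess heat capacity
print(f"Tm ~ {T[np.argmax(Cp_app)]:.1f} K")
```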

  9. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty

    NASA Astrophysics Data System (ADS)

    Brown, C.; Lall, U.; Siegfried, T.

    2005-12-01

    Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory does not apply, which is typically the case in water resources where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of
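
    As a toy illustration of the game-theoretic machinery described above (not the study's agent-based model), two water users can be given a Cournot-style appropriation payoff u_i = q_i(a - (q_i + q_j)) - c q_i; iterated best responses q_i <- (a - c - q_j)/2 converge to the Nash equilibrium (a - c)/3 each. All numbers are invented.

```python
# Two-player common-pool appropriation game, solved by iterated best
# response. Payoff: u_i = q_i * (a - (q_i + q_j)) - c * q_i.
a, c = 10.0, 2.0   # value of water, marginal cost of appropriation
q = [1.0, 1.0]     # initial appropriations
for _ in range(50):
    q[0] = max(0.0, (a - c - q[1]) / 2)  # best response of player 0
    q[1] = max(0.0, (a - c - q[0]) / 2)  # best response of player 1
print([round(v, 3) for v in q])          # analytic Nash: (a - c)/3 each
```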

  10. Patient perceptions of patient-centred care: empirical test of a theoretical model.

    PubMed

    Rathert, Cheryl; Williams, Eric S; McCaughey, Deirdre; Ishqaidef, Ghadir

    2015-04-01

    Patient perception measures are gaining increasing interest among scholars and practitioners. The aim of this study was to empirically examine a conceptual model of patient-centred care using patient perception survey data. Patient-centred care is one of the Institute of Medicine's objectives for improving health care in the 21st century. Patient interviews conducted by the Picker Institute/Commonwealth Fund in the 1980s resulted in a theoretical model and survey questions with dimensions and attributes patients defined as patient-centred. The present study used survey data from patients with overnight visits at 142 U.S. hospitals. Regression analysis found significant support for the theoretical model. Perceptions of emotional support had the strongest relationship with overall care ratings. Coordination of care and physical comfort were strongly related as well. Understanding how patients experience their care can help improve understanding of what patients believe is patient-centred, and of how care processes relate to important patient outcomes. © 2012 John Wiley & Sons Ltd.

  11. Models and Messengers of Resilience: A Theoretical Model of College Students' Resilience, Regulatory Strategy Use, and Academic Achievement

    ERIC Educational Resources Information Center

    Johnson, Marcus L.; Taasoobshirazi, Gita; Kestler, Jessica L.; Cordova, Jackie R.

    2015-01-01

    We tested a theoretical model of college students' ratings of messengers of resilience and models of resilience, students' own perceived resilience, regulatory strategy use and achievement. A total of 116 undergraduates participated in this study. The results of a path analysis indicated that ratings of models of resilience had a direct effect on…

  12. Quantitative Systems Pharmacology: A Case for Disease Models

    PubMed Central

    Ramanujan, S; Schmidt, BJ; Ghobrial, OG; Lu, J; Heatherington, AC

    2016-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model‐informed drug discovery and development, supporting program decisions from exploratory research through late‐stage clinical trials. In this commentary, we discuss the unique value of disease‐scale “platform” QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. PMID:27709613

  13. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    NASA Astrophysics Data System (ADS)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
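
    A hedged sketch of the selection criterion, with a k-nearest-neighbor predictor and a Gaussian error model standing in for the paper's nonparametric estimator: for each candidate embedding dimension p, fit on a training segment and score the held-out negative log-predictive likelihood (NLPL); the dimension with the smallest NLPL wins.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stochastic system: a noisy logistic map (true order 1).
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(1, x.size):
    x[t] = np.clip(3.7 * x[t - 1] * (1 - x[t - 1])
                   + rng.normal(scale=0.01), 0.0, 1.0)

def nlpl(x, p, split=1500, k=10):
    """Held-out negative log-predictive likelihood for embedding dim p."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X[:split], y[:split])
    sigma2 = np.mean((y[:split] - model.predict(X[:split])) ** 2) + 1e-12
    resid = y[split:] - model.predict(X[split:])
    return np.mean(0.5 * np.log(2 * np.pi * sigma2)
                   + resid ** 2 / (2 * sigma2))

for p in range(1, 6):
    print(p, round(nlpl(x, p), 3))  # minimum picks the embedding dimension
```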

  14. A Systematic Quantitative-Qualitative Model: How To Evaluate Professional Services

    ERIC Educational Resources Information Center

    Yoda, Koji

    1973-01-01

    The proposed evaluation model provides for the assignment of relative weights to each criterion, and establishes a weighting system for calculating a quantitative-qualitative raw score for each service activity of a faculty member being reviewed. (Author)
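
    In spirit, the calculation reduces to a weighted sum per service activity; a toy sketch with invented criteria, weights, and ratings (the model's actual criteria are not given here):

```python
# Weighted quantitative-qualitative raw score for one service activity.
# Criteria, weights, and ratings below are invented for illustration.
weights = {"teaching": 0.5, "research": 0.3, "service": 0.2}
ratings = {"teaching": 4.0, "research": 3.5, "service": 4.5}  # 1-5 scale
raw_score = sum(weights[c] * ratings[c] for c in weights)
print(f"quantitative-qualitative raw score: {raw_score:.2f}")
```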

  15. Human judgment vs. quantitative models for the management of ecological resources.

    PubMed

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed
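
    A toy version of the kind of model-based harvest rule compared above is a constant-escapement policy on logistic stock dynamics, h_t = max(0, N_t - S*); the growth rate, capacity, escapement target, and noise level below are invented for the sketch.

```python
import numpy as np

# Constant-escapement harvesting of a logistic stock with
# environmental noise. All parameter values are illustrative.
rng = np.random.default_rng(6)
r, K, S_star = 0.8, 1000.0, 400.0   # growth rate, capacity, escapement
N, total_harvest = 600.0, 0.0
for year in range(50):
    h = max(0.0, N - S_star)        # harvest down to the escapement target
    total_harvest += h
    S = N - h                       # spawners left in the water
    N = S + r * S * (1 - S / K)     # logistic recruitment
    N *= rng.lognormal(mean=0.0, sigma=0.1)  # environmental noise
print(f"mean annual harvest: {total_harvest / 50:.1f}")
```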

  16. A theoretical model of speed-dependent steering torque for rolling tyres

    NASA Astrophysics Data System (ADS)

    Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing

    2016-04-01

    It is well known that the tyre steering torque is highly dependent on the tyre rolling speed. In the limiting case, i.e. a parking manoeuvre, the steering torque approaches its maximum. With increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for solving the contact deformation. A nonlinear finite-element code has been developed to implement the proposed approach. It is found that the main mechanism for the speed-dependent steering torque is the turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps in understanding the speed-dependent tyre steering torque, tyre road feeling and EPS calibration.

  17. Sound transmission through lightweight double-leaf partitions: theoretical modelling

    NASA Astrophysics Data System (ADS)

    Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.

    2005-09-01

    This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.

  18. Exploring patient satisfaction predictors in relation to a theoretical model.

    PubMed

    Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil

    2013-01-01

    The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors (person-related conditions, external objective care conditions and patients' perception of actual care received, "PR") in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from patients' perspective; Sense of coherence; Big five personality trait; and Emotional stress reaction questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p < 0.05). Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistically significantly to the model. Patients rated both quality-of-care and satisfaction highly. The paper shows that the theoretical model using an emotion-oriented approach to assess patient satisfaction can explain 54 per cent of patient satisfaction in a statistically significant manner.
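
    The sequential (hierarchical) regression described above amounts to entering predictor blocks in order and tracking the cumulative R² gain at each step; a sketch with synthetic stand-in data for the study's variable blocks:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Three-step sequential regression: enter predictor blocks in order
# and report cumulative R^2. Data are synthetic stand-ins for the
# person-related, external-care, and "PR" blocks and the ESRQ outcome.
rng = np.random.default_rng(5)
n = 373
person = rng.normal(size=(n, 3))    # person-related conditions
external = rng.normal(size=(n, 2))  # external objective care conditions
pr = rng.normal(size=(n, 1))        # perception of actual care received
y = person @ np.array([0.6, 0.4, 0.2]) + 0.2 * external[:, 0] \
    + rng.normal(size=n)            # stand-in outcome

r2 = []
for X in (person,
          np.hstack([person, external]),
          np.hstack([person, external, pr])):
    r2.append(LinearRegression().fit(X, y).score(X, y))
print([round(v, 3) for v in r2])    # cumulative R^2 after each step
```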

  19. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that
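
    As a reduced, hedged illustration of compartment-model flow estimation (a one-compartment Kety-type model, dCt/dt = K1*Ca(t) - k2*Ct(t), rather than the two-compartment or distributed models compared above): simulate a tissue time-attenuation curve from a synthetic arterial input function and fit the rate constants. All curves and parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic arterial input function (AIF): gamma-variate, in HU.
t = np.linspace(0.0, 60.0, 61)                   # s, 1 s sampling
aif = 300.0 * (t / 8.0) * np.exp(1.0 - t / 8.0)

def tissue_curve(t, K1, k2):
    """Euler integration of dCt/dt = K1*Ca - k2*Ct on the AIF grid."""
    dt = t[1] - t[0]
    C = np.zeros_like(t)
    for i in range(1, t.size):
        C[i] = C[i - 1] + dt * (K1 * aif[i - 1] - k2 * C[i - 1])
    return C

true_K1, true_k2 = 0.02, 0.05                    # 1/s, flow-related constants
meas = tissue_curve(t, true_K1, true_k2) \
    + np.random.default_rng(3).normal(0.0, 1.0, t.size)  # CT noise
(K1, k2), _ = curve_fit(tissue_curve, t, meas, p0=(0.01, 0.01))
print(f"fitted K1 = {K1:.4f} /s (true {true_K1})")
```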

  20. [Self-Determination in Medical Rehabilitation - Development of a Conceptual Model for Further Theoretical Discussion].

    PubMed

    Senin, Tatjana; Meyer, Thorsten

    2018-01-22

    The aim was to gather theoretical knowledge about self-determination and to develop a conceptual model for medical rehabilitation, which serves as a basis for discussion. We performed a literature search in electronic databases. Various theories and research results were adopted and transferred to the context of medical rehabilitation and into a conceptual model. The conceptual model of self-determination represents, on a continuum, which forms of self-determination may be present in situations of medical rehabilitation treatment. The location on the continuum depends theoretically on the manifestation of certain internal and external factors that may influence each other. The model provides a first conceptualization of self-determination focusing on medical rehabilitation, which should be further refined and tested empirically. © Georg Thieme Verlag KG Stuttgart · New York.

  1. A comparative study of a theoretical neural net model with MEG data from epileptic patients and normal individuals.

    PubMed

    Kotini, A; Anninos, P; Anastasiadis, A N; Tamiolakis, D

    2005-09-07

    The aim of this study was to compare a theoretical neural net model with MEG data from epileptic patients and normal individuals. Our experimental study population included 10 epilepsy sufferers and 10 healthy subjects. The recordings were obtained with a one-channel biomagnetometer SQUID in a magnetically shielded room. Using the method of χ2-fitting it was found that the MEG amplitudes in epileptic patients and normal subjects had Poisson and Gauss distributions respectively. The Poisson connectivity derived from the theoretical neural model represents the state of epilepsy, whereas the Gauss connectivity represents normal behavior. The MEG data obtained from epileptic areas had higher amplitudes than the MEG from normal regions and were comparable with the theoretical magnetic fields from Poisson and Gauss distributions. Furthermore, the magnetic field derived from the theoretical model had amplitudes in the same order as the recorded MEG from the 20 participants. The approximation of the theoretical neural net model with real MEG data provides information about the structure of the brain function in epileptic and normal states encouraging further studies to be conducted.

  2. Quantitative analysis of ultrasonic images of fibrotic liver using co-occurrence matrix based on multi-Rayleigh model

    NASA Astrophysics Data System (ADS)

    Isono, Hiroshi; Hirata, Shinnosuke; Hachiya, Hiroyuki

    2015-07-01

    In medical ultrasonic images of liver disease, a texture with a speckle pattern indicates a microscopic structure such as nodules surrounded by fibrous tissues in hepatitis or cirrhosis. We have been applying texture analysis based on a co-occurrence matrix to ultrasonic images of fibrotic liver for quantitative tissue characterization. A co-occurrence matrix consists of the probability distribution of brightness of pixel pairs specified with spatial parameters and gives new information on liver disease. Ultrasonic images of different types of fibrotic liver were simulated and the texture-feature contrast was calculated to quantify the co-occurrence matrices generated from the images. The results show that the contrast converges with a value that can be theoretically estimated using a multi-Rayleigh model of echo signal amplitude distribution. We also found that the contrast value increases as liver fibrosis progresses and fluctuates depending on the size of fibrotic structure.
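
    A minimal sketch of the contrast computation, using synthetic Rayleigh speckle in place of clinical images (requires scikit-image); the gray-level quantization, pixel-pair distance, and angle are arbitrary choices for illustration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Synthetic speckle: Rayleigh-distributed amplitude, log-compressed
# and quantized to 64 gray levels. Not clinical data.
rng = np.random.default_rng(2)
amp = rng.rayleigh(scale=1.0, size=(256, 256))
img = np.clip(20.0 * np.log10(amp + 1e-6) + 40.0, 0, 63).astype(np.uint8)

# Co-occurrence matrix for pixel pairs at distance 4, angle 0,
# then the texture-feature "contrast" derived from it.
glcm = graycomatrix(img, distances=[4], angles=[0],
                    levels=64, symmetric=True, normed=True)
print("contrast:", graycoprops(glcm, "contrast")[0, 0])
```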

  3. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  4. How Career Variety Promotes the Adaptability of Managers: A Theoretical Model

    ERIC Educational Resources Information Center

    Karaevli, Ayse; Tim Hall, Douglas T.

    2006-01-01

    This paper presents a theoretical model showing how managerial adaptability develops from career variety over the span of the person's career. By building on the literature of career theory, adult learning and development, and career adjustment, we offer a new conceptualization of managerial adaptability by identifying its behavioral, cognitive,…

  5. Models of the Bilingual Lexicon and Their Theoretical Implications for CLIL

    ERIC Educational Resources Information Center

    Heine, Lena

    2014-01-01

    Although many advances have been made in recent years concerning the theoretical dimensions of content and language integrated learning (CLIL), research still has to meet the necessity to come up with integrative models that adequately map the interrelation between content and language learning in CLIL contexts. This article will suggest that…

  6. Physics of human cooperation: experimental evidence and theoretical models

    NASA Astrophysics Data System (ADS)

    Sánchez, Angel

    2018-02-01

    In recent years, many physicists have used evolutionary game theory combined with a complex systems perspective in an attempt to understand social phenomena and challenges. Prominent among such phenomena is the issue of the emergence and sustainability of cooperation in a networked world of selfish or self-focused individuals. The vast majority of research done by physicists on these questions is theoretical, and is almost always posed in terms of agent-based models. Unfortunately, more often than not such models ignore a number of facts that are well established experimentally, and are thus rendered irrelevant to actual social applications. I here summarize some of the facts that any realistic model should incorporate and take into account, discuss important aspects underlying the relation between theory and experiments, and discuss future directions for research based on the available experimental knowledge.

  7. Effect of differentiation of self on adolescent risk behavior: test of the theoretical model.

    PubMed

    Knauth, Donna G; Skowron, Elizabeth A; Escobar, Melicia

    2006-01-01

    Innovative theoretical models are needed to explain the occurrence of high-risk sexual behaviors, alcohol and other-drug (AOD) use, and academic engagement among ethnically diverse, inner-city adolescents. The aim of this study was to test the credibility of a theoretical model based on the Bowen family systems theory to explain adolescent risk behavior. Specifically tested was the relationship between the predictor variables of differentiation of self, chronic anxiety, and social problem solving and the dependent variables of high-risk sexual behaviors, AOD use, and academic engagement. An ex post facto cross-sectional design was used to test the usefulness of the theoretical model. Data were collected from 161 racially/ethnically diverse, inner-city high school students, 14 to 19 years of age. Participants completed self-report written questionnaires, including the Differentiation of Self Inventory, State-Trait Anxiety Inventory, Social Problem Solving for Adolescents, Drug Involvement Scale for Adolescents, and the Sexual Behavior Questionnaire. Consistent with the model, higher levels of differentiation of self related to lower levels of chronic anxiety (p < .001) and higher levels of social problem solving (p < .01). Higher chronic anxiety was related to lower social problem solving (p < .001). A test of mediation showed that chronic anxiety mediates the relationship between differentiation of self and social problem solving (p < .001), indicating that differentiation influences social problem solving through chronic anxiety. Higher levels of social problem solving were related to less drug use (p < .05), less high-risk sexual behaviors (p < .01), and an increase in academic engagement (p < .01). Findings support the theoretical model's credibility and provide evidence that differentiation of self is an important cognitive factor that enables adolescents to manage chronic anxiety and motivates them to use effective problem solving, resulting in less

  8. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  9. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematic model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment (TQSM) model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data analytical methods for them, and using the analysis of chromatographic fingerprints of various extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of five main parameters: (1) the total quantum statistical moment similarity ST, the overlapped area between two normal distribution probability density curves obtained by conversion of the two sets of TQSM parameters; (2) the total variability DT, a confidence limit of the standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within the integration interval bounded by the intersection of their curves; (3) the total variable probability 1-ST, the standard normal distribution probability within the interval DT; (4) the total variable probability (1-β)α; and (5) the stable confidence probability β(1-α), the correct probabilities for making positive and negative conclusions under confidence coefficient α. With the model, we found that the TQSMSS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction and of the three data analytical methods for them were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviors; and the TQSMSS similarities (ST) of the chromatographic fingerprints of the various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, which showed different constituents
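
    The ST ingredient above, the overlapped area of two normal probability density curves, can be evaluated numerically; the means and standard deviations below are illustrative, not the study's pharmacokinetic moments.

```python
import numpy as np
from scipy.stats import norm

# Overlapped area under two normal probability density curves,
# computed by summing the pointwise minimum on a fine grid.
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 1.5   # illustrative moments
x = np.linspace(-10.0, 12.0, 20001)
dx = x[1] - x[0]
overlap = np.sum(np.minimum(norm.pdf(x, mu1, s1),
                            norm.pdf(x, mu2, s2))) * dx
print(f"S_T ~ {overlap:.4f}")           # 1.0 means identical distributions
```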

  10. Theoretical models of the electrical discharge machining process. III. The variable mass, cylindrical plasma model

    NASA Astrophysics Data System (ADS)

    Eubank, Philip T.; Patel, Mukund R.; Barrufet, Maria A.; Bozkurt, Bedri

    1993-06-01

    A variable mass, cylindrical plasma model (VMCPM) is developed for sparks created by electrical discharge in a liquid medium. The model consists of three differential equations—one each from fluid dynamics, an energy balance, and the radiation equation—combined with a plasma equation of state. A thermophysical property subroutine allows realistic estimation of plasma enthalpy, mass density, and particle fractions by inclusion of the heats of dissociation and ionization for a plasma created from deionized water. Problems with the zero-time boundary conditions are overcome by an electron balance procedure. Numerical solution of the model provides plasma radius, temperature, pressure, and mass as a function of pulse time for fixed current, electrode gap, and power fraction remaining in the plasma. Moderately high temperatures (≳5000 K) and pressures (≳4 bar) persist in the sparks even after long pulse times (to ~500 μs). Quantitative proof that superheating is the dominant mechanism for electrical discharge machining (EDM) erosion is thus provided for the first time. Some quantitative inconsistencies developed between our (1) cathode, (2) anode, and (3) plasma models (this series) are discussed with indication as to how they will be rectified in a fourth article to follow shortly in this journal. While containing oversimplifications, these three models are believed to contain the respective dominant physics of the EDM process but need to be brought into numerical consistency for each time increment of the numerical solution.

  11. The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.

    PubMed

    de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M

    2017-06-01

    This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlation

  12. Recent progress in the theoretical modelling of Cepheids and RR Lyrae stars

    NASA Astrophysics Data System (ADS)

    Marconi, Marcella

    2017-09-01

    Cepheids and RR Lyrae are among the most important primary distance indicators to calibrate the extragalactic distance ladder and excellent stellar population tracers, for Population I and Population II, respectively. In this paper I first mention some recent theoretical studies of Cepheids and RR Lyrae obtained with different theoretical tools. Then I focus attention on new results based on nonlinear convective pulsation models in the context of some international projects, including VMC@VISTA and the Gaia collaboration. The open problems for both Cepheids and RR Lyrae are briefly discussed together with some challenging future applications.

  13. A Modified Theoretical Model of Intrinsic Hardness of Crystalline Solids

    PubMed Central

    Dai, Fu-Zhi; Zhou, Yanchun

    2016-01-01

    Super-hard materials have been extensively investigated due to their practical importance in numerous industrial applications. To stimulate the design and exploration of new super-hard materials, microscopic models that elucidate the fundamental factors controlling hardness are desirable. The present work modified the theoretical model of intrinsic hardness proposed by Gao. In the modification, we emphasize the critical role of appropriately decomposing a crystal to pseudo-binary crystals, which should be carried out based on the valence electron population of each bond. After modification, the model becomes self-consistent and predicts well the hardness values of many crystals, including crystals composed of complex chemical bonds. The modified model provides fundamental insights into the nature of hardness, which can facilitate the quest for intrinsic super-hard materials. PMID:27604165

  14. Developing a theoretical maintenance model for disordered eating in Type 1 diabetes.

    PubMed

    Treasure, J; Kan, C; Stephenson, L; Warren, E; Smith, E; Heller, S; Ismail, K

    2015-12-01

    According to the literature, eating disorders are an increasing problem for more than a quarter of people with Type 1 diabetes and they are associated with accentuated diabetic complications. The clinical outcomes in this group when given standard eating disorder treatments are disappointing. The Medical Research Council guidelines for developing complex interventions suggest that the first step is to develop a theoretical model. To review existing literature to build a theoretical maintenance model for disordered eating in people with Type 1 diabetes. The literature in diabetes relating to models of eating disorder (Fairburn's transdiagnostic model and the dual pathway model) and food addiction was examined and assimilated. The elements common to all eating disorder models include weight/shape concern and problems with mood regulation. The predisposing traits of perfectionism, low self-esteem and low body esteem and the interpersonal difficulties from the transdiagnostic model are also relevant to diabetes. The differences include the use of insulin mismanagement to compensate for breaking eating rules and the consequential wide variations in plasma glucose that may predispose to 'food addiction'. Eating disorder symptoms elicit emotionally driven reactions and behaviours from others close to the individual affected and these are accentuated in the context of diabetes. The next stage is to test the assumptions within the maintenance model with experimental medicine studies to facilitate the development of new technologies aimed at increasing inhibitory processes and moderating environmental triggers. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.

  15. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    NASA Astrophysics Data System (ADS)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimation of the yield strength of a fiber metal laminate. Principles of elasticity and formulation of residual stress are employed to determine the stress state in the metal layer of the laminate, which is found to be higher than the stress applied over the laminate, resulting in a reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested on 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both Glare variants are found to be less than that of the aluminum alloy, with the use of S-Glass fiber resulting in a higher laminate yield strength than the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted on the laminates substantiate the theoretical model.

  16. Theoretical model predictions and experimental results for a wavelength switchable Tm:YAG laser.

    PubMed

    Niu, Yanxiong; Wang, Caili; Liu, Wenwen; Niu, Haisha; Xu, Bing; Man, Da

    2014-07-01

    We present a theoretical model study of a quasi-three-level laser, with particular attention given to the Tm:YAG laser. The oscillating conditions of this laser were theoretically analyzed from the point of view of the pump threshold while taking into account reabsorption loss. The laser oscillation at 2.02 μm, with its large stimulated emission cross section, was suppressed by selecting the appropriate coating for the cavity mirrors; an efficient laser-diode side-pumped continuous-wave Tm:YAG crystal laser operating at 2.07 μm was then realized. Experiments with the Tm:YAG laser confirmed the accuracy of the model, and the model was able to accurately predict that the high Stark sub-level within the 3H6 ground state manifold has a low laser threshold and a long laser wavelength, which was achieved by decreasing the transmission of the output coupler.

  17. Global Quantitative Modeling of Chromatin Factor Interactions

    PubMed Central

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles — we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
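
    A hedged sketch of maximum-entropy structure learning in the pseudo-likelihood approximation, one common regularization-based route (the paper's exact estimator may differ): regress each binary factor on all others with an L1 penalty and read surviving coefficients as candidate pairwise interactions. The data here are synthetic, with factor 0 driving factor 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary "chromatin factor" occupancy: 2000 genomic bins,
# 6 factors; factor 1 copies factor 0 in 80% of bins.
rng = np.random.default_rng(4)
n, f = 2000, 6
X = (rng.random((n, f)) < 0.3).astype(int)
X[:, 1] = np.where(rng.random(n) < 0.8, X[:, 0], X[:, 1])

# Pseudo-likelihood with L1 regularization: one sparse logistic
# regression per factor; nonzero coefficients suggest interactions.
for j in range(f):
    others = np.delete(np.arange(f), j)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X[:, others], X[:, j])
    links = others[np.abs(clf.coef_[0]) > 1e-6]
    print(f"factor {j}: linked to {list(links)}")
```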

  18. Development Mechanism of an Integrated Model for Training of a Specialist and Conceptual-Theoretical Activity of a Teacher

    ERIC Educational Resources Information Center

    Marasulov, Akhmat; Saipov, Amangeldi; ?rymbayeva, Kulimkhan; Zhiyentayeva, Begaim; Demeuov, Akhan; Konakbaeva, Ulzhamal; Bekbolatova, Akbota

    2016-01-01

    The aim of the study is to examine the methodological-theoretical construction bases for development mechanism of an integrated model for a specialist's training and teacher's conceptual-theoretical activity. Using the methods of generalization of teaching experience, pedagogical modeling and forecasting, the authors determine the urgent problems…

  19. Theoretical model of gravitational perturbation of current collector axisymmetric flow field

    NASA Astrophysics Data System (ADS)

    Walker, John S.; Brown, Samuel H.; Sondergaard, Neal A.

    1989-03-01

    Some designs of liquid metal collectors in homopolar motors and generators are essentially rotating liquid metal fluids in cylindrical channels with free surfaces and will, at critical rotational speeds, become unstable. The role of gravity in modifying this ejection instability is investigated. Some gravitational effects can be theoretically treated by perturbation techniques on the axisymmetric base flow of the liquid metal. This leads to a modification of previously calculated critical current collector ejection values neglecting gravity effects. The derivation of the mathematical model which determines the perturbation of the liquid metal base flow due to gravitational effects is documented. Since gravity is a small force compared with the centrifugal effects, the base flow solutions can be expanded in inverse powers of the Froude number and modified liquid flow profiles can be determined as a function of the azimuthal angle. This model will be used in later work to theoretically study the effects of gravity on the ejection point of the current collector. A rederivation of the hydrodynamic instability threshold of a liquid metal current collector is presented.

  20. Theoretical model of gravitational perturbation of current collector axisymmetric flow field

    NASA Astrophysics Data System (ADS)

    Walker, John S.; Brown, Samuel H.; Sondergaard, Neal A.

    1990-05-01

    Some designs of liquid-metal current collectors in homopolar motors and generators are essentially rotating liquid-metal fluids in cylindrical channels with free surfaces and will, at critical rotational speeds, become unstable. An investigation at David Taylor Research Center is being performed to understand the role of gravity in modifying this ejection instability. Some gravitational effects can be theoretically treated by perturbation techniques on the axisymmetric base flow of the liquid metal. This leads to a modification of previously calculated critical-current-collector ejection values neglecting gravity effects. The purpose of this paper is to document the derivation of the mathematical model which determines the perturbation of the liquid-metal base flow due to gravitational effects. Since gravity is a small force compared with the centrifugal effects, the base flow solutions can be expanded in inverse powers of the Froude number and modified liquid-flow profiles can be determined as a function of the azimuthal angle. This model will be used in later work to theoretically study the effects of gravity on the ejection point of the current collector.

  1. Theoretical Model for Cellular Shapes Driven by Protrusive and Adhesive Forces

    PubMed Central

    Kabaso, Doron; Shlomovitz, Roie; Schloen, Kathrin; Stradal, Theresia; Gov, Nir S.

    2011-01-01

    The forces that arise from the actin cytoskeleton play a crucial role in determining the cell shape. These include protrusive forces due to actin polymerization and adhesion to the external matrix. We present here a theoretical model for the cellular shapes resulting from the feedback between the membrane shape and the forces acting on the membrane, mediated by curvature-sensitive membrane complexes of a convex shape. In previous theoretical studies we have investigated the regimes of linear instability where spontaneous formation of cellular protrusions is initiated. Here we calculate the evolution of a two dimensional cell contour beyond the linear regime and determine the final steady-state shapes arising within the model. We find that shapes driven by adhesion or by actin polymerization (lamellipodia) have very different morphologies, as observed in cells. Furthermore, we find that as the strength of the protrusive forces diminish, the system approaches a stabilization of a periodic pattern of protrusions. This result can provide an explanation for a number of puzzling experimental observations regarding cellular shape dependence on the properties of the extra-cellular matrix. PMID:21573201

  2. Entropic and Electrostatic Effects on the Folding Free Energy of a Surface-Attached Biomolecule: An Experimental and Theoretical Study

    PubMed Central

    Watkins, Herschel M.; Vallée-Bélisle, Alexis; Ricci, Francesco; Makarov, Dmitrii E.; Plaxco, Kevin W.

    2012-01-01

    Surface-tethered biomolecules play key roles in many biological processes and biotechnologies. However, while the physical consequences of such surface attachment have seen significant theoretical study, to date this issue has seen relatively little experimental investigation. In response, we present here a quantitative experimental and theoretical study of the extent to which attachment to a charged, but otherwise apparently inert, surface alters the folding free energy of a simple biomolecule. Specifically, we have measured the folding free energy of a DNA stem loop both in solution and when site-specifically attached to a negatively charged, hydroxyl-alkane-coated gold surface. We find that, whereas surface attachment is destabilizing at low ionic strength, it becomes stabilizing at ionic strengths above ~130 mM. This behavior presumably reflects two competing mechanisms: excluded volume effects, which stabilize the folded conformation by reducing the entropy of the unfolded state, and electrostatics, which, at lower ionic strengths, destabilizes the more compact folded state via repulsion from the negatively charged surface. To test this hypothesis we have employed existing theories of the electrostatics of surface-bound polyelectrolytes and the entropy of surface-bound polymers to model both effects. Despite lacking any fitted parameters, these theoretical models quantitatively fit our experimental results, suggesting that, for this system, current knowledge of both surface electrostatics and excluded volume effects is reasonably complete and accurate. PMID:22239220

  3. Theoretical study of solvent effects on the coil-globule transition

    NASA Astrophysics Data System (ADS)

    Polson, James M.; Opps, Sheldon B.; Abou Risk, Nicholas

    2009-06-01

    The coil-globule transition of a polymer in a solvent has been studied using Monte Carlo simulations of a single chain subject to intramolecular interactions as well as a solvent-mediated effective potential. This solvation potential was calculated using several different theoretical approaches for two simple polymer/solvent models, each employing hard-sphere chains and hard-sphere solvent particles as well as attractive square-well potentials between some interaction sites. For each model, collapse is driven by variation in a parameter which changes the energy mismatch between monomers and solvent particles. The solvation potentials were calculated using two fundamentally different methodologies, each designed to predict the conformational behavior of polymers in solution: (1) the polymer reference interaction site model (PRISM) theory and (2) a many-body solvation potential (MBSP) based on scaled particle theory introduced by Grayce [J. Chem. Phys. 106, 5171 (1997)]. For the PRISM calculations, two well-studied solvation monomer-monomer pair potentials were employed, each distinguished by the closure relation used in its derivation: (i) a hypernetted-chain (HNC)-type potential and (ii) a Percus-Yevick (PY)-type potential. The theoretical predictions were each compared to results obtained from explicit-solvent discontinuous molecular dynamics simulations on the same polymer/solvent model systems [J. Chem. Phys. 125, 194904 (2006)]. In each case, the variation in the coil-globule transition properties with solvent density is mostly qualitatively correct, though the quantitative agreement between the theory and prediction is typically poor. The HNC-type potential yields results that are more qualitatively consistent with simulation. The conformational behavior of the polymer upon collapse predicted by the MBSP approach is quantitatively correct for low and moderate solvent densities but is increasingly less accurate for higher densities. At high solvent densities

  4. The experimental-theoretical model of the jet HF induction discharge of atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Gainullin, R.; Kirpichnikov, A.

    2017-11-01

    The paper considers the experimental-theoretical model devised to determine the regularities of the quasi-stationary electromagnetic field structure of the HFI discharge burning in the inductor of finite dimensions at atmospheric pressure.

  5. Dementia Grief: A Theoretical Model of a Unique Grief Experience

    PubMed Central

    Blandin, Kesstan; Pepin, Renee

    2016-01-01

    Previous literature reveals a high prevalence of grief in dementia caregivers before physical death of the person with dementia that is associated with stress, burden, and depression. To date, theoretical models and therapeutic interventions with grief in caregivers have not adequately considered the grief process, but instead have focused on grief as a symptom that manifests within the process of caregiving. The Dementia Grief Model explicates the unique process of pre-death grief in dementia caregivers. In this paper we introduce the Dementia Grief Model, describe the unique characteristics of dementia grief, and present the psychological states associated with the process of dementia grief. The model explicates an iterative grief process involving three states (separation, liminality, and re-emergence), each with a dynamic mechanism that facilitates or hinders movement through the dementia grief process. Finally, we offer potential applied research questions informed by the model. PMID:25883036

  6. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted, in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.
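
    The central prediction quoted above can be illustrated numerically. The sketch below is a minimal illustration of the stretching-factor argument, not the authors' DSM code; the log-normal parameters are arbitrary. It samples stretching factors rho from a log-normal distribution and checks that the resulting concentrations c ~ 1/rho are again log-normal.

        import numpy as np

        rng = np.random.default_rng(0)

        # Log-normal stretching factors, characteristic of the Batchelor regime;
        # in the full theory mu and sigma grow with time (arbitrary values here).
        mu, sigma = 3.0, 1.0
        rho = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

        # A fluid strip stretched by a factor rho is thinned by 1/rho, so its peak
        # concentration after diffusive smoothing decays roughly as c ~ 1/rho.
        c = 1.0 / rho

        # log(c) = -log(rho) is normal, hence c is itself log-normal.
        log_c = np.log(c)
        print(f"mean(log c) = {log_c.mean():+.3f} (theory: {-mu:+.3f})")
        print(f"std(log c)  = {log_c.std():.3f}  (theory: {sigma:.3f})")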

  7. Falling Chains as Variable-Mass Systems: Theoretical Model and Experimental Analysis

    ERIC Educational Resources Information Center

    de Sousa, Celia A.; Gordo, Paulo M.; Costa, Pedro

    2012-01-01

    In this paper, we revisit, theoretically and experimentally, the fall of a folded U-chain and of a pile-chain. The model calculation implies the division of the whole system into two subsystems of variable mass, allowing us to explore the role of tensional contact forces at the boundary of the subsystems. This justifies, for instance, that the…

  8. Oxidative dissolution of silver nanoparticles: A new theoretical approach.

    PubMed

    Adamczyk, Zbigniew; Oćwieja, Magdalena; Mrowiec, Halina; Walas, Stanisław; Lupa, Dawid

    2016-05-01

    A general model of the oxidative dissolution of silver particle suspensions was developed that rigorously considers the bulk and surface solute transport. A two-step surface reaction scheme was proposed that comprises the formation of the silver oxide phase by direct oxidation and the acidic dissolution of this phase leading to silver ion release. On this basis, a complete set of equations is formulated describing oxygen and silver ion transport to and from the particles' surfaces. These equations are solved in some limiting cases of nanoparticle dissolution in dilute suspensions. The obtained kinetic equations were used for the interpretation of experimental data pertinent to the dissolution kinetics of citrate-stabilized silver nanoparticles. In these kinetic measurements the role of pH and bulk suspension concentration was quantitatively evaluated by using atomic absorption spectrometry (AAS). It was shown that the theoretical model adequately reflects the main features of the experimental results, especially the significant increase in the dissolution rate at lower pH. The presence of two kinetic regimes was also quantitatively explained in terms of the decrease in the coverage of the fast dissolving oxide layer. The overall silver dissolution rate constants characterizing these two regimes were determined. Copyright © 2015 Elsevier Inc. All rights reserved.
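
    The two-regime kinetics described above can be caricatured with a pair of rate equations. The sketch below is a hedged illustration only: the first-order form and all rate constants are assumptions, not the authors' transport model. A fast channel proportional to the oxide coverage dominates early, and a slow direct-oxidation channel takes over once the oxide layer is consumed.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative two-step scheme: acidic dissolution of the oxide layer
        # (k_ox, fast) and direct oxidation of bare silver (k_ag, slow).
        k_ox, k_ag = 1.0, 0.05   # 1/h, arbitrary placeholder values

        def rhs(t, y):
            theta, ag = y                        # oxide coverage, dissolved fraction
            return [-k_ox * theta,               # oxide layer is consumed
                    k_ox * theta + k_ag * (1.0 - ag)]  # fast + slow release channels

        sol = solve_ivp(rhs, (0.0, 48.0), [0.2, 0.0], t_eval=np.linspace(0, 48, 7))
        for t, ag in zip(sol.t, sol.y[1]):
            print(f"t = {t:5.1f} h   dissolved Ag fraction = {ag:.3f}")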

  9. Theoretical Models and Operational Frameworks in Public Health Ethics

    PubMed Central

    Petrini, Carlo

    2010-01-01

    The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided. PMID:20195441

  10. Linking agent-based models and stochastic models of financial markets

    PubMed Central

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene

    2012-01-01

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
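
    The herding mechanism invoked above is easy to demonstrate in a toy simulation. The sketch below is not the authors' calibrated model: group sizes are drawn from an assumed Zipf law to mimic traders sharing strategies, and each group trades as one block. The aggregated returns develop visibly fatter tails than a Gaussian, measured here by excess kurtosis.

        import numpy as np

        rng = np.random.default_rng(1)
        n_traders, n_steps = 1000, 20_000

        returns = np.empty(n_steps)
        for t in range(n_steps):
            # Traders sharing a strategy form groups (power-law sizes = herding);
            # each group buys (+1) or sells (-1) in unison.
            sizes = []
            remaining = n_traders
            while remaining > 0:
                s = min(remaining, int(rng.zipf(1.5)))
                sizes.append(s)
                remaining -= s
            decisions = rng.choice([-1.0, 1.0], size=len(sizes))
            returns[t] = np.dot(decisions, sizes) / n_traders

        z = (returns - returns.mean()) / returns.std()
        print(f"excess kurtosis: {np.mean(z**4) - 3.0:.2f} (0 for a Gaussian)")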

  12. Health Professionals' Explanations of Suicidal Behaviour: Effects of Professional Group, Theoretical Intervention Model, and Patient Suicide Experience.

    PubMed

    Rothes, Inês Areal; Henriques, Margarida Rangel

    2017-12-01

    In a help relation with a suicidal person, theoretical models of suicidality can be essential to guide the health professional's comprehension of the client/patient. The objectives of this study were to identify health professionals' explanations of suicidal behaviors and to study the effects of professional group, theoretical intervention model, and patient suicide experience on professionals' representations. Two hundred and forty-two health professionals filled out a self-report questionnaire. Exploratory principal components analysis was used. Five explanatory models were identified: psychological suffering, affective cognitive, sociocommunicational, adverse life events, and psychopathological. Results indicated that the psychological suffering and psychopathological models were the most valued by the professionals, while the sociocommunicational was seen as the least likely to explain suicidal behavior. Differences between professional groups were found. We concluded that training and reflection on theoretical models in general, and on communicative issues in particular, are needed in the education of health professionals.

  13. Molecular and Cellular Quantitative Microscopy: theoretical investigations, technological developments and applications to neurobiology

    NASA Astrophysics Data System (ADS)

    Esposito, Alessandro

    2006-05-01

    This PhD project aims at the development and evaluation of microscopy techniques for the quantitative detection of molecular interactions and cellular features. The primarily investigated techniques are Förster Resonance Energy Transfer imaging and Fluorescence Lifetime Imaging Microscopy. These techniques have the capability to quantitatively probe the biochemical environment of fluorophores. An automated microscope capable of unsupervised operation has been developed that enables the investigation of molecular and cellular properties at high throughput levels and the analysis of cellular heterogeneity. State-of-the-art Förster Resonance Energy Transfer imaging, Fluorescence Lifetime Imaging Microscopy, Confocal Laser Scanning Microscopy and the newly developed tools have been combined with cellular and molecular biology techniques for the investigation of protein-protein interactions, oligomerization and post-translational modifications of α-Synuclein and Tau, two proteins involved in Parkinson’s and Alzheimer’s disease, respectively. The high inter-disciplinarity of this project required the merging of the expertise of both the Molecular Biophysics Group at the Debye Institute - Utrecht University and the Cell Biophysics Group at the European Neuroscience Institute - Göttingen University. This project was conducted also with the support and the collaboration of the Center for the Molecular Physiology of the Brain (Göttingen), particularly with the groups associated with the Molecular Quantitative Microscopy and Parkinson’s Disease and Aggregopathies areas. This work demonstrates that molecular and cellular quantitative microscopy can be used in combination with high-throughput screening as a powerful tool for the investigation of the molecular mechanisms of complex biological phenomena like those occurring in neurodegenerative diseases.

  14. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  15. The interrogation decision-making model: A general theoretical framework for confessions.

    PubMed

    Yang, Yueran; Guyll, Max; Madon, Stephanie

    2017-02-01

    This article presents a new model of confessions referred to as the interrogation decision-making model. This model provides a theoretical umbrella with which to understand and analyze suspects' decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that not only underlie suspects' decisions to deny or confess guilt at any specific point during an interrogation, but also how confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
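
    The expected-utility comparison at the core of such a model can be made concrete. The sketch below uses entirely hypothetical probabilities, utilities, and a linear pressure term (none of them from the published model) to show how the preferred action can flip from denial to confession as the perceived costs of continued interrogation accumulate.

        # Hedged sketch of an expected-utility choice between "deny" and "confess";
        # all numbers are hypothetical illustration values.

        def eu_deny(p_conviction, u_acquittal, u_conviction, pressure):
            # Denial risks eventual conviction and incurs mounting interrogation costs.
            return p_conviction * u_conviction + (1 - p_conviction) * u_acquittal - pressure

        def eu_confess(u_confession):
            # Confession ends the interrogation at a fixed (negative) utility.
            return u_confession

        for hours in range(0, 10, 2):
            pressure = 2.0 * hours        # assumed accumulating cost of denial
            deny = eu_deny(p_conviction=0.3, u_acquittal=10.0,
                           u_conviction=-50.0, pressure=pressure)
            confess = eu_confess(u_confession=-20.0)
            choice = "deny" if deny > confess else "confess"
            print(f"hour {hours}: EU(deny) = {deny:6.1f}, "
                  f"EU(confess) = {confess:6.1f} -> {choice}")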

  16. Hardening of particle/oil/water suspensions due to capillary bridges: Experimental yield stress and theoretical interpretation.

    PubMed

    Danov, Krassimir D; Georgiev, Mihail T; Kralchevsky, Peter A; Radulova, Gergana M; Gurkov, Theodor D; Stoyanov, Simeon D; Pelan, Eddie G

    2018-01-01

    Suspensions of colloid particles possess the remarkable property of solidifying upon the addition of a minimal amount of a second liquid that preferentially wets the particles. The hardening is due to the formation of capillary bridges (pendular rings), which connect the particles. Here, we review works on the mechanical properties of such suspensions and related works on the capillary-bridge force, and present new rheological data for the weakly studied concentration range of 30-55 vol% particles. The mechanical strength of the solidified capillary suspensions, characterized by the yield stress Y, is measured at the elastic limit for various volume fractions of the particles and the preferentially wetting liquid. A quantitative theoretical model is developed, which relates Y with the maximum of the capillary-bridge force, projected on the shear plane. A semi-empirical expression for the mean number of capillary bridges per particle is proposed. The model agrees very well with the experimental data and gives a quantitative description of the yield stress, which increases with the rise of interfacial tension and with the volume fractions of particles and capillary bridges, but decreases with the rise of particle radius and contact angle. The quantitative description of the capillary force is based on the exact theory and numerical calculation of the capillary bridge profile at various bridge volumes and contact angles. An analytical formula for Y is also derived. The comparison of the theoretical and experimental strain at the elastic limit reveals that the fluidization of the capillary suspension takes place only in a deformation zone of thickness up to several hundred particle diameters, which is adjacent to the rheometer's mobile plate. The reported experimental results refer to a water-continuous suspension with hydrophobic particles and oily capillary bridges. The comparison of data for bridges from soybean oil and hexadecane surprisingly indicates that the yield strength is

  17. Neuroergonomics: Quantitative Modeling of Individual, Shared, and Team Neurodynamic Information.

    PubMed

    Stevens, Ronald H; Galloway, Trysha L; Willemsen-Dunlap, Ann

    2018-06-01

    The aim of this study was to use the same quantitative measure and scale to directly compare the neurodynamic information/organizations of individual team members with those of the team. Team processes are difficult to separate from those of individual team members due to the lack of quantitative measures that can be applied to both process sets. Second-by-second symbolic representations were created of each team member's electroencephalographic power, and quantitative estimates of their neurodynamic organizations were calculated from the Shannon entropy of the symbolic data streams. The information in the neurodynamic data streams of health care (n = 24), submarine navigation (n = 12), and high school problem-solving (n = 13) dyads was separated into the information of each team member, the information shared by team members, and the overall team information. Most of the team information was the sum of each individual's neurodynamic information. The remaining team information was shared among the team members. This shared information averaged ~15% of the individual information, with momentary levels of 1% to 80%. Continuous quantitative estimates can be made from the shared, individual, and team neurodynamic information about the contributions of different team members to the overall neurodynamic organization of a team and the neurodynamic interdependencies among the team members. Information models provide a generalizable quantitative method for separating a team's neurodynamic organization into that of individual team members and that shared among team members.
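
    The decomposition described above reduces to elementary information theory: the shared component is the mutual information, i.e. the amount by which the individual Shannon entropies overcount the joint entropy. The sketch below reproduces the bookkeeping on synthetic symbol streams; the three-symbol alphabet and the coupling strength are assumptions, not the study's data.

        import numpy as np
        from collections import Counter

        def entropy(symbols):
            """Plug-in Shannon entropy (bits) of a symbol stream."""
            counts = np.array(list(Counter(symbols).values()), dtype=float)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(2)
        n = 5000

        # Two team members' second-by-second EEG-power symbols; a common drive
        # makes the streams partly redundant, mimicking shared organization.
        common = rng.integers(0, 3, size=n)
        a = np.where(rng.random(n) < 0.3, common, rng.integers(0, 3, size=n))
        b = np.where(rng.random(n) < 0.3, common, rng.integers(0, 3, size=n))

        h_a, h_b = entropy(a), entropy(b)
        h_ab = entropy([f"{x},{y}" for x, y in zip(a, b)])  # joint entropy
        shared = h_a + h_b - h_ab                           # mutual information

        print(f"H(A) = {h_a:.3f}, H(B) = {h_b:.3f}, H(A,B) = {h_ab:.3f} bits")
        print(f"shared information I(A;B) = {shared:.3f} bits")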

  18. The Adaptation of the Immigrant Second Generation in America: Theoretical Overview and Recent Evidence

    PubMed Central

    Portes, Alejandro; Fernández-Kelly, Patricia; Haller, William

    2013-01-01

    This paper summarises a research program on the new immigrant second generation initiated in the early 1990s and completed in 2006. The four field waves of the Children of Immigrants Longitudinal Study (CILS) are described and the main theoretical models emerging from it are presented and graphically summarised. After considering critical views of this theory, we present the most recent results from this longitudinal research program in the form of quantitative models predicting downward assimilation in early adulthood and qualitative interviews identifying ways to escape it by disadvantaged children of immigrants. Quantitative results strongly support the predicted effects of exogenous variables identified by segmented assimilation theory and identify the intervening factors during adolescence that mediate their influence on adult outcomes. Qualitative evidence gathered during the last stage of the study points to three factors that can lead to exceptional educational achievement among disadvantaged youths. All three indicate the positive influence of selective acculturation. Implications of these findings for theory and policy are discussed. PMID:23626483

  19. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  20. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  1. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    PubMed

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies, however, primarily on visual EEG scoring by experts. We introduced a model-based approach to EEG analysis (a state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent of visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic
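
    One plausible reading of the state space velocity measure is sketched below: each epoch is summarized by a vector of log band powers, and variability is the mean distance between successive vectors. This is an assumed implementation on surrogate data; the paper's exact state space construction, frequency bands, and window lengths may differ.

        import numpy as np
        from scipy.signal import welch

        fs, epoch_s = 250, 2.0                       # sampling rate (Hz), epoch (s)
        rng = np.random.default_rng(3)
        eeg = rng.standard_normal(int(fs * 600))     # 10 min surrogate EEG channel

        n_ep = int(fs * epoch_s)
        bands = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta/theta/alpha/beta

        states = []
        for start in range(0, len(eeg) - n_ep + 1, n_ep):
            f, pxx = welch(eeg[start:start + n_ep], fs=fs, nperseg=n_ep // 2)
            power = [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]
            states.append(np.log(power))             # log band powers = state vector

        states = np.array(states)
        velocity = np.linalg.norm(np.diff(states, axis=0), axis=1).mean()
        print(f"mean state space velocity: {velocity:.3f} (log-power units/epoch)")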

  2. Generalized Constitutive-Based Theoretical and Empirical Models for Hot Working Behavior of Functionally Graded Steels

    NASA Astrophysics Data System (ADS)

    Vanini, Seyed Ali Sadough; Abolghasemzadeh, Mohammad; Assadi, Abbas

    2013-07-01

    Functionally graded steels with graded ferritic and austenitic regions including bainite and martensite intermediate layers produced by electroslag remelting have attracted much attention in recent years. In this article, an empirical model based on the Zener-Hollomon (Z-H) constitutive equation with generalized material constants is presented to investigate the effects of temperature and strain rate on the hot working behavior of functionally graded steels. Next, a theoretical model, generalized by strain compensation, is developed for the flow stress estimation of functionally graded steels under hot compression based on the phase mixture rule and boundary layer characteristics. The model is used for different strains and grading configurations. Specifically, the results for αβγMγ steels from the empirical and theoretical models showed excellent agreement with experimental results from other references, within acceptable error.
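
    The Z-H framework the empirical model builds on is compact enough to state in full. In the sketch below the Zener-Hollomon parameter Z = strain rate x exp(Q/RT) feeds the standard hyperbolic-sine flow stress law; the material constants are generic hot-working orders of magnitude, not the paper's generalized, strain-compensated values.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def flow_stress(strain_rate, T, Q, A, alpha, n):
            """sigma = (1/alpha) * asinh((Z/A)**(1/n)), Z = eps_dot * exp(Q/(R*T))."""
            Z = strain_rate * np.exp(Q / (R * T))
            return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

        # Illustrative constants (typical orders of magnitude for hot-worked steel):
        Q, A, alpha, n = 400e3, 1e14, 0.012, 5.0   # J/mol, 1/s, 1/MPa, dimensionless

        for T in (1173.0, 1273.0, 1373.0):         # temperature, K
            for rate in (0.01, 0.1, 1.0):          # strain rate, 1/s
                sigma = flow_stress(rate, T, Q, A, alpha, n)
                print(f"T = {T:.0f} K, rate = {rate:5.2f}/s -> sigma = {sigma:6.1f} MPa")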

  3. Electronic health record acceptance by physicians: testing an integrated theoretical model.

    PubMed

    Gagnon, Marie-Pierre; Ghandour, El Kebir; Talla, Pascaline Kengne; Simonyan, David; Godin, Gaston; Labrecque, Michel; Ouimet, Mathieu; Rousseau, Michel

    2014-04-01

    Several countries are in the process of implementing an Electronic Health Record (EHR), but limited physicians' acceptance of this technology presents a serious threat to its successful implementation. The aim of this study was to identify the main determinants of physician acceptance of EHR in a sample of general practitioners and specialists of the Province of Quebec (Canada). We sent an electronic questionnaire to physician members of the Quebec Medical Association. We tested four theoretical models (Technology acceptance model (TAM), Extended TAM, Psychosocial Model, and Integrated Model) using path analysis and multiple linear regression analysis in order to identify the main determinants of physicians' intention to use the EHR. We evaluated the modifying effect of sociodemographic characteristics using multi-group analysis of structural weights invariance. A total of 157 questionnaires were returned. The four models performed well and explained between 44% and 55% of the variance in physicians' intention to use the EHR. The Integrated model performed the best and showed that perceived ease of use, professional norm, social norm, and demonstrability of the results are the strongest predictors of physicians' intention to use the EHR. Age, gender, previous experience and specialty modified the association between those determinants and intention. The proposed integrated theoretical model is useful in identifying which factors could motivate physicians from different backgrounds to use the EHR. Physicians who perceive the EHR to be easy to use, coherent with their professional norms, supported by their peers and patients, and able to demonstrate tangible results are more likely to accept this technology. Age, gender, specialty and experience should also be taken into account when developing EHR implementation strategies targeting physicians. Copyright © 2013 Elsevier Inc. All rights reserved.
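
    Each arrow in such a path model is, computationally, a regression. The sketch below fits one structural equation on synthetic standardized scores whose variable names follow the abstract; the sample size matches the 157 returned questionnaires, but the "true" weights are arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 157  # number of returned questionnaires in the study

        # Synthetic standardized construct scores (stand-ins for survey scales).
        ease, prof_norm, soc_norm, demo = rng.standard_normal((4, n))
        intention = (0.4 * ease + 0.3 * prof_norm + 0.2 * soc_norm
                     + 0.2 * demo + 0.6 * rng.standard_normal(n))

        # Ordinary least squares: one structural equation of the path model.
        X = np.column_stack([np.ones(n), ease, prof_norm, soc_norm, demo])
        beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
        resid = intention - X @ beta
        r2 = 1 - resid.var() / intention.var()

        for name, b in zip(["intercept", "ease of use", "professional norm",
                            "social norm", "demonstrability"], beta):
            print(f"{name:18s} beta = {b:+.3f}")
        print(f"R^2 = {r2:.2f} (the study reports 44-55% explained variance)")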

  4. Tip-Enhanced Raman Voltammetry: Coverage Dependence and Quantitative Modeling.

    PubMed

    Mattei, Michael; Kang, Gyeongwon; Goubert, Guillaume; Chulhai, Dhabih V; Schatz, George C; Jensen, Lasse; Van Duyne, Richard P

    2017-01-11

    Electrochemical atomic force microscopy tip-enhanced Raman spectroscopy (EC-AFM-TERS) was employed for the first time to observe nanoscale spatial variations in the formal potential, E0′, of a surface-bound redox couple. TERS cyclic voltammograms (TERS CVs) of single Nile Blue (NB) molecules were acquired at different locations spaced 5-10 nm apart on an indium tin oxide (ITO) electrode. Analysis of TERS CVs at different coverages was used to verify the observation of single-molecule electrochemistry. The resulting TERS CVs were fit to the Laviron model for surface-bound electroactive species to quantitatively extract the formal potential E0′ at each spatial location. Histograms of single-molecule E0′ at each coverage indicate that the electrochemical behavior of the cationic oxidized species is less sensitive to local environment than the neutral reduced species. This information is not accessible using purely electrochemical methods or ensemble spectroelectrochemical measurements. We anticipate that quantitative modeling and measurement of site-specific electrochemistry with EC-AFM-TERS will have a profound impact on our understanding of the role of nanoscale electrode heterogeneity in applications such as electrocatalysis, biological electron transfer, and energy production and storage.
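
    Extracting E0′ is, at bottom, a curve fit to a peak-shaped surface wave. The sketch below fits an ideal one-electron Nernstian surface-bound response with synthetic noise; it is a simplified stand-in for the Laviron analysis, not the authors' treatment of their TERS data.

        import numpy as np
        from scipy.optimize import curve_fit

        F_RT = 96485.0 / (8.314 * 298.15)  # F/RT at 25 C, in 1/V

        def surface_wave(E, E0, i_peak):
            """Ideal Nernstian surface-bound voltammetric peak (one electron)."""
            x = np.exp(F_RT * (E - E0))
            return 4.0 * i_peak * x / (1.0 + x) ** 2  # normalized so max = i_peak

        # Synthetic noisy voltammogram with a "true" formal potential of -0.400 V.
        rng = np.random.default_rng(5)
        E = np.linspace(-0.7, -0.1, 200)
        i_obs = surface_wave(E, -0.400, 1.0) + 0.03 * rng.standard_normal(E.size)

        (E0_fit, _), cov = curve_fit(surface_wave, E, i_obs, p0=(-0.3, 0.8))
        print(f"fitted E0' = {1000 * E0_fit:.1f} mV (true -400.0 mV), "
              f"+/- {1000 * np.sqrt(cov[0, 0]):.1f} mV")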

  5. [A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].

    PubMed

    Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang

    2015-05-01

    To construct a quantitative risk assessment model of salmonella on carcasses in a poultry slaughterhouse and to find effective interventions to reduce salmonella contamination. We constructed a modular process risk model (MPRM) from evisceration to chilling in an Excel sheet, using data on the process parameters in poultry and the Salmonella concentration surveillance of Jinan in 2012. The MPRM was simulated with @Risk software. The concentration of salmonella on carcasses after chilling, as calculated by the model, was 1.96 MPN/g. The sensitivity analysis indicated that the correlation coefficients for the concentration of salmonella after defeathering and in the chilling pool were 0.84 and 0.34, respectively; these were the primary factors determining the concentration of salmonella on carcasses after chilling. The study provides a quantitative assessment model structure for salmonella on carcasses in poultry slaughterhouses. Risk managers could control the contamination of salmonella on carcasses after chilling by reducing the concentration of salmonella after defeathering and in the chilling pool.
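
    The modular structure of an MPRM can be mimicked with plain Monte Carlo in place of the @Risk spreadsheet. In the sketch below the carcass concentration entering from defeathering is propagated through per-module stochastic change factors; every distribution is an invented placeholder, not the Jinan surveillance data.

        import numpy as np

        rng = np.random.default_rng(6)
        n_iter = 100_000

        # Log10 concentration on carcasses after defeathering (placeholder prior).
        log_c = rng.normal(loc=1.0, scale=0.5, size=n_iter)  # log10 MPN/g

        # Each module shifts the log10 concentration by a stochastic factor
        # (placeholder means/sds; negative = removal, positive = contamination):
        modules = {
            "evisceration": rng.normal(+0.3, 0.2, n_iter),
            "washing":      rng.normal(-0.5, 0.2, n_iter),
            "chilling":     rng.normal(-0.8, 0.3, n_iter),
        }
        for name, delta in modules.items():
            log_c += delta

        c = 10.0 ** log_c
        print(f"mean concentration after chilling: {c.mean():.2f} MPN/g")
        print(f"95th percentile: {np.percentile(c, 95):.2f} MPN/g")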

  6. Theoretical aspect of suitable spatial boundary condition specified for adjoint model on limited area

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Wu, Rongsheng

    2001-12-01

    Theoretical argumentation for the so-called suitable spatial condition is conducted with the aid of a homotopy framework to demonstrate that the proposed boundary condition guarantees that the over-specification of boundary conditions resulting from an adjoint model on a limited area is no longer an issue, while preserving well-posedness and optimal character in the boundary setting. The ill-posedness of an over-specified spatial boundary condition is, in a sense, inevitable in an adjoint model, since data assimilation processes have to adapt to prescribed observations that are over-specified at the spatial boundaries of the modeling domain. From a pragmatic standpoint, the theoretical framework of our proposed condition for spatial boundaries can be reduced to a hybrid formulation of a nudging filter, a radiation condition taking account of ambient forcing, and a Dirichlet-type boundary condition compatible with the observations prescribed in the data assimilation procedure. All of these treatments are, no doubt, very familiar to mesoscale modelers.

  7. Education, Labour Market and Human Capital Models: Swedish Experiences and Theoretical Analyses.

    ERIC Educational Resources Information Center

    Sohlman, Asa

    An empirical study concerning development of the Swedish educational system from a labor market point of view, and a theoretical study on human capital models are discussed. In "Education and Labour Market; The Swedish Experience 1900-1975," attention is directed to the following concerns: the official educational policy regarding…

  8. Quantitative biology: where modern biology meets physical sciences.

    PubMed

    Shekhar, Shashank; Zhu, Lian; Mazutis, Linas; Sgro, Allyson E; Fai, Thomas G; Podolski, Marija

    2014-11-05

    Quantitative methods and approaches have been playing an increasingly important role in cell biology in recent years. They involve making accurate measurements to test a predefined hypothesis in order to compare experimental data with predictions generated by theoretical models, an approach that has benefited physicists for decades. Building quantitative models in experimental biology not only has led to discoveries of counterintuitive phenomena but has also opened up novel research directions. To make the biological sciences more quantitative, we believe a two-pronged approach needs to be taken. First, graduate training needs to be revamped to ensure biology students are adequately trained in physical and mathematical sciences and vice versa. Second, students of both the biological and the physical sciences need to be provided adequate opportunities for hands-on engagement with the methods and approaches necessary to be able to work at the intersection of the biological and physical sciences. We present the annual Physiology Course organized at the Marine Biological Laboratory (Woods Hole, MA) as a case study for a hands-on training program that gives young scientists the opportunity not only to acquire the tools of quantitative biology but also to develop the necessary thought processes that will enable them to bridge the gap between these disciplines. © 2014 Shekhar, Zhu, Mazutis, Sgro, Fai, and Podolski. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  9. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to amplify its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model of quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  10. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY
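
    The key computational move, projecting a string of variant genotypes onto a smooth basis and testing the projection with an F-test, can be sketched compactly. Below, a Legendre polynomial basis stands in for the B-spline and functional-PCA bases of the paper, and the genotypes and trait are entirely synthetic.

        import numpy as np
        from scipy.stats import f as f_dist

        rng = np.random.default_rng(7)
        n, m, K = 500, 40, 6                 # subjects, variants, basis functions

        pos = np.sort(rng.random(m))         # variant positions scaled to [0, 1]
        G = rng.binomial(2, 0.05, (n, m)).astype(float)   # rare-variant genotypes

        # Smooth basis over the region (stand-in for the paper's B-splines).
        B = np.polynomial.legendre.legvander(2 * pos - 1, K - 1)  # shape (m, K)

        beta_true = np.sin(2 * np.pi * pos)  # smooth genetic effect function
        y = 0.5 * (G @ beta_true) + rng.standard_normal(n)

        def rss(X):
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.sum((y - X @ coef) ** 2)

        X_null = np.ones((n, 1))                      # intercept-only null model
        X_full = np.column_stack([X_null, G @ B])     # adds the functional term
        F = ((rss(X_null) - rss(X_full)) / K) / (rss(X_full) / (n - K - 1))
        print(f"F = {F:.2f}, p = {f_dist.sf(F, K, n - K - 1):.2e}")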

  11. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.
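
    Operationally, the workflow regresses measured responses on descriptor vectors and validates on held-out analytes. The sketch below uses synthetic descriptors and plain least squares; the Genetic Function Approximation used in the study additionally selects which descriptors enter the model, a step omitted here.

        import numpy as np

        rng = np.random.default_rng(8)
        n_train, n_test, n_desc = 30, 5, 8   # analytes and descriptors (arbitrary)

        # Synthetic descriptor matrices and a hidden linear structure-activity law.
        X_train = rng.standard_normal((n_train, n_desc))
        X_test = rng.standard_normal((n_test, n_desc))
        w_true = rng.standard_normal(n_desc)
        y_train = X_train @ w_true + 0.1 * rng.standard_normal(n_train)

        # Least-squares QSAR model for one sensor in the array.
        A = np.column_stack([np.ones(n_train), X_train])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

        # Predict the response (change in film resistance) for unseen analytes.
        y_pred = np.column_stack([np.ones(n_test), X_test]) @ coef
        for i, (p, t) in enumerate(zip(y_pred, X_test @ w_true)):
            print(f"test analyte {i}: predicted {p:+.2f}, actual {t:+.2f}")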

  12. Using Mathematics, Mathematical Applications, Mathematical Modelling, and Mathematical Literacy: A Theoretical Study

    ERIC Educational Resources Information Center

    Mumcu, Hayal Yavuz

    2016-01-01

    The purpose of this theoretical study is to explore the relationships between the concepts of using mathematics in the daily life, mathematical applications, mathematical modelling, and mathematical literacy. As these concepts are generally taken as independent concepts in the related literature, they are confused with each other and it becomes…

  13. [Social determinants of odontalgia in epidemiological studies: theoretical review and proposed conceptual model].

    PubMed

    Bastos, João Luiz Dornelles; Gigante, Denise Petrucci; Peres, Karen Glazer; Nedel, Fúlvio Borges

    2007-01-01

    The epidemiological literature has been limited by the absence of a theoretical framework reflecting the complexity of the causal mechanisms underlying health phenomena and disease conditions. In the field of oral epidemiology, such a lack of theory also prevails, since dental caries (the leading topic in oral research) has often been studied from a biological and reductionist viewpoint. One of the most important consequences of dental caries is dental pain (odontalgia), which has received little attention in studies with sophisticated theoretical models and powerful designs to establish causal relationships. The purpose of this study is to review the scientific literature on the determinants of odontalgia and to discuss theories proposed for the explanation of the phenomenon. Conceptual models and emerging theories on the social determinants of oral health are revised, in an attempt to build links with the bio-psychosocial pain model, proposing a more elaborate causal model for odontalgia. The framework suggests causal pathways between social structure and oral health through material, psychosocial and behavioral pathways. Aspects of the social structure are highlighted in order to relate them to odontalgia, stressing their importance in discussions of causal relationships in oral health research.

  14. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    PubMed

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  15. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...

  16. Exploring the relationship between volunteering and hospice sustainability in the UK: a theoretical model.

    PubMed

    Scott, Ros; Jindal-Snape, Divya; Manwaring, Gaye

    2018-05-02

    To explore the relationship between volunteering and the sustainability of UK voluntary hospices. A narrative literature review was conducted to inform the development of a theoretical model. Eight databases were searched: CINAHL (EBSCO), British Nursing Index, Intute: Health and Life Sciences, ERIC, SCOPUS, ASSIA (CSA), Cochrane Library and Google Scholar. A total of 90 documents were analysed. Emerging themes included the importance of volunteering to the hospice economy and workforce, the quality of services, and public and community support. Findings suggest that hospice sustainability is dependent on volunteers; however, the supply and retention of volunteers is affected by internal and external factors. A theoretical model was developed to illustrate the relationship between volunteering and hospice sustainability. It demonstrates the factors necessary for hospice sustainability and the reciprocal impact that these factors and volunteering have on each other. The model has a practical application as an assessment framework and strategic planning tool.

  17. Developing a theoretical model and questionnaire survey instrument to measure the success of electronic health records in residential aged care.

    PubMed

    Yu, Ping; Qian, Siyu

    2018-01-01

    Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables from the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two further variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables (training, self-efficacy, system quality and information quality) on the net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time.

  18. Multi-scale Modeling of Chromosomal DNA in Living Cells

    NASA Astrophysics Data System (ADS)

    Spakowitz, Andrew

    The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are both sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts are capable of quantitatively capturing aspects of behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the critical physical contributions that are relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).

  19. Theoretical model to explain the problem-solving process in physics

    NASA Astrophysics Data System (ADS)

    Lopez, Carlos

    2011-03-01

    This work reports a theoretical model developed with the aim of explaining the mental mechanisms of knowledge building during the problem-solving process in physics, using a hybrid approach of assimilation and formation of concepts. The model has been termed conceptual chains and represents graphic diagrams of conceptual dependency, which have yielded information about the background knowledge required during the learning process, as well as about the formation of diverse structures that correspond to distinct forms of networking concepts. Additionally, the conceptual constructs of the model have been classified according to five types of knowledge. Evidence was found of the influence of these structures, as well as of the distinct types of knowledge, on the degree of difficulty of the problems. I am grateful to Laureate International Universities, Baltimore, MD, USA, for the financing granted for the accomplishment of this work.

  20. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I_sc, V_oc, FF and η under variation of one or more of the geometrical and material parameters and 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).
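
    In the superposition limit, the illuminated Ebers-Moll description of the active junction reduces to the familiar solar cell equation, which is enough to reproduce the headline quantities. The parameter values below are illustrative orders of magnitude, not the paper's fitted TJC parameters, and the Ebers-Moll coupling between the two junctions is dropped for brevity.

        import numpy as np

        k_B, q, T = 1.380649e-23, 1.602176634e-19, 300.0
        VT = k_B * T / q                 # thermal voltage, ~25.9 mV at 300 K

        I_L = 35e-3                      # photocurrent, A/cm^2 (illustrative)
        I_0 = 1e-12                      # saturation current, A/cm^2 (illustrative)

        def cell_current(V):
            """Photocurrent minus the forward diode current of the active junction."""
            return I_L - I_0 * (np.exp(V / VT) - 1.0)

        V_oc = VT * np.log(1.0 + I_L / I_0)          # open-circuit voltage
        V = np.linspace(0.0, V_oc, 400)
        P = V * cell_current(V)                      # output power density
        i_mp = P.argmax()
        FF = P[i_mp] / (V_oc * I_L)                  # fill factor
        print(f"V_oc = {V_oc:.3f} V, V_mp = {V[i_mp]:.3f} V, FF = {FF:.3f}")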

  1. The Roy Adaptation Model: A Theoretical Framework for Nurses Providing Care to Individuals With Anorexia Nervosa.

    PubMed

    Jennings, Karen M

    Using a nursing theoretical framework to understand, elucidate, and propose nursing research is fundamental to knowledge development. This article presents the Roy Adaptation Model as a theoretical framework to better understand individuals with anorexia nervosa during acute treatment, and the role of nursing assessments and interventions in the promotion of weight restoration. Nursing assessments and interventions situated within the Roy Adaptation Model take into consideration how weight restoration does not occur in isolation but rather reflects an adaptive process within external and internal environments, and has the potential for more holistic care.

  2. Allostatic load: A theoretical model for understanding the relationship between maternal posttraumatic stress disorder and adverse birth outcomes.

    PubMed

    Li, Yang; Rosemberg, Marie-Anne Sanon; Seng, Julia S

    2018-07-01

    Adverse birth outcomes such as preterm birth and low birth weight are significant public health concerns and contribute to neonatal morbidity and mortality. Studies have increasingly been exploring the predictive effects of maternal posttraumatic stress disorder (PTSD) on adverse birth outcomes. However, the biological mechanisms by which maternal PTSD affects birth outcomes are not well understood. Allostatic load refers to the cumulative dysregulation of multiple physiological systems in response to chronic stress at multiple social-ecological levels. Allostatic load has been well documented in relation to both chronic stress and adverse health outcomes in non-pregnant populations. However, the mediating role of allostatic load is less understood when it comes to maternal PTSD and adverse birth outcomes. To propose a theoretical model that depicts how allostatic load could mediate the impact of maternal PTSD on birth outcomes. We followed the theory synthesis procedures described by Walker and Avant (2011), including specifying focal concepts, identifying related factors and relationships, and constructing an integrated representation. We first present a theoretical overview of allostatic load theory and four other relevant theoretical models. Then we provide a brief narrative review of literature that empirically supports the propositions of the integrated model. Finally, we describe our theoretical model. The theoretical model synthesized has the potential to advance perinatal research by delineating multiple biomarkers to be used in future studies. After it is well validated, it could be utilized as the theoretical basis for health care professionals to identify high-risk women by evaluating their experiences of psychosocial and traumatic stress and to develop and evaluate service delivery and clinical interventions that might modify maternal perceptions or experiences of stress and eliminate their impacts on adverse birth outcomes. Copyright

  3. Game-Theoretic Models of Information Overload in Social Networks

    NASA Astrophysics Data System (ADS)

    Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin

    We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
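
    Existence claims like these can be checked mechanically on small instances by enumerating strategy profiles and testing each for profitable unilateral deviations. The payoff function below is an invented stand-in for the paper's followership utilities (own rate rewarded, total update load penalized), over a coarse grid of update rates.

        import itertools

        rates = [1, 2, 3, 4]    # discretized update rates available to celebrities
        n_celebs = 2

        def payoff(profile, i, quality=(1.0, 0.8)):
            # Invented utility: own rate rewarded in proportion to quality,
            # penalized by the total load followers must wade through.
            return quality[i] * profile[i] - 0.15 * profile[i] * sum(profile)

        def pure_nash_equilibria():
            eqs = []
            for profile in itertools.product(rates, repeat=n_celebs):
                stable = all(
                    payoff(profile, i) >= payoff(profile[:i] + (r,) + profile[i+1:], i)
                    for i in range(n_celebs) for r in rates)
                if stable:
                    eqs.append(profile)
            return eqs

        print("pure-strategy equilibria:", pure_nash_equilibria() or "none")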

  4. Python for Information Theoretic Analysis of Neural Data

    PubMed Central

    Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano

    2008-01-01

    Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
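
    A flavor of the limited-sampling problem mentioned above: the plug-in entropy estimator is biased downward for small samples, and analytic corrections such as Miller-Madow recover part of the bias. The sketch below uses synthetic response-count data and is not one of the toolbox's actual routines.

        import numpy as np

        rng = np.random.default_rng(9)

        # "True" distribution over 8 response bins and its exact entropy.
        p = np.array([0.30, 0.22, 0.15, 0.12, 0.09, 0.06, 0.04, 0.02])
        H_true = -np.sum(p * np.log2(p))

        def plugin_entropy(counts):
            q = counts[counts > 0] / counts.sum()
            return -np.sum(q * np.log2(q))

        for n_trials in (20, 100, 1000):
            naive, corrected = [], []
            for _ in range(500):
                counts = np.bincount(rng.choice(len(p), n_trials, p=p),
                                     minlength=len(p))
                h = plugin_entropy(counts)
                k = np.count_nonzero(counts)   # number of occupied bins
                naive.append(h)
                # Miller-Madow correction, converted from nats to bits.
                corrected.append(h + (k - 1) / (2 * n_trials * np.log(2)))
            print(f"N = {n_trials:4d}: plug-in = {np.mean(naive):.3f}, "
                  f"Miller-Madow = {np.mean(corrected):.3f}, true = {H_true:.3f} bits")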

  5. Dissecting Embryonic Stem Cell Self-Renewal and Differentiation Commitment from Quantitative Models.

    PubMed

    Hu, Rong; Dai, Xianhua; Dai, Zhiming; Xiang, Qian; Cai, Yanning

    2016-10-01

    To model quantitatively embryonic stem cell (ESC) self-renewal and differentiation by computational approaches, we developed a unified mathematical model for gene expression involved in cell fate choices. Our quantitative model comprised ESC master regulators and lineage-specific pivotal genes. It took the factors of multiple pathways as input and computed expression as a function of intrinsic transcription factors, extrinsic cues, epigenetic modifications, and antagonism between ESC master regulators and lineage-specific pivotal genes. In the model, the differential equations for the expression of genes involved in cell fate choices were established from the regulatory relationships, according to the transcription and degradation rates. We applied this model to murine ESC self-renewal and differentiation commitment and found that it modeled the expression patterns with good accuracy. Our model analysis revealed that the murine ESC state was an attractor in culture and that differentiation was predominantly caused by antagonism between ESC master regulators and lineage-specific pivotal genes. Moreover, antagonism among lineages played a critical role in lineage reprogramming. Our results also uncovered that the ordered expression alteration of ESC master regulators over time had a central role in ESC differentiation fates. Our computational framework is generally applicable to most cell-type maintenance and lineage reprogramming problems.
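
    The antagonism mechanism emphasized above has a standard minimal form: two mutually repressing regulators. The sketch below integrates a generic toggle-switch ODE (illustrative rate constants, not the study's full equation set) and shows that ESC-biased and lineage-biased initial conditions settle into opposite attractors.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Mutual antagonism between an ESC master regulator (x) and a
        # lineage-specific pivotal gene (y): each represses the other's synthesis.
        a, n_hill, k = 2.0, 3.0, 1.0   # synthesis rate, Hill exponent, degradation

        def toggle(t, z):
            x, y = z
            return [a / (1.0 + y**n_hill) - k * x,
                    a / (1.0 + x**n_hill) - k * y]

        for x0, y0, label in [(1.5, 0.2, "ESC-biased start"),
                              (0.2, 1.5, "lineage-biased start")]:
            sol = solve_ivp(toggle, (0.0, 50.0), [x0, y0], rtol=1e-8)
            x_end, y_end = sol.y[:, -1]
            state = "self-renewal" if x_end > y_end else "differentiation"
            print(f"{label}: x = {x_end:.2f}, y = {y_end:.2f} -> {state} attractor")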

  6. Affective Change in Psychodynamic Psychotherapy: Theoretical Models and Clinical Approaches to Changing Emotions.

    PubMed

    Subic-Wrana, Claudia; Greenberg, Leslie S; Lane, Richard D; Michal, Matthias; Wiltink, Jörg; Beutel, Manfred E

    2016-09-01

    Affective change has been considered the hallmark of therapeutic change in psychoanalysis. Psychoanalytic writers have begun to incorporate theoretically the advanced understanding of emotional processing and transformation from the affective neurosciences. We ask whether this theoretical advancement is reflected in treatment techniques addressing the processing of emotion. We review psychoanalytic models and treatment recommendations for maladaptive affect processing in the light of a neuroscientifically informed model of achieving psychotherapeutic change by activation and reconsolidation of emotional memory. Emotions tend to be treated like other mental contents, resulting in a lack of specific psychodynamic techniques for working with emotions. Manualized technical modifications addressing affect regulation have been tested successfully in patients with personality pathology, but not in psychodynamic treatments of axis I disorders. Emotional memories need to be activated in order to be modified; we therefore propose including techniques in psychodynamic therapy that stimulate emotional experience.

  7. A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes

    NASA Astrophysics Data System (ADS)

    Olsen, Seth

    2012-04-01

    We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.

  8. Mechanisms of plasma-assisted catalyzed growth of carbon nanofibres: a theoretical modeling

    NASA Astrophysics Data System (ADS)

    Gupta, R.; Sharma, S. C.; Sharma, R.

    2017-02-01

    A theoretical model is developed to study the nucleation and catalytic growth of carbon nanofibers (CNFs) in a plasma environment. The model includes the charging of CNFs, the kinetics of the plasma species (neutrals, ions and electrons), plasma pretreatment of the catalyst film, and various processes unique to a plasma-exposed catalyst surface, such as adsorption of neutrals, thermal dissociation of neutrals, ion-induced dissociation, interaction between neutral species, stress exerted by the growing graphene layers, and the growth of CNFs. Numerical calculations are carried out for typical glow discharge plasma parameters. It is found that the growth rate of CNFs decreases with the catalyst nanoparticle size. In addition, the effects of hydrogen on the catalyst nanoparticle size, CNF tip diameter, CNF growth rate, and the tilt angle of the graphene layers to the fiber axis are investigated. Moreover, it is also found that the length of CNFs increases with hydrocarbon number density. Our theoretical findings are in good agreement with experimental observations and can be extended to enhance the field emission characteristics of CNFs.

  9. Quantitating Antibody Uptake In Vivo: Conditional Dependence on Antigen Expression Levels

    PubMed Central

    Thurber, Greg M.; Weissleder, Ralph

    2010-01-01

    Purpose Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in the diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake, with some reports indicating a relationship between uptake and expression and others showing no correlation. Procedures Using a cell line with high EpCAM expression and moderate EGFR expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. Results As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than that of larger vascularized tumors. Conclusions These results are consistent with the prediction that high-affinity antibody uptake is dependent on antigen expression levels at saturating doses and on delivery at subsaturating doses. For any probe, it is imperative to understand whether quantitative uptake is a measure of biomarker expression or of transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes. PMID:20809210

  10. Quantitating antibody uptake in vivo: conditional dependence on antigen expression levels.

    PubMed

    Thurber, Greg M; Weissleder, Ralph

    2011-08-01

    Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in the diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake, with some reports indicating a relationship between uptake and expression and others showing no correlation. Using a cell line with high epithelial cell adhesion molecule expression and moderate epidermal growth factor receptor expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than that of larger vascularized tumors. These results are consistent with the prediction that high-affinity antibody uptake is dependent on antigen expression levels at saturating doses and on delivery at subsaturating doses. For any probe, it is imperative to understand whether quantitative uptake is a measure of biomarker expression or of transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes.
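
    A hedged toy calculation can make the conditional dependence above concrete: tumoral uptake is taken as the smaller of what the vasculature can deliver and what the available antigen can bind. The function and all numbers below are illustrative assumptions, not the authors' pharmacokinetic model.

    def antibody_uptake(dose, delivery_capacity, antigen_sites):
        """Bound antibody under delivery- vs binding-site-limited regimes."""
        delivered = min(dose, delivery_capacity)   # transport-limited supply
        return min(delivered, antigen_sites)       # binding-site-limited capture

    for dose in (1.0, 10.0, 100.0):                # subsaturating to saturating
        high = antibody_uptake(dose, delivery_capacity=5.0, antigen_sites=50.0)
        low = antibody_uptake(dose, delivery_capacity=5.0, antigen_sites=2.0)
        print(f"dose={dose:6.1f}  high-antigen uptake={high:4.1f}  low-antigen uptake={low:4.1f}")

    At low doses both lines show identical, delivery-limited uptake; only at saturating doses does expression level separate them, mirroring the abstract's conclusion.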

  11. A theoretical physicist's journey into biology: from quarks and strings to cells and whales.

    PubMed

    West, Geoffrey B

    2014-10-08

    Biology will almost certainly be the predominant science of the twenty-first century but, for it to become successfully so, it will need to embrace some of the quantitative, analytic, predictive culture that has made physics so successful. This includes the search for underlying principles, systemic thinking at all scales, the development of coarse-grained models, and closer ongoing collaboration between theorists and experimentalists. This article presents a personal, slightly provocative, perspective of a theoretical physicist working in close collaboration with biologists at the interface between the physical and biological sciences.

  12. Modeling Theory of Mind and Cognitive Appraisal with Decision-Theoretic Agents

    DTIC Science & Technology

    2011-04-07

    Key factors include the following. Consistency: people expect, prefer, and are driven to maintain consistency, and to avoid cognitive dissonance, between beliefs ... capacity in appraisal and social emotions, as well as arguing for a uniform process for emotion and cognition.

  13. Qualitative and Quantitative Distinctions in Personality Disorder

    PubMed Central

    Wright, Aidan G. C.

    2011-01-01

    The “categorical-dimensional debate” has catalyzed a wealth of empirical advances in the study of personality pathology. However, this debate is merely one articulation of a broader conceptual question regarding whether to define and describe psychopathology as a quantitatively extreme expression of normal functioning or as qualitatively distinct in its process. In this paper I argue that dynamic models of personality (e.g., object-relations, cognitive-affective processing system) offer the conceptual scaffolding to reconcile these seemingly incompatible approaches to characterizing the relationship between normal and pathological personality. I propose that advances in personality assessment that sample behavior and experiences intensively provide the empirical techniques, whereas interpersonal theory offers an integrative theoretical framework, for accomplishing this goal. PMID:22804676

  14. Comparison of Quantitative and Qualitative Research Traditions: Epistemological, Theoretical, and Methodological Differences

    ERIC Educational Resources Information Center

    Yilmaz, Kaya

    2013-01-01

    There has been much discussion about quantitative and qualitative approaches to research in different disciplines. In the behavioural and social sciences, these two paradigms are compared to reveal their relative strengths and weaknesses. But the debate about both traditions has commonly taken place in academic books. It is hard to find an article…

  15. Wires in the soup: quantitative models of cell signaling

    PubMed Central

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, and unraveling them requires sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  16. An assessment of some theoretical models used for the calculation of the refractive index of InxGa1-xAs

    NASA Astrophysics Data System (ADS)

    Engelbrecht, J. A. A.

    2018-04-01

    Theoretical models used for the determination of the refractive index of InxGa1-xAs are reviewed and compared. Attention is drawn to some problems experienced with some of the models. The models are also extended to the mid-infrared region of the electromagnetic spectrum. Theoretical results in the mid-infrared region are then compared to previously published experimental results.

  17. On the Usefulness of Narratives: An Interdisciplinary Review and Theoretical Model.

    PubMed

    Shaffer, Victoria A; Focella, Elizabeth S; Hathaway, Andrew; Scherer, Laura D; Zikmund-Fisher, Brian J

    2018-04-19

    How can we use stories from other people to promote better health experiences, improve judgments about health, and increase the quality of medical decisions without introducing bias, systematically persuading the listeners to change their attitudes, or altering behaviors in nonoptimal ways? More practically, should narratives be used in health education, promotion, or behavior change interventions? In this article, we address these questions by conducting a narrative review of a diverse body of literature on narratives from several disciplines to gain a better understanding of what narratives do, including their role in communication, engagement, recall, persuasion, and health behavior change. We also review broad theories about information processing and persuasion from psychology, and more specific models about narrative messaging found in the health communication and marketing literatures, to provide insight into the processes by which narratives have their effect on health behavior. To address major gaps in our theoretical understanding about how narratives work and what effects they will have on health behavior, we propose the Narrative Immersion Model, whose goal is to identify the parameters that predict the specific impact of a particular narrative (e.g., persuade, inform, or comfort) based on the type of narrative message (e.g., process, experience, or outcome narrative). Further, the Narrative Immersion Model describes the magnitude of the effect as increasing through successive layers of engagement with the narrative: interest, identification, and immersion. Finally, the Narrative Immersion Model identifies characteristics of the narrative intervention that encourage greater immersion within a given narrative. We believe there are important communication gaps in areas of behavioral medicine that could be addressed with narratives; however, more work is needed in order to employ narrative messaging systematically. The Narrative Immersion Model

  18. A New Theoretical Approach to Postsecondary Student Disability: Disability-Diversity (Dis)Connect Model

    ERIC Educational Resources Information Center

    Aquino, Katherine C.

    2016-01-01

    Disability is often viewed as an obstacle to postsecondary inclusion, but not a characteristic of student diversity. Additionally, current theoretical frameworks isolate disability from other student diversity characteristics. In response, a new conceptual framework, the Disability-Diversity (Dis)Connect Model (DDDM), was created to address…

  19. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, predicts the PDFs of scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  20. Development of theoretical models of integrated millimeter wave antennas

    NASA Technical Reports Server (NTRS)

    Yngvesson, K. Sigfrid; Schaubert, Daniel H.

    1991-01-01

    Extensive radiation patterns for Linear Tapered Slot Antenna (LTSA) single elements are presented. The directivity of LTSA elements is predicted correctly by taking the cross-polarized pattern into account. A moment method program predicts radiation patterns for air LTSAs in excellent agreement with experimental data. A moment method program was also developed for the task of LTSA array modeling. Computations performed with this program are in excellent agreement with published results for dipole and monopole arrays, and with waveguide simulator experiments for more complicated structures. Empirical modeling of LTSA arrays demonstrated that the maximum theoretical element gain can be obtained. Formulations were also developed for calculating the aperture efficiency of LTSA arrays used in reflector systems. It was shown that LTSA arrays used in multibeam systems have a considerable advantage in terms of higher packing density compared with waveguide feeds. A conversion loss of 10 dB was demonstrated at 35 GHz.

  1. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The model comprises visual, kinesthetic, and social logic aspects. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  2. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
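
    The compartment structure described above lends itself to a compact ODE sketch. The snippet below is a minimal illustration in the spirit of the model: hive bees H are recruited to foraging F more slowly when foragers are already abundant (social inhibition), and foragers die at rate m. The parameter values (L, w, alpha, sigma) and the clamping of recruitment at zero are this sketch's own assumptions, not the paper's calibrated model.

    from scipy.integrate import solve_ivp

    L, w, alpha, sigma = 2000.0, 27000.0, 0.25, 0.75   # eclosion rate and recruitment constants (assumed)

    def colony(t, y, m):
        H, F = y
        N = H + F
        recruit = max(alpha - sigma * F / N, 0.0)   # recruitment to foraging, inhibited by foragers
        dH = L * N / (w + N) - H * recruit          # eclosion minus recruitment
        dF = H * recruit - m * F                    # recruitment minus forager death
        return [dH, dF]

    for m in (0.24, 0.40):                          # below vs above a critical forager death rate
        sol = solve_ivp(colony, (0, 300), [16000, 8000], args=(m,))
        print(f"m={m}: colony size after 300 days ~ {sol.y[0, -1] + sol.y[1, -1]:.0f} bees")

    With the lower death rate the toy colony settles to a stable size; with the higher one it declines steadily, the threshold behavior the abstract describes.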

  3. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  4. Droplet size in flow: Theoretical model and application to polymer blends

    NASA Astrophysics Data System (ADS)

    Fortelný, Ivan; Jůza, Josef

    2017-05-01

    The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends, where the droplet size is determined by a dynamic equilibrium between droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. The dependence of the coalescence probability, Pc, on system parameters, following from recent theories, is considered, and an approximate equation for Pc in a system with low polydispersity in droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combining these equations predicts a realistic dependence of R on the volume fraction of dispersed droplets, φ. The theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.

  5. Advanced quantitative measurement methodology in physics education research

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    The ultimate goal of physics education research (PER) is to develop a theoretical framework to understand and improve the learning process. In this journey of discovery, assessment serves as our headlamp and alpenstock. It sometimes detects signals in student mental structures, and sometimes reveals the difference between expert and novice understanding. Quantitative assessment is an important area in PER. Developing research-based, effective assessment instruments and making meaningful inferences based on these instruments have always been important goals of the PER community. Quantitative studies are often conducted to provide bases for test development and result interpretation, and statistics are frequently used in such studies. The selection of statistical methods, and the interpretation of the results they produce, must be connected to the educational context; in making this connection, issues of educational models are often raised. Many widely used statistical methods make no assumptions about the mental structure of subjects, nor do they provide explanations tailored to an educational audience. Other methods do consider mental structure and are tailored to provide strong connections between statistics and education; these methods often involve model assumptions and parameter estimation, and are mathematically complicated. The dissertation provides a practical view of some advanced quantitative assessment methods. The common feature of these methods is that they all make educational/psychological model assumptions beyond the minimum mathematical model. The purpose of the study is to compare these advanced methods with purely mathematical methods, based on the performance of the two types of methods in physics education settings. In particular, the comparison uses both physics content assessments and scientific ability assessments. The dissertation includes three

  6. On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification

    NASA Astrophysics Data System (ADS)

    Aygün, Eser; Oommen, B. John; Cataltepe, Zehra

    Syntactic methods in pattern recognition have been used extensively in bioinformatics, and in particular, in the analysis of gene and protein expressions, and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.

  7. Stomatal regulation based on competition for water, stochastic rainfall, and xylem hydraulic vulnerability - a new theoretical model

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Duursma, R.; Farrior, C.; Medlyn, B. E.

    2016-12-01

    Stomata control the exchange of soil water for atmospheric CO2, one of the most important resource trade-offs for plants. This trade-off has been studied extensively, but not in the context of competition. Based on the theory of evolutionarily stable strategies, we search for the uninvadable (ESS) response of stomatal conductance to soil water content under stochastic rainfall, under which the dominant plant population can never be invaded by rare mutants with higher fitness in the competition for water. In this study, we define fitness as the difference between the long-term average photosynthetic carbon gain and a carbon cost of stomatal opening. This cost has traditionally been treated as an unknown constant. Here we extend the framework by taking the cost to be the energy required for refilling embolized xylem. With regard to the refilling process, we explore two questions: (1) to what extent embolized xylem vessels can be repaired via refilling; and (2) whether refilling is immediate or delayed relative to the formation of xylem embolism. We compare various assumptions across five scenarios and find that the ESS exists only if the xylem damage can be repaired completely. With this ESS, we then estimate annual vegetation photosynthesis and water consumption and compare them with empirical results. In conclusion, this study provides an insight different from existing empirical and mechanistic models, as well as from theoretical models based on optimization theory. In addition, because the result is a simple quantitative relation between stomatal conductance and soil water content, it can easily be incorporated into other vegetation function models.

  8. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    NASA Astrophysics Data System (ADS)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  9. General Machine Learning Model, Review, and Experimental-Theoretic Study of Magnolol Activity in Enterotoxigenic Induced Oxidative Stress.

    PubMed

    Deng, Yanli; Liu, Yong; Tang, Shaoxun; Zhou, Chuanshe; Han, Xuefeng; Xiao, Wenjun; Pastur-Romay, Lucas Anton; Vazquez-Naya, Jose Manuel; Loureiro, Javier Pereira; Munteanu, Cristian R; Tan, Zhiliang

    2017-01-01

    This study evaluated the antioxidative effects of magnolol in a mouse model induced by Enterotoxigenic Escherichia coli (E. coli, ETEC). All experimental mice were treated equally with ETEC suspensions (3.45×10^9 CFU/ml) after oral administration of magnolol for 7 days at doses of 0, 100, 300 and 500 mg/kg Body Weight (BW), respectively. The oxidative metabolites and antioxidases were determined for each sample (organism of mouse): Malondialdehyde (MDA), Nitric Oxide (NO), Glutathione (GSH), Myeloperoxidase (MPO), Catalase (CAT), Superoxide Dismutase (SOD), and Glutathione Peroxidase (GPx). In addition, we determined the corresponding mRNA expressions of CAT, SOD and GPx, as well as the Total Antioxidant Capacity (T-AOC). The experiment was completed with a theoretical study predicting a series of 79 ChEMBL activities of magnolol with 47 proteins in 18 organisms, using a Quantitative Structure-Activity Relationship (QSAR) classifier based on the Moving Averages (MAs) of Rcpi descriptors under three types of experimental conditions (biological activity with specific units, protein target, and organism). Six machine learning methods from the Weka software were tested, and the best QSAR classification model was provided by Random Forest, with a True Positive Rate (TPR) of 0.701 and an Area Under the Receiver Operating Characteristic curve (AUROC) of 0.790 (test subset, 10-fold cross-validation). The model predicts whether new ChEMBL activities are greater or lower than the average values for the magnolol targets in different organisms.

  10. Thermal conductivity of molten salt mixtures: Theoretical model supported by equilibrium molecular dynamics simulations.

    PubMed

    Gheribi, Aïmen E; Chartrand, Patrice

    2016-02-28

    A theoretical model for the description of the thermal conductivity of molten salt mixtures as a function of composition and temperature is presented. The model is derived from classical kinetic theory and requires, for its parametrization, only information on the thermal conductivity of the pure compounds; in this sense, the model is predictive. For most molten salt mixtures, no experimental data on thermal conductivity are available in the literature. This is a hindrance for many industrial applications (in particular for thermal energy storage technologies) as well as an obvious barrier for the validation of the theoretical model. To alleviate this lack of data, a series of equilibrium molecular dynamics (EMD) simulations has been performed on several molten chloride systems in order to determine their thermal conductivity in the entire range of composition at two different temperatures: 1200 K and 1300 K. The EMD simulations are of first-principles type, as the potentials used to describe the interactions were parametrized on the basis of first-principles electronic structure calculations. In addition to the molten chloride systems, the model predictions are also compared with a recent similar EMD study on molten fluorides and with the few reliable experimental data available in the literature. The accuracy of the proposed model is within the reported numerical and/or experimental errors.

  11. Experimental and theoretical study of magnetohydrodynamic ship models.

    PubMed

    Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe

    2017-01-01

    Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises on the topic described on the web. However, the related literature is rather specific, and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet powered ships moving on salt water. Comparisons between theory and experiment are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws and the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.

  12. Experimental and theoretical study of magnetohydrodynamic ship models

    PubMed Central

    Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe

    2017-01-01

    Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises on the topic described on the web. However, the related literature is rather specific, and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet powered ships moving on salt water. Comparisons between theory and experiment are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws and the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques. PMID:28665941
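
    The force balance at the heart of the two records above admits a back-of-the-envelope sketch: a Lorentz thrust F = I*L*B from the current I crossing the magnetic field B over an electrode length L, set against quadratic hull drag. All numbers below are illustrative assumptions for a small battery/magnet boat, not the paper's measured values.

    import math

    I, L, B = 0.5, 0.05, 0.3            # current (A), electrode length (m), field (T) -- assumed
    rho, Cd, A = 1025.0, 1.0, 1e-3      # salt water density (kg/m^3), drag coefficient, wetted area (m^2)

    thrust = I * L * B                            # Lorentz force on the water column
    v = math.sqrt(2 * thrust / (rho * Cd * A))    # speed at which drag equals thrust
    print(f"thrust = {thrust * 1e3:.2f} mN, terminal speed ~ {v * 100:.1f} cm/s")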

  13. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques are regression methods used to build prediction models; however, the accuracy of the results can be affected by many factors. In the present paper, the influence of sample surface roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. A roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of a single-roughness model.

  14. Praxis and reflexivity for interprofessional education: towards an inclusive theoretical framework for learning.

    PubMed

    Hutchings, Maggie; Scammell, Janet; Quinney, Anne

    2013-09-01

    While there is growing evidence of theoretical perspectives adopted in interprofessional education, learning theories tend to foreground the individual, focusing on psycho-social aspects of individual differences and professional identity to the detriment of considering social-structural factors at work in social practices. Conversely socially situated practice is criticised for being context-specific, making it difficult to draw generalisable conclusions for improving interprofessional education. This article builds on a theoretical framework derived from earlier research, drawing on the dynamics of Dewey's experiential learning theory and Archer's critical realist social theory, to make a case for a meta-theoretical framework enabling social-constructivist and situated learning theories to be interlinked and integrated through praxis and reflexivity. Our current analysis is grounded in an interprofessional curriculum initiative mediated by a virtual community peopled by health and social care users. Student perceptions, captured through quantitative and qualitative data, suggest three major disruptive themes, creating opportunities for congruence and disjuncture and generating a model of zones of interlinked praxis associated with professional differences and identity, pedagogic strategies and technology-mediated approaches. This model contributes to a framework for understanding the complexity of interprofessional learning and offers bridges between individual and structural factors for engaging with the enablements and constraints at work in communities of practice and networks for interprofessional education.

  15. Representing general theoretical concepts in structural equation models: The role of composite variables

    USGS Publications Warehouse

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest.

  16. Genetic constraints on adaptation: a theoretical primer for the genomics era.

    PubMed

    Connallon, Tim; Hall, Matthew D

    2018-06-01

    Genetic constraints are features of inheritance systems that slow or prohibit adaptation. Several population genetic mechanisms of constraint have received sustained attention within the field since they were first articulated in the early 20th century. This attention is now reflected in a rich, and still growing, theoretical literature on the genetic limits to adaptive change. In turn, empirical research on constraints has seen a rapid expansion over the last two decades in response to changing interests of evolutionary biologists, along with new technologies, expanding data sets, and creative analytical approaches that blend mathematical modeling with genomics. Indeed, one of the most notable and exciting features of recent progress in genetic constraints is the close connection between theoretical and empirical research. In this review, we discuss five major population genetic contexts of genetic constraint: genetic dominance, pleiotropy, fitness trade-offs between types of individuals of a population, sign epistasis, and genetic linkage between loci. For each, we outline historical antecedents of the theory, specific contexts where constraints manifest, and their quantitative consequences for adaptation. From each of these theoretical foundations, we discuss recent empirical approaches for identifying and characterizing genetic constraints, each grounded and motivated by this theory, and outline promising areas for future work.

  17. A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.

    PubMed

    Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R

    2018-01-01

    There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.
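
    The submodel-fitting strategy above can be illustrated generically with least-squares calibration against in vitro measurements. The sketch below fits a hypothetical saturating uptake law to mock data; the functional form, the data, and the parameter names (vmax, km) are placeholders, not the authors' submodels.

    import numpy as np
    from scipy.optimize import curve_fit

    def uptake(conc, vmax, km):
        """Saturating uptake rate as a function of lipid concentration (assumed form)."""
        return vmax * conc / (km + conc)

    conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # mock concentrations
    rate = np.array([0.9, 3.6, 5.2, 6.9, 8.4, 9.1])    # mock in vitro measurements

    (vmax, km), cov = curve_fit(uptake, conc, rate, p0=(10.0, 1.0))
    perr = np.sqrt(np.diag(cov))                       # 1-sigma parameter uncertainties
    print(f"vmax = {vmax:.2f} +/- {perr[0]:.2f}, km = {km:.2f} +/- {perr[1]:.2f}")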

  18. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
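
    A minimal discrete-event sketch of this idea: storage times emerge from an ordering policy and a FIFO shelf rather than being sampled independently. The policy, rates, and horizon below are illustrative assumptions, not the parameters of the lettuce case study.

    import random
    from collections import deque

    random.seed(1)
    shelf = deque()                   # FIFO shelf: each item carries its production day
    REORDER_POINT, BATCH = 5, 20      # ordering policy (assumed)
    ages_at_sale = []

    for day in range(365):
        if len(shelf) <= REORDER_POINT:           # replenish when stock runs low
            shelf.extend([day] * BATCH)           # new batch produced today
        for _ in range(random.randint(0, 8)):     # stochastic daily demand
            if shelf:
                ages_at_sale.append(day - shelf.popleft())

    print(f"mean age at sale: {sum(ages_at_sale) / len(ages_at_sale):.1f} days; "
          f"max: {max(ages_at_sale)} days")

    The right tail of the resulting age distribution, which drives microbial risk, is a product of the ordering policy itself; that coupling is exactly what independent sampling of storage times misses.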

  19. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.

    PubMed

    van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie

    2017-09-01

    This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach.

  20. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    PubMed

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used, but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified luciferase expression in BLM and HCT8/E11 transfected cancer cells and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is best performed with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence, and these inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments.

  1. Comparison of statistical and theoretical habitat models for conservation planning: the benefit of ensemble prediction

    Treesearch

    D. Todd Jones-Farrand; Todd M. Fearer; Wayne E. Thogmartin; Frank R. Thompson; Mark D. Nelson; John M. Tirpak

    2011-01-01

    Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and...

  2. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    PubMed

    Cobbs, Gary

    2012-08-16

    Numerous models for interpreting quantitative PCR (qPCR) data are present in the recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except possibly for different initial target concentrations, so qPCR curves from such solutions are analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fits to observed qPCR data than other kinetic models in the literature. They also give better estimates of
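
    The stepwise idea can be caricatured in a few lines: each cycle's efficiency is computed from the state at that cycle rather than assumed constant. In the hedged sketch below, efficiency comes from a toy primer-depletion rule, not the authors' equilibrium solutions; the initial primer pool, threshold, and all other numbers are illustrative.

    import numpy as np

    def qpcr_curve(target0, primer0=5e12, cycles=40):
        """Toy stepwise qPCR: per-cycle efficiency depends on remaining primers."""
        t, p, curve = target0, primer0, []
        for _ in range(cycles):
            eff = p / (p + t)       # fraction of targets that win a primer (toy rule)
            new = eff * t
            t += new
            p = max(p - 2 * new, 0.0)   # two primers consumed per duplicated amplicon
            curve.append(t)
        return np.array(curve)

    for n0 in (1e3, 1e5, 1e7):
        cq = np.argmax(qpcr_curve(n0) > 1e11) + 1   # toy quantification cycle
        print(f"N0={n0:.0e}: Cq ~ {cq}")

    Even this caricature reproduces the two qualitative features the abstract emphasizes: early cycles are near-exponential, and efficiency falls off as a reagent is consumed, so no constant-efficiency fit can capture the whole curve.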

  3. Quantitative phenomenological model of the BOLD contrast mechanism

    NASA Astrophysics Data System (ADS)

    Dickson, John D.; Ash, Tom W. J.; Williams, Guy B.; Sukstanskii, Alexander L.; Ansorge, Richard E.; Yablonskiy, Dmitriy A.

    2011-09-01

    Different theoretical models of the BOLD contrast mechanism are used for many applications including BOLD quantification (qBOLD) and vessel size imaging, both in health and disease. Each model simplifies the system under consideration, making approximations about the structure of the blood vessel network and diffusion of water molecules through inhomogeneities in the magnetic field created by deoxyhemoglobin-containing blood vessels. In this study, Monte-Carlo methods are used to simulate the BOLD MR signal generated by diffusing water molecules in the presence of long, cylindrical blood vessels. Using these simulations we introduce a new, phenomenological model that is far more accurate over a range of blood oxygenation levels and blood vessel radii than existing models. This model could be used to extract physiological parameters of the blood vessel network from experimental data in BOLD-based experiments. We use our model to establish ranges of validity for the existing analytical models of Yablonskiy and Haacke, Kiselev and Posse, Sukstanskii and Yablonskiy (extended to the case of arbitrary time in the spin echo sequence) and Bauer et al. (extended to the case of randomly oriented cylinders). Although these models are shown to be accurate in the limits of diffusion under which they were derived, none of them is accurate for the whole physiological range of blood vessels radii and blood oxygenation levels. We also show the extent of systematic errors that are introduced due to the approximations of these models when used for BOLD signal quantification.
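
    The Monte-Carlo approach described above can be miniaturized to a single perpendicular cylinder: spins random-walk through the classic (a/r)^2*cos(2θ) frequency-offset field outside a vessel, and the net signal is the magnitude of the summed phasors. The geometry, diffusion and frequency constants, and the shortcut of assigning the wall-value field to the vessel interior are all illustrative assumptions of this sketch, not the study's simulation.

    import numpy as np

    rng = np.random.default_rng(2)
    N, steps, dt = 5000, 400, 1e-4          # spins, time steps, step duration (s)
    D, a, dw = 1e-9, 5e-6, 200.0            # diffusion (m^2/s), vessel radius (m), offset at wall (rad/s)

    pos = rng.uniform(-5 * a, 5 * a, (N, 2))   # start positions around the vessel
    phase = np.zeros(N)
    sigma = np.sqrt(2 * D * dt)                # rms step length per axis
    for _ in range(steps):
        pos += rng.normal(0.0, sigma, (N, 2))
        r2 = np.einsum('ij,ij->i', pos, pos)
        r2 = np.maximum(r2, a * a)                       # interior treated as wall value (shortcut)
        cos2t = (pos[:, 0]**2 - pos[:, 1]**2) / r2       # cos(2*theta)
        phase += dw * (a * a / r2) * cos2t * dt          # accumulate local frequency offset

    signal = np.abs(np.mean(np.exp(1j * phase)))
    print(f"normalized signal after {steps * dt * 1e3:.0f} ms: {signal:.4f}")

    Sweeping a and dw in such a simulation is how one maps out where analytical models hold and where they break down, which is the comparison the abstract describes.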

  4. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes

    NASA Astrophysics Data System (ADS)

    Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz

    2017-01-01

    Influence maximization deals with identifying the most influential nodes in a social network under a given influence model. In this paper, a game-theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash equilibrium in a real-world dataset. It is shown that none of the well-known strategies are stable and that at least one player has an incentive to deviate from the proposed strategy. Moreover, any player's unilateral deviation from the Nash equilibrium strategy reduces that player's payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures, affects the determination of the influential nodes in the network.

  5. Naturalness of unknown physics: Theoretical models and experimental signatures

    NASA Astrophysics Data System (ADS)

    Kilic, Can

    In the last few decades collider experiments have not only spectacularly confirmed the predictions of the Standard Model but also have not revealed any direct evidence for new physics beyond the SM, which has led theorists to devise numerous models where the new physics couples weakly to the SM or is simply beyond the reach of past experiments. While phenomenologically viable, many such models appear finely tuned, even contrived. This work illustrates three attempts at explaining fine-tunings we observe in the world around us, such as the gauge hierarchy problem or the cosmological constant problem, emphasizing both the theoretical aspects of model building and possible experimental signatures. First we investigate the "Little Higgs" mechanism and work on a specific model, the "Minimal Moose", to highlight its impact on precision observables in the SM, and illustrate that it does not require implausible fine-tuning. Next we build a supersymmetric model, the "Fat Higgs", with an extended gauge structure which becomes confining. This model, aside from naturally preserving the unification of the SM gauge couplings at high energies, also makes it possible to evade the bounds on the lightest Higgs boson mass, which are quite restrictive in minimal SUSY scenarios. Lastly we examine a possible resolution of the cosmological constant problem through the mechanism of "Ghost Condensation" and consider astrophysical observables from the Lorentz-violating sector in this model. We use current experimental data to constrain the coupling of this sector to the SM.

  6. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors

    PubMed Central

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important than radiation-induced cell mortality in determining the critical dose rate. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental
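
    A minimal sketch of the mechanistic, differential-equation strand of such an analysis, under assumed functional forms rather than the authors' fitted model: logistic growth with dose-dependent reproduction suppression and mortality plus a fixed cell-removal rate.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    K = 1e7        # carrying capacity [cells/mL], assumed
    r0 = 0.5       # maximum growth rate [1/h], assumed
    alpha = 0.01   # reproduction suppression per unit dose rate, assumed
    beta = 0.001   # mortality per unit dose rate, assumed
    c = 0.05       # cell removal rate [1/h], assumed

    def rhs(t, N, dose_rate):
        r = r0 * np.exp(-alpha * dose_rate)   # suppressed reproduction
        mu = beta * dose_rate + c             # radiation mortality + removal
        return r * N * (1 - N / K) - mu * N

    for dose_rate in (0.0, 100.0, 400.0):
        sol = solve_ivp(rhs, (0, 200), [1e4], args=(dose_rate,))
        print(f"dose rate {dose_rate:6.1f}: final N = {sol.y[0, -1]:.3g}")

    # the critical dose rate D* is where net low-density growth turns negative:
    # r0 * exp(-alpha * D*) = beta * D* + c
    ```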

  7. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially once the global BDS constellation is established; the accuracy of the BDS broadcast ephemeris also needs to be improved. We therefore established a box-wing theoretical SRP model with fine structural detail that includes the conical shadow factors of the Earth and Moon. We verified this SRP model using the GPS Block IIF satellites, performing the calculation with data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model has higher accuracy for POD and orbit forecasting of GPS IIF satellites than the Bern empirical model, with a 3D orbit RMS of about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions: the longer the prediction arc, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arcs are 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We applied this approach to the BDS and derived an SRP model for the Beidou satellites, which we then tested and verified with one month of Beidou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which estimates forces only in the along-track and across-track directions plus a y-bias; however, the orbit overlap and SLR observation evaluations show some improvement, and the remaining empirical force is reduced significantly for the present Beidou constellation.

  8. Towards a quantitative understanding of stem cell-niche interaction: experiments, models, and technologies.

    PubMed

    Roeder, Ingo; Loeffler, Markus; Glauche, Ingmar

    2011-04-15

    Here we report on an interdisciplinary workshop focusing on the effects of the local growth environment on the regulation of stem cell development. Under the title "Towards a quantitative understanding of stem cell/niche interaction: Experiments, models, and technologies", 33 experts from eight countries discussed current knowledge, new experimental and theoretical results, as well as innovative measurement technologies. Specifically, the workshop addressed the following questions: What defines a stem cell niche? What are the functional/regulatory characteristics of stem cell-microenvironment interactions? What experimental systems and technologies for quantifying niche function are available? As a consensus result it was recorded that there is no unique niche architecture across tissues, but that there are generic principles of niche organization guaranteeing the proper function of stem cells. This functional aspect, as the major defining criterion, leads to the conclusion that stem cells and their niches need to be considered as an inseparable pair, with implications for their experimental assessment: to be able to study either of the two components, the other has to be accounted for. In this context, a number of classical in vitro assays using co-cultures of stem and stroma cells, but also new, specifically bioengineered culture systems, were discussed with respect to their advantages and disadvantages. Finally, there was general agreement that a comprehensive understanding of niche-mediated stem cell regulation will, due to the complexity of the involved mechanisms, require an interdisciplinary, systems-biological approach. In addition to cell and molecular biology, biochemistry, biophysics and bioengineering, bioinformatics and mathematical modeling will also play a major role in the future of this field. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    PubMed

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/
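
    The toy script below illustrates the invalidation idea in spirit only (it is not ADMIT, which is MATLAB-based, and grid sampling gives no guarantees, unlike the convex relaxations used by the toolbox): a model hypothesis is rejected when no parameter in its admissible box reproduces interval-bounded data. The model form and all numbers are assumptions.

    ```python
    import numpy as np

    def model(x, p1, p2):
        return p1 * x / (p2 + x)        # assumed Michaelis-Menten-type model

    x = np.array([0.5, 1.0, 2.0, 4.0])
    y_lo = np.array([0.30, 0.45, 0.60, 0.70])   # uncertain measurements given
    y_hi = np.array([0.40, 0.60, 0.80, 0.95])   # as interval bounds

    def invalidated(p1_box, p2_box, n=60):
        """Grid-sample the parameter box; True if every sample violates the data."""
        for p1 in np.linspace(*p1_box, n):
            for p2 in np.linspace(*p2_box, n):
                y = model(x, p1, p2)
                if np.all((y >= y_lo) & (y <= y_hi)):
                    return False        # a data-consistent parameter exists
        return True

    print("hypothesis A rejected:", invalidated((0.8, 1.2), (0.5, 1.5)))
    print("hypothesis B rejected:", invalidated((5.0, 9.0), (0.1, 0.3)))
    ```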

  10. Developing a theoretical model and questionnaire survey instrument to measure the success of electronic health records in residential aged care

    PubMed Central

    Yu, Ping; Qian, Siyu

    2018-01-01

    Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement, and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables from the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two further variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables (training, self-efficacy, system quality and information quality) on the net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time. PMID:29315323

  11. Theoretical models for the combustion of alloyable materials

    NASA Astrophysics Data System (ADS)

    Armstrong, Robert

    1992-09-01

    The purpose of this work is to extend a theoretical model of layered (laminar) media for SHS combustion presented in an earlier article [1] to explore possible mechanisms for after-burning in SHS (i.e., gasless) combustion. As before, our particular interest is how the microscopic geometry of the solid reactants is reflected in the combustion wave and in the reaction product. The model is constructed from alternating laminae of two pure reactants that interdiffuse exothermically to form a product. Here, the laminar model is extended to contain layers of differing thicknesses. Using asymptotic theory, it was found that under certain conditions the combustion wave can become "detached": an initial thin flame propagates through the medium, leaving a slower, thicker flame following behind (i.e., afterburning). Thin laminae are consumed in the initial flame and thick laminae in the secondary one. The thin flame has a width determined by the inverse of the activation energy of diffusion, as found previously. The width of the afterburning zone, however, is determined by the absolute time of diffusion for the thicker laminae. Naturally, when the laminae are all the same thickness, there is only one thin flame. The condition for the appearance of afterburning is found to be contingent on the square of the ratio of smallest-to-largest thicknesses being considerably less than unity.

  12. Utilities and the Issue of Fairness in a Decision Theoretic Model for Selection

    ERIC Educational Resources Information Center

    Sawyer, Richard L.; And Others

    1976-01-01

    This article examines some of the values that might be considered in a selection situation within the context of a decision theoretic model also described here. Several alternate expressions of fair selection are suggested in the form of utility statements in which these values can be understood and compared. (Author/DEP)

  13. Modeling of rolling element bearing mechanics. Theoretical manual

    NASA Technical Reports Server (NTRS)

    Merchant, David H.; Greenhill, Lyn M.

    1994-01-01

    This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  14. Quantifying the transport properties of lipid mesophases by theoretical modelling of diffusion experiments

    NASA Astrophysics Data System (ADS)

    Antognini, Luca M.; Assenza, Salvatore; Speziale, Chiara; Mezzenga, Raffaele

    2016-08-01

    Lyotropic Liquid Crystals (LLCs) are a class of lipid-based membranes with strong potential for drug-delivery applications. The characterization and control of their transport properties is a central issue in this regard and has recently prompted a notable volume of research on the topic. A promising experimental approach is provided by the so-called diffusion setup, where the drug molecules diffuse from a feeding chamber filled with water to a receiving one, passing through an LLC. In the present work we provide a theoretical framework for the proper description of this setup and validate it by means of targeted experiments. Due to the inhomogeneity of the system, a rich palette of different diffusion dynamics emerges from the interplay of the different time- and lengthscales present. Our work paves the way to using diffusion experiments to quantitatively characterize the transport properties of LLCs, and provides the basic tools for devising diffusion setups with controlled kinetic properties.
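
    A minimal 1D finite-difference sketch of such a setup, under assumed geometry and coefficients: solute diffusing from a loaded feeding chamber through a slower mesophase slab into a receiving chamber.

    ```python
    import numpy as np

    nx, L = 300, 3e-3                     # grid cells, domain length [m], assumed
    dx = L / nx
    x = (np.arange(nx) + 0.5) * dx
    D = np.where((x > 1e-3) & (x < 2e-3), 1e-11, 1e-9)  # slow slab in the middle
    c = np.where(x < 1e-3, 1.0, 0.0)      # feeding chamber initially loaded

    dt = 0.4 * dx**2 / D.max()            # explicit stability limit
    Dm = 0.5 * (D[1:] + D[:-1])           # interface-averaged diffusivity
    for _ in range(20000):
        # flux-conservative update with zero-flux outer walls
        flux = np.concatenate(([0.0], -Dm * np.diff(c) / dx, [0.0]))
        c -= dt * np.diff(flux) / dx
    print("receiving-chamber mean concentration:", round(c[x > 2e-3].mean(), 5))
    ```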

  15. Theoretical Models of Comprehension Skills Tested through a Comprehension Assessment Battery for Primary School Children

    ERIC Educational Resources Information Center

    Tobia, Valentina; Ciancaleoni, Matteo; Bonifacci, Paola

    2017-01-01

    In this study, two alternative theoretical models were compared, in order to analyze which of them best explains primary school children's text comprehension skills. The first one was based on the distinction between two types of answers requested by the comprehension test: local or global. The second model involved texts' input modality: written…

  16. Patients’ Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test

    PubMed Central

    Dou, Kaili; Yu, Ping; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-01-01

    Background Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. Objective The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients’ acceptance of smartphone health technology for chronic disease management. Methods Multiple theories and factors that may influence patients’ acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients’ acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients’ actual use of a smartphone health app. The partial least square method was used to test the theoretical model. Results The model accounted for .412 of the variance in patients’ intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients’ smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients’ intentions to use the technology. Age and gender had no significant influence on patients’ acceptance of smartphone technology. The study also

  17. [A novel approach to NIR spectral quantitative analysis: semi-supervised least-squares support vector regression machine].

    PubMed

    Li, Lin; Xu, Shuo; An, Xin; Zhang, Lu-Da

    2011-10-01

    In near-infrared spectral quantitative analysis, the precision of the measured samples' chemical values sets the theoretical limit on the precision of quantitative analysis with mathematical models. However, the number of samples whose chemical values can be obtained accurately is small. Many models exclude samples without chemical values and consider only those with chemical values when modeling the contents of sample compositions. To address this problem, a semi-supervised LS-SVR (S2LS-SVR) model is proposed on the basis of LS-SVR, which can utilize samples without chemical values as well as those with chemical values. As with the LS-SVR, training this model is equivalent to solving a linear system. Finally, samples of flue-cured tobacco were taken as the experimental material, and corresponding quantitative analysis models were constructed for the contents of four sample compositions (total sugar, reducing sugar, total nitrogen and nicotine) with PLS regression, LS-SVR and S2LS-SVR. For the S2LS-SVR model, the average relative errors between actual and predicted values for the four compositions are 6.62%, 7.56%, 6.11% and 8.20%, respectively, and the correlation coefficients are 0.9741, 0.9733, 0.9230 and 0.9486, respectively. Experimental results show that the S2LS-SVR model outperforms the other two, which verifies its feasibility and efficiency.
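
    For orientation, the sketch below implements the standard supervised LS-SVR core that the semi-supervised variant extends: training reduces to one linear system in the bias and the dual weights. The RBF kernel width and regularization value are assumptions.

    ```python
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        # training reduces to one linear system:
        # [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]          # bias b, dual weights alpha

    def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
        return rbf_kernel(X_new, X_train, sigma) @ alpha + b

    # toy usage with synthetic two-feature "spectra" and a known target
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
    b, alpha = lssvr_fit(X, y)
    rmse = np.sqrt(np.mean((lssvr_predict(X, b, alpha, X) - y) ** 2))
    print("train RMSE:", round(rmse, 4))
    ```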

  18. Quantitative computational models of molecular self-assembly in systems biology

    PubMed Central

    Thomas, Marcus; Schwartz, Russell

    2017-01-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally. PMID:28535149

  19. Quantitative computational models of molecular self-assembly in systems biology.

    PubMed

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  20. Corequisite Model: An Effective Strategy for Remediation in Freshmen Level Quantitative Reasoning Course

    ERIC Educational Resources Information Center

    Kashyap, Upasana; Mathew, Santhosh

    2017-01-01

    The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…

  1. A model-based analysis of a display for helicopter landing approach. [control theoretical model of human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Wheat, L. W.

    1975-01-01

    A control-theoretic model of the human pilot was used to analyze a baseline electronic cockpit display in a helicopter landing approach task. The head-down display was created on a stroke-written cathode ray tube, and the vehicle was a UH-1H helicopter. The landing approach task consisted of maintaining prescribed groundspeed and glideslope in the presence of random vertical and horizontal turbulence. The pilot model was also used to generate and evaluate display quickening laws designed to improve pilot-vehicle performance. A simple fixed-base simulation provided comparative tracking data.

  2. Nursing theory and concept development: a theoretical model of clinical nurses' intentions to stay in their current positions.

    PubMed

    Cowden, Tracy L; Cummings, Greta G

    2012-07-01

    We describe a theoretical model of staff nurses' intentions to stay in their current positions. The global nursing shortage and high nursing turnover rate demand evidence-based retention strategies. Inconsistent study outcomes indicate a need for testable theoretical models of intent to stay that build on previously published models, are reflective of current empirical research and identify causal relationships between model concepts. Two systematic reviews were conducted of electronic databases of English-language articles published between 1985 and 2011. This complex, testable model expands on previous models and includes nurses' affective and cognitive responses to work and their effects on nurses' intent to stay. The concepts of desire to stay, job satisfaction, joy at work, and moral distress are included in the model to capture the emotional response of nurses to their work environments. The influence of leadership is integrated within the model. A causal understanding of clinical nurses' intent to stay and the effects of leadership on the development of that intention will facilitate the development of effective retention strategies internationally. Testing theoretical models is necessary to confirm previous research outcomes and to identify plausible sequences of the development of behavioral intentions. Increased understanding of the causal influences on nurses' intent to stay should lead to strategies that may result in higher retention rates and numbers of nurses willing to work in the health sector. © 2012 Blackwell Publishing Ltd.

  3. Model of twelve properties of a set of organic solvents with graph-theoretical and/or experimental parameters.

    PubMed

    Pogliani, Lionello

    2010-01-30

    Twelve properties of a highly heterogeneous class of organic solvents have been modeled with a graph-theoretical modified molecular connectivity (MC) method, which makes it possible to encode the core electrons and the hydrogen atoms. The graph-theoretical method uses the concepts of simple, general, and complete graphs, where the last type of graph is used to encode the core electrons. The hydrogen atoms have been encoded with the aid of a graph-theoretical perturbation parameter, which contributes to the definition of the valence delta, delta(v), a key parameter in molecular connectivity studies. The model of the twelve properties, obtained with a stepwise search algorithm, is always satisfactory, and it allows checking the influence of the hydrogen content of the solvent molecules on the choice of the type of descriptor. A similar argument holds for the influence of the halogen atoms on the type of core-electron representation. In some cases the molar mass and, to a lesser extent, special "ad hoc" parameters have been used to improve the model. A very good model of the surface tension could be obtained with the aid of five experimental parameters. A mixed method based on experimental parameters plus molecular connectivity indices, by contrast, consistently improved the model quality of five properties. Noteworthy is the importance of the boiling point temperatures as descriptors in these last two model methodologies. Copyright 2009 Wiley Periodicals, Inc.

  4. Quantitative analysis of factors that affect oil pipeline network accident based on Bayesian networks: A case study in China

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Qin, Ting Xin; Huang, Shuai; Wu, Jian Song; Meng, Xin Yan

    2018-06-01

    Several factors can affect the consequences of an oil pipeline accident, and their effects should be analyzed to improve emergency preparation and emergency response. Although some qualitative models of risk factors' effects exist, quantitative models still need to be researched. In this study, we introduce a Bayesian network (BN) model for analyzing risk factors' effects in an oil pipeline accident case that happened in China. The incident evolution diagram is built to identify the risk factors, and the BN model is built based on the deployment rule for factor nodes in a BN and on expert knowledge combined through Dempster-Shafer evidence theory. The probabilities of incident consequences and of risk factors' effects can then be calculated. The most likely consequences given by this model are consistent with the case. Meanwhile, the quantitative estimates of risk factors' effects may provide a theoretical basis for taking optimal risk treatment measures in oil pipeline management, which can be used in emergency preparation and emergency response.
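
    A hedged toy example of BN reasoning of this kind (not the paper's network): exact inference by enumeration over two binary factor nodes and one consequence node, with invented probabilities.

    ```python
    import itertools

    # P(leak detected), P(emergency response fast), and
    # P(severe consequence | detected, fast); all values are assumptions
    p_detect = {True: 0.7, False: 0.3}
    p_fast = {True: 0.6, False: 0.4}
    p_severe = {(True, True): 0.05, (True, False): 0.30,
                (False, True): 0.25, (False, False): 0.80}

    def prob_severe():
        total = 0.0
        for d, f in itertools.product([True, False], repeat=2):
            total += p_detect[d] * p_fast[f] * p_severe[(d, f)]
        return total

    def prob_detect_given_severe():
        # Bayes rule: P(detected | severe) = P(severe, detected) / P(severe)
        joint = sum(p_detect[True] * p_fast[f] * p_severe[(True, f)]
                    for f in [True, False])
        return joint / prob_severe()

    print(f"P(severe consequence)     = {prob_severe():.3f}")
    print(f"P(leak detected | severe) = {prob_detect_given_severe():.3f}")
    ```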

  5. Conceptual Diversity, Moderators, and Theoretical Issues in Quantitative Studies of Cultural Capital Theory

    ERIC Educational Resources Information Center

    Tan, Cheng Yong

    2017-01-01

    The present study reviewed quantitative empirical studies examining the relationship between cultural capital and student achievement. Results showed that researchers had conceptualized and measured cultural capital in different ways. It is argued that the more holistic understanding of the construct beyond highbrow cultural consumption must be…

  6. Quantitative petri net model of gene regulated metabolic networks in the cell.

    PubMed

    Chen, Ming; Hofestädt, Ralf

    2011-01-01

    A method of exploiting hybrid Petri nets (HPN) for quantitatively modeling and simulating gene-regulated metabolic networks is demonstrated. A global kinetic modeling strategy and a Petri net modeling algorithm are applied to describe bioprocess functioning and to analyze the model. With the model, the interrelations between pathway analysis and the metabolic control mechanism are outlined. Diagrammatic results for the dynamics of metabolites are simulated and observed by implementing an HPN tool, Visual Object Net ++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets for modeling and simulation of metabolic networks is discussed.
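
    As a sketch of the underlying token-flow idea (the paper's hybrid nets add continuous places and kinetic rate functions), the following minimal discrete Petri net fires enabled transitions of a toy enzyme pathway until the net deadlocks.

    ```python
    import numpy as np

    # each transition: (tokens consumed per place, tokens produced per place);
    # the two-reaction pathway is an illustrative assumption
    transitions = {
        "bind":    ({"substrate": 1, "enzyme": 1}, {"complex": 1}),
        "convert": ({"complex": 1},                {"product": 1, "enzyme": 1}),
    }
    marking = {"substrate": 5, "enzyme": 2, "complex": 0, "product": 0}

    def enabled(t):
        pre, _ = transitions[t]
        return all(marking[p] >= n for p, n in pre.items())

    def fire(t):
        pre, post = transitions[t]
        for p, n in pre.items():
            marking[p] -= n
        for p, n in post.items():
            marking[p] = marking.get(p, 0) + n

    rng = np.random.default_rng(0)
    while any(enabled(t) for t in transitions):
        fire(rng.choice([t for t in transitions if enabled(t)]))
    print("final marking:", marking)
    ```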

  7. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  8. Theoretical Model of Electrode Polarization and AC Electroosmotic Fluid Flow in Planar Electrode Arrays.

    PubMed

    Scott, Matthew; Kaler, Karan V. I. S.; Paul, Reginald

    2001-06-15

    Strong frequency-dependent fluid flow has been observed near the surface of microelectrode arrays. Modeling this phenomenon has proven to be difficult, with existing theories unable to account for the qualitative trend observed in the frequency spectra of this flow. Using recent electrode polarization results, a more comprehensive model of the double layer on the electrode surface is used to obtain good theoretical agreement with experimental data. Copyright 2001 Academic Press.

  9. Quantitative validation of an air-coupled ultrasonic probe model by Interferometric laser tomography

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Pandarese, G.; Cavuto, A.

    2012-06-01

    The present paper describes the quantitative validation of a finite element (FE) model of the ultrasound beam generated by an air-coupled non-contact ultrasound transducer. The model boundary conditions are given by vibration velocities measured by laser vibrometry on the probe membrane. The proposed validation method is based on the comparison between the simulated 3D pressure field and the pressure data measured with an interferometric laser tomography technique. The model details and the experimental techniques are described in the paper. The analysis of the results shows the effectiveness of the proposed approach and the possibility of quantitatively assessing and predicting the generated acoustic pressure field, with maximum discrepancies on the order of 20% due to uncertainty effects. This step is important for determining the real applicability of air-coupled probes in complex problems and for simulating the whole inspection procedure, even while the component is still being designed, so as to verify its inspectability virtually.

  10. Theoretical and numerical study of axisymmetric lattice Boltzmann models

    NASA Astrophysics Data System (ADS)

    Huang, Haibo; Lu, Xi-Yun

    2009-07-01

    The forcing term in the lattice Boltzmann equation (LBE) is usually used to mimic the Navier-Stokes equations with a body force. To derive an axisymmetric model, forcing terms are incorporated into the two-dimensional (2D) LBE to mimic the additional axisymmetric contributions in the 2D Navier-Stokes equations in cylindrical coordinates. Many axisymmetric lattice Boltzmann D2Q9 models have been obtained through the Chapman-Enskog expansion to recover the 2D Navier-Stokes equations in cylindrical coordinates [I. Halliday, Phys. Rev. E 64, 011208 (2001); K. N. Premnath and J. Abraham, Phys. Rev. E 71, 056706 (2005); T. S. Lee, H. Huang, and C. Shu, Int. J. Mod. Phys. C 17, 645 (2006); T. Reis and T. N. Phillips, Phys. Rev. E 75, 056703 (2007); J. G. Zhou, Phys. Rev. E 78, 036701 (2008)]. The theoretical differences between them are discussed in detail. Numerical studies were also carried out by simulating two different flows to compare the models' accuracy and τ sensitivity. It is found that all these models obtain accurate results and have second-order spatial accuracy. However, model C [J. G. Zhou, Phys. Rev. E 78, 036701 (2008)] is the most stable one in terms of τ sensitivity. It is also found that if the density of the fluid is defined in the usual way and is not directly involved in the source terms, the lattice Boltzmann model seems more stable.
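
    As a small self-contained check of the machinery these models share (a sketch, not any of the cited axisymmetric models), the snippet below verifies numerically that the standard D2Q9 equilibrium distribution reproduces the density and momentum moments on which the axisymmetric source terms are built.

    ```python
    import numpy as np

    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)          # D2Q9 weights
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])    # lattice velocities

    def f_eq(rho, u):
        """Standard D2Q9 equilibrium distribution (BGK form)."""
        eu = e @ u
        return w * rho * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * (u @ u))

    rho, u = 1.2, np.array([0.05, -0.02])   # arbitrary test state
    f = f_eq(rho, u)
    print("density moment :", f.sum())      # recovers rho exactly
    print("momentum moment:", e.T @ f)      # recovers rho * u exactly
    ```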

  11. Theoretical and computer models of detonation in solid explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarver, C.M.; Urtiew, P.A.

    1997-10-01

    Recent experimental and theoretical advances in understanding energy transfer and chemical kinetics have led to improved models of detonation waves in solid explosives. The Nonequilibrium Zeldovich - von Neumann - Doring (NEZND) model is supported by picosecond laser experiments and molecular dynamics simulations of the multiphonon up-pumping and internal vibrational energy redistribution (IVR) processes by which the unreacted explosive molecules are excited to the transition state(s) preceding reaction behind the leading shock front(s). High temperature, high density transition state theory calculates the induction times measured by laser interferometric techniques. Exothermic chain reactions form product gases in highly excited vibrational states, which have been demonstrated to rapidly equilibrate via supercollisions. Embedded gauge and Fabry-Perot techniques measure the rates of reaction product expansion as thermal and chemical equilibrium is approached. Detonation reaction zone lengths in carbon-rich condensed phase explosives depend on the relatively slow formation of solid graphite or diamond. The Ignition and Growth reactive flow model based on pressure dependent reaction rates and Jones-Wilkins-Lee (JWL) equations of state has reproduced this nanosecond time resolved experimental data and thus has yielded accurate average reaction zone descriptions in one-, two- and three- dimensional hydrodynamic code calculations. The next generation reactive flow model requires improved equations of state and temperature dependent chemical kinetics. Such a model is being developed for the ALE3D hydrodynamic code, in which heat transfer and Arrhenius kinetics are intimately linked to the hydrodynamics.

  12. Quantitative structure-retention relationship models for the prediction of the reversed-phase HPLC gradient retention based on the heuristic method and support vector machine.

    PubMed

    Du, Hongying; Wang, Jie; Yao, Xiaojun; Hu, Zhide

    2009-01-01

    The heuristic method (HM) and support vector machine (SVM) were used to construct quantitative structure-retention relationship models for a series of compounds to predict gradient retention times in reversed-phase high-performance liquid chromatography (HPLC) on three different columns. The aims of this investigation were to predict the retention times of multifarious compounds, to find the main properties of the three columns, and to shed light on the theory of the separation process. In our method, we correlated the retention times of many structurally diverse analytes on the three columns (Symmetry C18, Chromolith, and SG-MIX) with representative molecular descriptors calculated from the molecular structures alone. HM was used to select the most important molecular descriptors and build linear regression models. Furthermore, nonlinear regression models were built using the SVM method; the performance of the SVM models was better than that of the HM models, and the prediction results were in good agreement with the experimental values. This paper could give some insights into the factors that are likely to govern the gradient retention process of the three investigated HPLC columns, which could theoretically guide the practical experiment.
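
    A hedged sketch of the nonlinear regression step using scikit-learn's SVR on synthetic "descriptors"; the feature construction and hyperparameters are assumptions, not the paper's HM-selected descriptors or settings.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 4))            # 4 toy molecular descriptors
    t_R = 5 + 2 * X[:, 0] + np.sin(X[:, 1]) + 0.2 * rng.normal(size=120)

    X_tr, X_te, y_tr, y_te = train_test_split(X, t_R, random_state=0)
    scaler = StandardScaler().fit(X_tr)      # descriptors on a common scale
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    model.fit(scaler.transform(X_tr), y_tr)

    pred = model.predict(scaler.transform(X_te))
    print("test RMSE [min]:", round(float(np.sqrt(np.mean((pred - y_te) ** 2))), 3))
    ```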

  13. Quantitative Testing of Bedrock Incision Models, Clearwater River, WA

    NASA Astrophysics Data System (ADS)

    Tomkin, J. H.; Brandon, M.; Pazzaglia, F.; Barbour, J.; Willet, S.

    2001-12-01

    The topographic evolution of many active orogens is dominated by the process of bedrock channel incision. Several incision models based around the detachment-limited shear-stress model (or stream power model), which employs an area (A) and slope (S) power law (E = K S^n A^m), have been proposed to explain this process, and they require quantitative assessment. We evaluate the proposed incision models by comparing their predictions with observations obtained from a river in a tectonically active mountain range: the Clearwater River in northwestern Washington State. Previous work on river terraces along the Clearwater has provided long-term incision rates for the river, and in conjunction with previous fission-track studies it has also been determined that there is a long-term balance between river incision and rock uplift. These steady-state incision rate data allow us, through the use of inversion methods and statistical tests, to determine the applicability of the different incision models for the Clearwater. None of the models successfully explains the observations. This conclusion particularly applies to the commonly used detachment-limited shear-stress model (or stream power model), which has a physically implausible best-fit solution and systematic residuals for all the predicted combinations of m and n.
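
    Since ln E = ln K + n ln S + m ln A is linear in the unknowns, the basic inversion can be sketched as an ordinary least-squares fit; the data below are synthetic, not the Clearwater terrace data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = 10 ** rng.uniform(6, 9, 50)       # drainage area [m^2], synthetic
    S = 10 ** rng.uniform(-3, -1, 50)     # channel slope, synthetic
    K_true, m_true, n_true = 1e-6, 0.5, 1.0
    E = K_true * S**n_true * A**m_true * np.exp(0.1 * rng.normal(size=50))

    # design matrix for ln E = ln K + n ln S + m ln A
    G = np.column_stack([np.ones(50), np.log(S), np.log(A)])
    coef, *_ = np.linalg.lstsq(G, np.log(E), rcond=None)
    print(f"K = {np.exp(coef[0]):.2e}, n = {coef[1]:.2f}, m = {coef[2]:.2f}")
    ```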

  14. A theoretical model describing the one-dimensional growth of single crystals on free sustained substrates

    NASA Astrophysics Data System (ADS)

    Ye, Ziran; Wang, Ke; Lu, Chenxi; Jin, Ying; Sui, Chenghua; Yan, Bo; Gao, Fan; Cai, Pinggen; Lv, Bin; Li, Yun; Chen, Naibo; Sun, Guofang; Xu, Fengyun; Ye, Gaoxiang

    2018-03-01

    We develop a theoretical model that interprets the growth mechanism of zinc (Zn) crystal nanorods on a liquid substrate by thermal evaporation. During deposition, Zn atoms diffuse randomly on an isotropic and quasi-free sustained substrate, the nucleation of the atoms results in the primary nanorod (or seed crystal) growth. Subsequently, a characteristic one-dimensional atomic aggregation is proposed, which leads to the accelerating growth of the crystal nanorod along its preferential growth direction until the growth terminates. The theoretical results are in good agreement with the experimental findings.

  15. Theoretical calculations of physico-chemical and spectroscopic properties of bioinorganic systems: current limits and perspectives.

    PubMed

    Rokob, Tibor András; Srnec, Martin; Rulíšek, Lubomír

    2012-05-21

    In the last decade, we have witnessed substantial progress in the development of quantum chemical methodologies. Simultaneously, robust solvation models and various combined quantum and molecular mechanical (QM/MM) approaches have become an integral part of quantum chemical programs. Along with the steady growth of computer power and, more importantly, the dramatic increase of the computer performance to price ratio, this has led to a situation where computational chemistry, when exercised with the proper amount of diligence and expertise, reproduces, predicts, and complements the experimental data. In this perspective, we review some of the latest achievements in the field of theoretical (quantum) bioinorganic chemistry, concentrating mostly on accurate calculations of the spectroscopic and physico-chemical properties of open-shell bioinorganic systems by wave-function (ab initio) and DFT methods. In our opinion, the one-to-one mapping between the calculated properties and individual molecular structures represents a major advantage of quantum chemical modelling since this type of information is very difficult to obtain experimentally. Once (and only once) the physico-chemical, thermodynamic and spectroscopic properties of complex bioinorganic systems are quantitatively reproduced by theoretical calculations may we consider the outcome of theoretical modelling, such as reaction profiles and the various decompositions of the calculated parameters into individual spatial or physical contributions, to be reliable. In an ideal situation, agreement between theory and experiment may imply that the practical problem at hand, such as the reaction mechanism of the studied metalloprotein, can be considered as essentially solved.

  16. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that resulting waves obey the Schroedinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  17. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  18. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  19. Some theoretical models and constructs generic to substance abuse prevention programs for adolescents: possible relevance and limitations for problem gambling.

    PubMed

    Evans, Richard I

    2003-01-01

    For the past several years the author and his colleagues have explored how social psychological constructs and theoretical models can be applied to the prevention of health-threatening behaviors in adolescents. In examining the need for the development of gambling prevention programs for adolescents, it might be of value to consider the application of such constructs and theoretical models as a foundation for the development of prevention programs addressing this emerging problem behavior among adolescents. In order to provide perspective to the reader, the present paper reviews the history of various psychosocial models and constructs generic to programs directed at prevention of substance abuse in adolescents. A brief history of some of these models, possibly the most applicable to gambling prevention programs, is presented. Social inoculation, reasoned action, planned behavior, and problem behavior theory are among those discussed. Some deficits of these models are also articulated. How such models may be relevant to developing programs for prevention of problem gambling in adolescents is also discussed. However, the inherent differences between gambling and more directly health-threatening behaviors such as substance abuse must, of course, be seriously considered in utilizing such models. Most current gambling prevention programs have seldom been guided by theoretical models. Developers of gambling prevention programs should consider theoretical foundations, particularly since such foundations not only provide a guide for programs, but may become critical tools in evaluating their effectiveness.

  20. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    PubMed

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
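
    A stripped-down sketch of the core quantitative-genetics loop (not the authors' full individual-based framework): resistance as a heritable trait, survival following an assumed logistic dose response, and the offspring mean shifting according to the breeder's equation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    h2 = 0.5                               # narrow-sense heritability, assumed
    pop = rng.normal(0.0, 1.0, 10000)      # initial resistance phenotypes

    def survival(z):
        """Probability of surviving herbicide, rising with the trait value."""
        return 1 / (1 + np.exp(-(z - 2.0)))   # logistic dose response, assumed

    for gen in range(15):
        survivors = pop[rng.random(len(pop)) < survival(pop)]
        if survivors.size == 0:
            print(f"generation {gen}: population extinct")
            break
        # breeder's equation: offspring mean shifts by h^2 times the
        # selection differential imposed by herbicide survival
        mean_off = pop.mean() + h2 * (survivors.mean() - pop.mean())
        pop = rng.normal(mean_off, 1.0, 10000)
        print(f"generation {gen}: mean resistance = {pop.mean():+.2f}, "
              f"mean survival = {survival(pop).mean():.3f}")
    ```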

  1. Consideration of the Aluminum Distribution in Zeolites in Theoretical and Experimental Catalysis Research

    DOE PAGES

    Knott, Brandon C.; Nimlos, Claire T.; Robichaud, David J.; ...

    2017-12-11

    Research efforts in zeolite catalysis have become increasingly cognizant of the diversity in structure and function resulting from the distribution of framework aluminum atoms, through emerging reports of catalytic phenomena that fall outside those recognizable as the shape-selective ones emblematic of its earlier history. Molecular-level descriptions of how active-site distributions affect catalysis are an aspirational goal articulated frequently in experimental and theoretical research, yet they are limited by imprecise knowledge of the structure and behavior of the zeolite materials under interrogation. In experimental research, higher precision can result from more reliable control of structure during synthesis and from more robust and quantitative structural and kinetic characterization probes. In theoretical research, construction of models with specific aluminum locations and distributions seldom capture the heterogeneity inherent to the materials studied by experiment. In this Perspective, we discuss research findings that appropriately frame the challenges in developing more predictive synthesis-structure-function relations for zeolites, highlighting studies on ZSM-5 zeolites that are among the most structurally complex molecular sieve frameworks and the most widely studied because of their versatility in commercial applications. We discuss research directions to address these challenges and forge stronger connections between zeolite structure, composition, and active sites to catalytic function. Such connections promise to aid in bridging the findings of theoretical and experimental catalysis research, and transforming zeolite active site design from an empirical endeavor into a more predictable science founded on validated models.

  2. Consideration of the Aluminum Distribution in Zeolites in Theoretical and Experimental Catalysis Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knott, Brandon C.; Nimlos, Claire T.; Robichaud, David J.

    Research efforts in zeolite catalysis have become increasingly cognizant of the diversity in structure and function resulting from the distribution of framework aluminum atoms, through emerging reports of catalytic phenomena that fall outside those recognizable as the shape-selective ones emblematic of its earlier history. Molecular-level descriptions of how active-site distributions affect catalysis are an aspirational goal articulated frequently in experimental and theoretical research, yet they are limited by imprecise knowledge of the structure and behavior of the zeolite materials under interrogation. In experimental research, higher precision can result from more reliable control of structure during synthesis and from more robust and quantitative structural and kinetic characterization probes. In theoretical research, construction of models with specific aluminum locations and distributions seldom capture the heterogeneity inherent to the materials studied by experiment. In this Perspective, we discuss research findings that appropriately frame the challenges in developing more predictive synthesis-structure-function relations for zeolites, highlighting studies on ZSM-5 zeolites that are among the most structurally complex molecular sieve frameworks and the most widely studied because of their versatility in commercial applications. We discuss research directions to address these challenges and forge stronger connections between zeolite structure, composition, and active sites to catalytic function. Such connections promise to aid in bridging the findings of theoretical and experimental catalysis research, and transforming zeolite active site design from an empirical endeavor into a more predictable science founded on validated models.

  3. Hospital nurses' wellbeing at work: a theoretical model.

    PubMed

    Utriainen, Kati; Ala-Mursula, Leena; Kyngäs, Helvi

    2015-09-01

    To develop a theoretical model of hospital nurses' wellbeing at work. The concept of wellbeing at work has so far been presented without an exact definition and without consideration of its different contents. A model was developed in a deductive manner, and empirical data were collected from nurses (n = 233) working in a university hospital. Explorative factor analysis was used. The main concepts were: patients' experience of high-quality care; assistance and support among nurses; nurses' togetherness and cooperation; fluent practical organisation of work; challenging and meaningful work; freedom to express diverse feelings in the work community; well-conducted everyday nursing; status related to the work itself; fair and supportive leadership; opportunities for professional development; fluent communication with other professionals; and being together with other nurses in an informal way. Themes included: collegial relationships; enhancing high-quality patient care; supportive and fair leadership; challenging, meaningful and well organised work; and opportunities for professional development. Object-dependent wellbeing was supported. Managers should focus on strengthening the positive aspects of wellbeing at work, providing fluently organised work practices, fair and supportive leadership and togetherness, while allowing nurses to implement their own ideas and promoting the experience of meaningfulness. © 2014 John Wiley & Sons Ltd.

  4. Hash Functions and Information Theoretic Security

    NASA Astrophysics Data System (ADS)

    Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.
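
    As a toy illustration of the generic bound such arguments reason about, the script below brute-forces a preimage of a k-bit truncation of SHA-256, which takes about 2^k trials on average; the value of k and the target message are arbitrary choices.

    ```python
    import hashlib
    import itertools

    k = 16                                  # toy digest size in bits, assumed
    target = int.from_bytes(
        hashlib.sha256(b"secret message").digest(), "big") >> (256 - k)

    for count in itertools.count(1):
        candidate = str(count).encode()
        h = int.from_bytes(hashlib.sha256(candidate).digest(), "big") >> (256 - k)
        if h == target:
            print(f"preimage of the {k}-bit digest found after {count} trials")
            break
    # 2^k is the generic brute-force cost; information-theoretic analyses of
    # the kind discussed above reason about idealized bounds of this form
    ```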

  5. Models in Educational Administration: Revisiting Willower's "Theoretically Oriented" Critique

    ERIC Educational Resources Information Center

    Newton, Paul; Burgess, David; Burns, David P.

    2010-01-01

    Three decades ago, Willower (1975) argued that much of what we take to be theory in educational administration is in fact only theoretically oriented. If we accept Willower's assessment of the field as true, what implications does this statement hold for the academic study and practical application of the theoretically oriented aspects of our…

  6. Theoretical model of hardness anisotropy in brittle materials

    NASA Astrophysics Data System (ADS)

    Gao, Faming

    2012-07-01

    Anisotropy is prominent in the hardness testing of single crystals. However, this anisotropic nature has not been demonstrated quantitatively in previous hardness models. In this work, it is found that the electron transition energy per unit volume in the glide region and the orientation of the glide region play critical roles in determining the hardness value and hardness anisotropy of a single-crystal material. We express the mathematical definition of hardness anisotropy through simple algebraic relations. The calculated Knoop hardnesses of the single crystals are in good agreement with observations. This theory, extended to polycrystalline materials by including the Hall-Petch effect and the quantum size effect, predicts that polycrystalline diamond with low-angle grain boundaries can be harder than single-crystal bulk diamond. Combining first-principles techniques with the hardness anisotropy formula, the hardnesses of monoclinic M-carbon, orthorhombic W-carbon, Z-carbon, and T-carbon are predicted.

  7. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    NASA Technical Reports Server (NTRS)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  8. Developing a theoretical framework for complex community-based interventions.

    PubMed

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions, using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating the mechanisms by which the independent variables lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  9. Experimental Control of Simple Pendulum Model

    ERIC Educational Resources Information Center

    Medina, C.

    2004-01-01

    This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze model assumption incidence on its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…
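
    A minimal sketch (ours, not the paper's) of the kind of quantitative check described above: comparing the exact pendulum period, given by the complete elliptic integral, against the ideal small-amplitude value T0 = 2*pi*sqrt(L/g) quantifies when the small-amplitude assumption holds:

        # Exact pendulum period vs. the ideal small-amplitude period T0.
        import numpy as np
        from scipy.special import ellipk

        g, L = 9.81, 1.0                      # gravity (m/s^2), pendulum length (m)
        T0 = 2 * np.pi * np.sqrt(L / g)       # ideal small-amplitude period

        for theta0_deg in (5, 10, 20, 45):
            theta0 = np.radians(theta0_deg)
            m = np.sin(theta0 / 2) ** 2       # elliptic parameter m = k^2
            T = (2 / np.pi) * T0 * ellipk(m)  # exact period at amplitude theta0
            print(f"{theta0_deg:3d} deg: T/T0 = {T / T0:.5f}")

    Even at 20 degrees the relative error of the ideal model stays below one percent, which is why the small-amplitude assumption usually survives a classroom test.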

  10. Quantitative Study on Corrosion of Steel Strands Based on Self-Magnetic Flux Leakage.

    PubMed

    Xia, Runchuan; Zhou, Jianting; Zhang, Hong; Liao, Leng; Zhao, Ruiqiang; Zhang, Zeyu

    2018-05-02

    This paper proposes a new computing method to quantitatively and non-destructively determine the corrosion of steel strands by analyzing the self-magnetic flux leakage (SMFL) signals from them. The magnetic dipole model and three growth models (logistic, exponential, and linear) were proposed to theoretically analyze the characteristic value of the SMFL. Then, an experimental study of corrosion detection with a magnetic sensor was carried out. The setup of the magnetic scanning device and the signal collection method are also introduced. The results show that the logistic growth model is the optimal model for calculating the magnetic field, with good fitting performance. Combined with the experimental data analysis, the amplitudes of the calculated values (the B_xL(x, z) curves) agree with the measured values in general. This method offers significant application prospects for evaluating the corrosion and the residual bearing capacity of steel strands.
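
    A hedged sketch of the model-selection step described above: fitting logistic, exponential, and linear growth laws to a characteristic SMFL value as corrosion progresses and comparing residuals. The data and parameter values below are made-up placeholders, not the authors' measurements:

        # Fit three candidate growth laws and compare residual sums of squares.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            return K / (1.0 + np.exp(-r * (t - t0)))

        def exponential(t, a, b):
            return a * np.exp(b * t)

        def linear(t, a, b):
            return a * t + b

        rng = np.random.default_rng(5)
        t = np.linspace(0, 10, 20)               # corrosion time (arbitrary units)
        y = logistic(t, 1.0, 1.2, 5.0) + 0.02 * rng.normal(size=t.size)  # placeholder data

        for name, f, p0 in [("logistic", logistic, (1.0, 1.0, 5.0)),
                            ("exponential", exponential, (0.1, 0.3)),
                            ("linear", linear, (0.1, 0.0))]:
            popt, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
            rss = float(np.sum((y - f(t, *popt)) ** 2))
            print(f"{name:11s} residual sum of squares: {rss:.4f}")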

  11. Computational modeling approaches to quantitative structure-binding kinetics relationships in drug discovery.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2018-03-21

    Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
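
    The polymer field theory itself is beyond a short example, but the following toy (our construction, not the authors' code) shows the exponential time differencing (ETD) idea for a one-variable Langevin equation dphi/dt = -(a*phi + g*phi^3) + eta(t): the linear term is propagated exactly by exp(-a*dt), which is what gives ETD its favorable stability. With a real action the variable stays real; the stepping scheme is the point:

        # Toy ETD integrator for a single Langevin degree of freedom.
        import numpy as np

        rng = np.random.default_rng(0)
        a, g = 1.0, 0.5                            # linear and quartic coefficients
        dt, n_steps = 0.01, 200_000

        decay = np.exp(-a * dt)                    # exact propagator of the linear term
        nl_weight = (1.0 - decay) / a              # ETD weight applied to the nonlinearity
        noise_amp = np.sqrt((1.0 - decay**2) / a)  # exact variance for <eta(t)eta(t')> = 2 delta

        phi, acc, count = 0.0, 0.0, 0
        for step in range(n_steps):
            phi = decay * phi - nl_weight * g * phi**3 + noise_amp * rng.standard_normal()
            if step > n_steps // 10:               # discard burn-in
                acc += phi * phi
                count += 1
        print("<phi^2> estimate:", acc / count)    # ~1/a for g = 0, smaller for g > 0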

  13. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, also known as maternal-fetal genotype (MFG) incompatibilities, are neglected in studies of complex diseases and quantitative traits. They are implicated in diseases ranging from birth to adult onset, but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model in which maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show that they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which considers only the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.
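
    A hedged sketch of the QMFG idea using the statsmodels package: maternal and offspring genotypes enter as fixed effects, and a family-level random intercept absorbs residual correlation between relatives. The data are synthetic and the column names hypothetical; the authors' actual implementation handles more general kinship-structured covariance:

        # Linear mixed model: fixed genotype effects, random family intercept.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_fam, per_fam = 200, 3
        fam = np.repeat(np.arange(n_fam), per_fam)
        g_mat = np.repeat(rng.binomial(2, 0.3, n_fam), per_fam)  # maternal genotype (0/1/2)
        g_off = rng.binomial(2, 0.3, n_fam * per_fam)            # offspring genotype
        u = np.repeat(rng.normal(0, 0.5, n_fam), per_fam)        # shared family effect
        trait = 0.4 * g_off + 0.3 * g_mat + u + rng.normal(0, 1, n_fam * per_fam)

        df = pd.DataFrame({"family": fam, "g_mat": g_mat, "g_off": g_off, "trait": trait})
        result = smf.mixedlm("trait ~ g_off + g_mat", df, groups=df["family"]).fit()
        print(result.summary())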

  14. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.

    2009-12-01

    The diurnal cycle of fire activity is crucial for accurate simulation of the atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem in constructing emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate for known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present the results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.

  15. Patients' Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test.

    PubMed

    Dou, Kaili; Yu, Ping; Deng, Ning; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-12-06

    Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients' acceptance of smartphone health technology for chronic disease management. Multiple theories and factors that may influence patients' acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients' acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records of 157 hypertensive patients' actual use of a smartphone health app. The partial least squares method was used to test the theoretical model. The model accounted for 0.412 of the variance in patients' intention to adopt the smartphone health technology. Intention to use accounted for 0.111 of the variance in actual use and had a significant but weak relationship with the latter. Perceived ease of use was affected by patients' smartphone usage experience, relationship with doctor, and self-efficacy. Although it had no significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients' intentions to use the technology. Age and gender had no significant influence on patients' acceptance of smartphone technology. The study also confirmed the positive relationship between intention to use

  16. Three models intercomparison for Quantitative Precipitation Forecast over Calabria

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.

    2004-11-01

    In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica, the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Model 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. The large-scale forcing is the same for all models considered, so differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models; the largest differences are for the bias score (BIA) compared with the other considered scores. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
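
    For reference (not the project's code), the categorical scores typically used in such QPF verifications, including the bias score (BIA) mentioned above, are simple contingency-table ratios:

        # Standard 2x2 contingency-table scores for precipitation verification.
        def qpf_scores(hits, false_alarms, misses, correct_negatives):
            n = hits + false_alarms + misses + correct_negatives
            bia = (hits + false_alarms) / (hits + misses)      # frequency bias (BIA)
            hits_random = (hits + false_alarms) * (hits + misses) / n
            ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
            return {"BIA": bia, "ETS": ets}                    # ETS: equitable threat score

        print(qpf_scores(hits=120, false_alarms=60, misses=40, correct_negatives=780))

    A BIA above 1 indicates the model forecasts rain events more often than they are observed, which is why BIA separates the models here even when threat-type scores look similar.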

  17. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    PubMed

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, E_max), the quantitative DCE-MRI parameters (volume transfer constant, K_trans; interstitial volume, V_e; and efflux rate constant, K_ep), the micro-CT parameter (BMD), and the histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and the other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between the DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the E_max values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The K_trans values decreased significantly compared with those of the control group from week 3 (p<0.05). However, the V_e values decreased significantly only at week 9 (p=0.032), and no difference in K_ep was found between the two groups. The BMD values of the OVX group decreased significantly compared with those of the control group from week 3 (p<0.05). Transmission electron microscopy showed tighter gaps between vascular endothelial cells with swollen mitochondria
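
    The quantitative parameters above correspond to the standard Tofts model, in which K_ep = K_trans/V_e and the tissue concentration is the plasma input convolved with an exponential kernel. The following sketch (not the authors' pipeline; the arterial input function is a toy) shows that forward model:

        # Tofts model: C_t(t) = K_trans * integral of C_p(s) * exp(-K_ep*(t-s)) ds.
        import numpy as np

        def tofts_concentration(t, cp, k_trans, v_e):
            """Tissue concentration from plasma input cp(t), discrete convolution."""
            k_ep = k_trans / v_e
            dt = t[1] - t[0]
            kernel = np.exp(-k_ep * t)
            return k_trans * np.convolve(cp, kernel)[: t.size] * dt

        t = np.linspace(0, 5, 300)                       # time (minutes)
        cp = 3.0 * t * np.exp(-1.5 * t)                  # toy arterial input function
        ct = tofts_concentration(t, cp, k_trans=0.12, v_e=0.3)  # K_trans in 1/min
        print("peak tissue concentration:", ct.max())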

  18. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    USDA-ARS?s Scientific Manuscript database

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  19. Theoretical study on removal rate and surface roughness in grinding a RB-SiC mirror with a fixed abrasive.

    PubMed

    Wang, Xu; Zhang, Xuejun

    2009-02-10

    This paper is based on a microinteraction principle for fabricating an RB-SiC material with a fixed abrasive. The influence of the depth formed on an RB-SiC workpiece by a diamond abrasive on the material removal rate and the surface roughness of an optical component is quantitatively discussed. A mathematical model of the material removal rate and simulation results for the surface roughness are presented. Despite some small and predictable differences between the experimental results and the theoretical expectations, the actual removal rate matches the theoretical prediction very well. The easy predictability of the fixed-abrasive technology is of great significance in the optical fabrication industry, so this brand-new fixed-abrasive technology has wide application possibilities.

  20. Quantitative interpretations of Visible-NIR reflectance spectra of blood.

    PubMed

    Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H

    2008-10-27

    This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The new model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of a thorough solution of the scattering and absorption problem in order to accurately resolve the optically relevant parameters of blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, the wavelength-dependent refractive indices of the basic chemical constituents of the blood culture components are needed. Second, multi-wavelength measurements are required, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom (i.e., the number of optically relevant parameters) of the blood culture system. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples characterized by an extensive range of the relevant parameters.

  1. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    PubMed

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  2. A general nonlinear magnetomechanical model for ferromagnetic materials under a constant weak magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Pengpeng; Zheng, Xiaojing, E-mail: xjzheng@xidian.edu.cn; Jin, Ke

    2016-04-14

    Weak magnetic nondestructive testing (e.g., the metal magnetic memory method) concerns the magnetization variation of ferromagnetic materials due to applied loads and the weak magnetic field surrounding them. One key issue for these nondestructive technologies is the magnetomechanical effect, which enables quantitative evaluation of the magnetization state from the stress-strain condition. A representative phenomenological model was proposed by Jiles in 1995 to explain the magnetomechanical effect. However, Jiles' model has some deficiencies in quantification; for instance, there is a visible difference between theoretical predictions and experimental measurements on the stress-magnetization curve, especially in the compression case. Based on thermodynamic relations and the approach law of irreversible magnetization, a nonlinear coupled model is proposed to improve the quantitative evaluation of the magnetomechanical effect. Excellent agreement has been achieved between the predictions of the present model and previous experimental results. In comparison with Jiles' model, the prediction accuracy is improved greatly by the present model, particularly in the compression case. A detailed study has also been performed to reveal the effects of initial magnetization status, cyclic loading, and the demagnetization factor on the magnetomechanical effect. Our theoretical model reveals that the stable weak magnetic signals observed in nondestructive testing after multiple load cycles are attributable to the first few cycles eliminating most of the irreversible magnetization. Remarkably, the demagnetization field can weaken the magnetomechanical effect and therefore significantly reduce the testing capability. This theoretical model can be adopted to quantitatively analyze magnetic memory signals and can then be applied in weak magnetic nondestructive testing.

  3. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  4. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has mostly been restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model is proposed in this study to establish a quantitative relationship between the stress condition and the ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on an ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to depend on the applied stress and strain rate, and the relationship obtained from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and the stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation and could provide a useful reference for quantitative stress measurement using ML sensors in general.

  5. GHRS observations and theoretical modeling of early type stars in R136a

    NASA Astrophysics Data System (ADS)

    de Koter, A.; Heap, S.; Hubeny, I.; Lanz, T.; Hutchings, J.; Lamers, H. J. G. L. M.; Maran, S.; Schmutz, W.

    1994-05-01

    We present the first spectroscopic observations of individual stars in R136a, the most dense part of the starburst cluster 30 Doradus in the LMC. Spectra of two stars are scheduled to be obtained with the GHRS on board the HST: R136a5, the brightest of the complex and R136a2, a Wolf-Rayet star of type WN. The 30 Doradus cluster is the only starburst region in which individual stars can be studied. Therefore, quantitative knowledge of the basic stellar parameters will yield valuable insight into the formation of massive stars in starbursts and into their subsequent evolution. Detailed modeling of the structure of the atmosphere and wind of these stars will also lead to a better understanding of the mechanism(s) that govern their dynamics. We present the first results of our detailed quantitative spectral analysis using state-of-the-art non-LTE model atmospheres for stars with extended and expanding atmospheres. The models are computed using the Improved-Sobolev Approximation wind code (ISA-WIND) of de Koter, Schmutz & Lamers (1993, A&A 277, 561), which has been extended to include C, N and Si. Our model computations are not based on the core-halo approximation, but use a unified treatment of the photosphere and wind. This approach is essential for Wolf-Rayet stars. Our synthetic spectra, dominated by the P Cygni profiles of the UV resonance lines, also account for the numerous weak metal lines of photospheric origin.

  6. Experimental and theoretical investigation of the first-order hyperpolarizability of a class of triarylamine derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Daniel L., E-mail: dlsilva.physics@gmail.com, E-mail: deboni@ifsc.usp.br; Instituto de Física, Universidade de São Paulo, CP 66318, 05314-970 São Paulo, SP; Fonseca, Ruben D.

    2015-02-14

    This paper reports on the static and dynamic first-order hyperpolarizabilities of a class of push-pull octupolar triarylamine derivatives dissolved in toluene. We have combined the hyper-Rayleigh scattering experiment and the coupled perturbed Hartree-Fock method implemented at the Density Functional Theory (DFT) level to determine the static and dynamic (at 1064 nm) first-order hyperpolarizability (β_HRS) of nine triarylamine derivatives with distinct electron-withdrawing groups. In four of these derivatives an azoaromatic unit is inserted, and a pronounced increase of the first-order hyperpolarizability is reported. Based on the theoretical results, the dipolar/octupolar character of the derivatives is determined. By using a polarizable continuum model in combination with the DFT calculations, it was found that, although the derivatives are solvated in an aprotic solvent of low dielectric constant, solvent-induced polarization and the frequency dispersion effect cause the environment to substantially affect the first-order hyperpolarizability of all derivatives investigated. This statement is supported by the fact that including solvent effects is essential for better agreement between the theoretical results and the experimental data concerning the dynamic first-order hyperpolarizability of the derivatives. The first-order hyperpolarizability of the derivatives was also modeled using the two- and three-level models, in which the relationship between static and dynamic first hyperpolarizabilities is given by a frequency dispersion model. Using this approach, it was verified that the dynamic first hyperpolarizability of the derivatives is satisfactorily reproduced by the two-level model and that, for the derivatives with an azoaromatic unit, the use of a damped few-level model that also accounts for the molecular size of such derivatives is essential for good quantitative agreement between the theoretical results and the experimental data.

  7. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background: This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model: New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion: The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. The distribution of the number of genera is also considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
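
    A Monte Carlo rendering of this null model is straightforward. The sketch below (our construction, with assumed rates rather than the paper's notation) lets each extant species independently speciate, found a new genus by radical speciation, or go extinct, and then tallies the sizes of extant genera:

        # Gillespie-style simulation of the speciation/extinction null model.
        import random
        from collections import Counter

        def simulate(t_max=8.0, lam=1.0, nu=0.1, mu=0.3, seed=0):
            rng = random.Random(seed)
            genus_of = {0: 0}                 # extant species id -> genus id
            next_sp, next_gen, t = 1, 1, 0.0
            while genus_of and t < t_max:
                total = len(genus_of) * (lam + nu + mu)
                t += rng.expovariate(total)   # time to the next event
                sp = rng.choice(list(genus_of))
                u = rng.random() * (lam + nu + mu)
                if u < lam:                                   # ordinary speciation
                    genus_of[next_sp] = genus_of[sp]; next_sp += 1
                elif u < lam + nu:                            # radical speciation: new genus
                    genus_of[next_sp] = next_gen; next_sp += 1; next_gen += 1
                else:                                         # background extinction
                    del genus_of[sp]
            sizes = Counter(genus_of.values())                # genus -> number of species
            return Counter(sizes.values())                    # size -> number of genera

        print("genus-size distribution (size: count):", dict(simulate()))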

  8. Theoretical Study on Stress Sensitivity of Fractal Porous Media with Irreducible Water

    NASA Astrophysics Data System (ADS)

    Lei, Gang; Dong, Zhenzhen; Li, Weirong; Wen, Qingzhi; Wang, Cai

    The coupled flow-deformation behavior in porous media has drawn tremendous attention in various scientific and engineering fields. However, although the coupled flow-deformation mechanism has been intensively investigated over the last decades, the essential controls on stress sensitivity have not been determined. It is of practical significance to use analytic methods to study the stress sensitivity of porous media. Unfortunately, because of the disordered and extremely complicated microstructures of porous media, theoretical models for stress sensitivity are scarce. The goal of this work is to establish a novel and reasonable quantitative model to determine the essential controls on stress sensitivity. The predictions of the theoretical model, derived from Hertzian contact theory and fractal geometry, agree well with the available experimental data. Compared with previous models, our model takes more factors into account, including the influence of the water saturation and the microstructural parameters of the pore space. The proposed model can reveal more of the mechanisms that affect the coupled flow-deformation behavior in fractal porous media. The results show that the irreducible water saturation increases with increasing effective stress and decreases with increasing rock elastic modulus (or increasing power-law index) at a given effective stress. The effect of stress variation on porosity is smaller than that on permeability. Under a given effective stress, the normalized permeability (or the normalized porosity) becomes smaller as the rock elastic modulus (or the power-law index) decreases. A lower capillary pressure corresponds to an increased rock elastic modulus (or an increased power-law index) at a given water saturation.

  9. Theoretical models for stellar X-ray polarization in compact objects

    NASA Technical Reports Server (NTRS)

    Meszaros, P.

    1991-01-01

    Degenerate stellar objects are expected to be strong sources of polarized X-ray emission. This is particularly true for strongly magnetized neutron stars, e.g. accretion- or rotation-powered pulsars, and gamma-ray bursters. In these, linear polarization degrees well in excess of 30 percent are expected. Stellar sources with weaker magnetic fields, such as old neutron stars in low-mass binary systems, white dwarfs and black holes, are expected to have polarization degrees in the range 1-3 percent. Great interest attaches to the detection of polarization in these objects, since this would provide invaluable information concerning the geometry, radiation mechanism and magnetic field strength, necessary for testing and proving models of the structure and evolution of stars in their late stages. In this paper we review theoretical models of the production of polarized radiation in compact stellar X-ray sources, and discuss the possibility of detecting these properties using currently planned detectors to be flown in space.

  10. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset of CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that with all three statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
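
    As a hedged sketch of the third of those analyses (Cox proportional hazards on progression-free survival), the following uses the lifelines package on synthetic data with a single made-up adiposity feature; the study's actual feature set and modeling details are not reproduced here:

        # Cox proportional hazards on synthetic survival data (lifelines).
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(2)
        n = 59                                           # cohort size as in the study
        adiposity = rng.normal(0.0, 1.0, n)              # hypothetical normalized CT feature
        pfs_months = rng.exponential(12 * np.exp(-0.4 * adiposity))  # toy survival times
        event = (rng.random(n) < 0.8).astype(int)        # 1 = progression observed

        df = pd.DataFrame({"adiposity": adiposity,
                           "pfs_months": pfs_months,
                           "event": event})
        cph = CoxPHFitter()
        cph.fit(df, duration_col="pfs_months", event_col="event")
        cph.print_summary()                              # hazard ratio for the feature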

  11. Extension of nanoconfined DNA: Quantitative comparison between experiment and theory

    NASA Astrophysics Data System (ADS)

    Iarko, V.; Werner, E.; Nyberg, L. K.; Müller, V.; Fritzsche, J.; Ambjörnsson, T.; Beech, J. P.; Tegenfeldt, J. O.; Mehlig, K.; Westerlund, F.; Mehlig, B.

    2015-12-01

    The extension of DNA confined to nanochannels has been studied intensively and in detail. However, quantitative comparisons between experiments and model calculations are difficult because most theoretical predictions involve undetermined prefactors, and because the model parameters (contour length, Kuhn length, effective width) are difficult to compute reliably, leading to substantial uncertainties. Here we use a recent asymptotically exact theory for the DNA extension in the "extended de Gennes regime" that allows us to compare experimental results with theory. For this purpose, we performed experiments measuring the mean DNA extension and its standard deviation while varying the channel geometry, dye intercalation ratio, and ionic strength of the buffer. The experimental results agree very well with theory at high ionic strengths, indicating that the model parameters are reliable. At low ionic strengths, the agreement is less good. We discuss possible reasons. In principle, our approach allows us to measure the Kuhn length and the effective width of a single DNA molecule and more generally of semiflexible polymers in solution.

  12. Laboratory and theoretical models of planetary-scale instabilities and waves

    NASA Technical Reports Server (NTRS)

    Hart, John E.; Toomre, Juri

    1990-01-01

    Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. In the past it has been impossible to accurately model the effects of sphericity on these motions in the laboratory because of the invariant relationship between the uni-directional terrestrial gravity and the rotation axis of an experiment. Researchers studied motions of rotating convecting liquids in spherical shells using electrohydrodynamic polarization forces to generate radial gravity, and hence centrally directed buoyancy forces, in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. Recent efforts at interpretation led to numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. In addition, efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument led to theoretical and numerical models of baroclinic instability. Rather surprising properties were discovered, which may be useful in generating rational (rather than artificially truncated) models for nonlinear baroclinic instability and baroclinic chaos.

  13. AI/OR computational model for integrating qualitative and quantitative design methods

    NASA Technical Reports Server (NTRS)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  14. Quantitative structure-property relationship (QSPR) modeling of drug-loaded polymeric micelles via genetic function approximation.

    PubMed

    Wu, Wensheng; Zhang, Canyang; Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan

    2015-01-01

    Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments.
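
    A rough sketch of GFA-style model building (not the authors' implementation): evolve binary masks over candidate descriptors, scoring each subset by cross-validated linear regression and keeping the fittest masks. Descriptors and responses below are synthetic placeholders:

        # Genetic-algorithm-style descriptor selection for a QSPR model.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 12))                  # 12 hypothetical polymer descriptors
        y = X[:, 0] - 2 * X[:, 3] + 0.5 * rng.normal(size=60)  # synthetic loading capacity

        def fitness(mask):
            if not mask.any():
                return -np.inf
            return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

        pop = rng.random((30, 12)) < 0.3               # initial population of masks
        for generation in range(40):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]    # keep the 10 fittest masks
            children = parents[rng.integers(0, 10, 20)].copy()
            flip = rng.random(children.shape) < 0.05   # mutation
            children[flip] = ~children[flip]
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected descriptors:", np.flatnonzero(best))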

  15. A non-traditional fluid problem: transition between theoretical models from Stokes’ to turbulent flow

    NASA Astrophysics Data System (ADS)

    Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.

    2018-05-01

    In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes’ law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes’ model, and the causes of these apparent ‘anomalies’ (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse, both theoretically and experimentally, the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of the different forces acting on the system, such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed exercise can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as a laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building process in a didactic framework.
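
    A sketch of that progressive-modeling idea in a simpler setting than the paper's chain-sphere system (our construction): integrating a falling sphere in water while successively switching on buoyancy, Stokes friction, and quadratic drag shows how each added force term changes the predicted velocity:

        # Falling sphere with force terms switched on one by one.
        import numpy as np
        from scipy.integrate import solve_ivp

        rho_s, rho_f = 7800.0, 1000.0     # steel sphere in water (kg/m^3)
        d, g, mu = 0.01, 9.81, 1.0e-3     # diameter (m), gravity, water viscosity (Pa s)
        m = rho_s * np.pi * d**3 / 6      # sphere mass
        A = np.pi * d**2 / 4              # cross-sectional area

        def make_rhs(buoyancy, stokes, quadratic):
            def rhs(t, v):
                F = m * g                                           # weight
                if buoyancy:
                    F -= rho_f * np.pi * d**3 / 6 * g               # Archimedes
                if stokes:
                    F -= 3 * np.pi * mu * d * v[0]                  # Stokes friction
                if quadratic:
                    F -= 0.5 * rho_f * 0.44 * A * v[0] * abs(v[0])  # drag, Cd ~ 0.44
                return [F / m]
            return rhs

        for flags in [(False, False, False), (True, True, False), (True, True, True)]:
            sol = solve_ivp(make_rhs(*flags), (0.0, 2.0), [0.0], max_step=1e-3)
            print(flags, "v(2 s) =", round(float(sol.y[0, -1]), 3), "m/s")

    Free fall predicts about 19.6 m/s after two seconds, buoyancy plus Stokes friction barely slows a centimetre-scale steel sphere, and only the quadratic drag term brings the velocity down to a realistic terminal value near 1.4 m/s, which is the kind of contrast the paper exploits didactically.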

  16. Theoretical vibro-acoustic modeling of acoustic noise transmission through aircraft windows

    NASA Astrophysics Data System (ADS)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-06-01

    In this paper, a full vibro-acoustic model for sound transmission across a multi-pane aircraft window is developed. The proposed model is applied to a set of window models to perform extensive theoretical parametric studies. The studied window configurations generally simulate the passenger window designs of modern aircraft classes, which have an exterior multi-Plexiglas pane, an interior single acrylic glass pane and a dimmable glass ("smart" glass), all separated by thin air cavities. The sound transmission loss (STL) characteristics of three different models, triple-, quadruple- and quintuple-paned windows identical in size and surface density, are analyzed with a view to improving the acoustic insulation performance. Typical results describing the influence of several system parameters, such as the thicknesses, number and spacing of the window panes, on the transmission loss are then investigated. In addition, a comparison study is carried out to evaluate the acoustic reduction capability of each window model. The STL results show that the higher-frequency sound transmission loss can be improved by increasing the number of window panes; however, the low-frequency performance then decreases, particularly at the mass-spring resonances.

  17. A Theoretical Manpower Optimization Model for the Air Force Installation Contracting Agency (AFICA)

    DTIC Science & Technology

    2017-12-01

    development and enterprise-wide market intelligence. The theoretical manpower model proposed by this project optimizes manpower in respect to contracting...procurement needs and/or more effectively leverage spend, market position, market knowledge (e.g., price benchmarks), and capabilities (e.g., IT...CONS level because the process savings are not clearly traceable to a contract action. For example, to augment the market intelligence of category

  18. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    PubMed

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

    We develop a theoretical model that can be used to predict stimulated Brillouin scattering (SBS) threshold in optical fibers that arises through the effect of Brillouin pump recycling technique. Obtained simulation results from our model are in close agreement with our experimental results. The developed model utilizes single mode optical fiber of different lengths as the Brillouin gain media. For 5-km long single mode fiber, the calculated threshold power for SBS is about 16 mW for conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The decrement of SBS threshold is due to longer interaction lengths between Brillouin pump and Stokes wave.
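
    As a back-of-envelope check of the quoted numbers (not the authors' model), the conventional Smith-type criterion P_th ≈ 21·K·A_eff/(g_B·L_eff) with typical single-mode-fiber parameters lands near the 16 mW threshold reported for the conventional configuration:

        # SBS threshold estimate for a 5 km single-mode fiber.
        import numpy as np

        L = 5e3                                        # fiber length (m)
        alpha = 0.2 * np.log(10) / 10 / 1e3            # attenuation, 0.2 dB/km -> 1/m
        A_eff = 80e-12                                 # effective mode area (m^2), typical SMF
        g_B = 5e-11                                    # Brillouin gain coefficient (m/W), typical
        K = 2.0                                        # polarization factor (between 1 and 2)

        L_eff = (1 - np.exp(-alpha * L)) / alpha       # effective interaction length
        P_th = 21 * K * A_eff / (g_B * L_eff)
        print(f"L_eff = {L_eff/1e3:.2f} km, SBS threshold ~ {P_th*1e3:.1f} mW")

    With these assumed values the estimate comes out around 15 mW, consistent with the roughly 16 mW the abstract quotes before pump recycling is applied.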

  19. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    NASA Astrophysics Data System (ADS)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herbal and food products threaten human health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO, combined with an improved partial least-squares regression (PLSR) model. The absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between different concentrations, provided a clear criterion for selecting the input interval, and improved the accuracy of the detection result. The experimental results indicate that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
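
    A hedged sketch of the chemometric step on synthetic spectra using scikit-learn; the 2DCOS-based band selection that distinguishes the authors' method is not reproduced here:

        # PLS regression from simulated THz absorbance spectra to concentration.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(4)
        freqs = np.linspace(0.2, 1.6, 200)                 # frequency axis (THz)
        conc = rng.uniform(0, 1, 40)                       # AO concentrations (a.u.)
        peak = np.exp(-((freqs - 0.9) / 0.1) ** 2)         # toy absorption feature
        X = conc[:, None] * peak + 0.01 * rng.normal(size=(40, 200))

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, X, conc, cv=5).ravel()
        print("cross-validated RMSE:", np.sqrt(np.mean((pred - conc) ** 2)))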

  20. Theoretical Models of Low-Resolution Microwave Rotational Spectra of Ethane- and Propanethiol

    NASA Astrophysics Data System (ADS)

    Kadjar, Ch. O.; Kazimova, S. B.; Hasanova, A. S.; Ismailzadeh, G. I.; Menzeleyev, M. R.

    2018-05-01

    Additive modeling of low-resolution microwave spectra of heteroisomeric substituted hydrocarbons produced theoretical spectra of ethanethiol and propanethiol in the range 0-2 THz with maxima at 465 ± 20 and 240 ± 20 GHz. More precise calculations in a narrow frequency band of these ranges used spectral line half-widths of 1.5, 0.8, and 0.5 MHz that modeled conditions in different layers of Earth's troposphere. The strongest extrema of the low-resolution spectra of the studied molecules were found at 486 ± 5, 446 ± 5, and 436 ± 5 (ethanethiol) and at 257 ± 5, 239 ± 5, and 234 ± 5 GHz (propanethiol). Various aspects of the application of the results were discussed.

  1. Models of Pre-Service Teachers' Academic Achievement: The Influence of Cognitive Motivational Variables

    ERIC Educational Resources Information Center

    Castro-Villarreal, Felicia; Guerra, Norma; Sass, Daniel; Hseih, Pei-Hsuan

    2014-01-01

    Theoretical models were tested using structural equation modeling to evaluate the interrelations among cognitive motivational variables and academic achievement using a sample of 128 predominately Hispanic pre-service teachers enrolled in two undergraduate educational psychology classes. Data were gathered using: (1) a quantitative questionnaire…

  2. Chemical and morphological gradient scaffolds to mimic hierarchically complex tissues: From theoretical modeling to their fabrication.

    PubMed

    Marrella, Alessandra; Aiello, Maurizio; Quarto, Rodolfo; Scaglione, Silvia

    2016-10-01

    Porous multiphase scaffolds have been proposed in different tissue engineering applications because of their potential to artificially recreate the heterogeneous structure of hierarchically complex tissues. Recently, graded scaffolds have also been realized, offering a continuum at the interface among different phases for enhanced structural stability of the scaffold. However, their internal architecture is often obtained empirically and the architectural parameters are rarely predetermined. The aim of this work is to offer a theoretical model as a tool for the design and fabrication of functional and structurally complex graded scaffolds with predicted morphological and chemical features, overcoming the time-consuming trial-and-error experimental method. The developed mathematical model uses the laws of motion, the Stokes equations, and viscosity laws to describe the dependence between centrifugation speed and fiber/particle sedimentation velocity over time, which finally affects the fiber packing and thus the total porosity of the 3D scaffolds. The efficacy of the theoretical model was tested by realizing engineered graded grafts for osteochondral tissue engineering applications. The procedure, based on a combined centrifugation and freeze-drying technique, was applied to both polycaprolactone (PCL) and collagen type I (COL) to test the versatility of the entire process. A functional gradient was combined with the morphological one by adding hydroxyapatite (HA) powders to mimic the bone mineral phase. The results show that 3D bioactive, morphologically and chemically graded grafts can be properly designed and realized in agreement with the theoretical model. Biotechnol. Bioeng. 2016;113:2286-2297. © 2016 Wiley Periodicals, Inc.
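
    A minimal sketch of the settling relation such a model builds on, assuming Stokes drag under a centrifugal acceleration a = ω²r; the parameter values are illustrative, not the paper's:

        # Stokes settling velocity of a fiber/particle in a spinning centrifuge.
        import numpy as np

        def settling_velocity(d, rho_p, rho_f, mu, omega, r):
            """Stokes settling velocity of a particle of diameter d at radius r
            in a centrifuge spinning at angular velocity omega (rad/s)."""
            a = omega**2 * r                        # centrifugal acceleration
            return (rho_p - rho_f) * d**2 * a / (18 * mu)

        rpm = 3000
        omega = 2 * np.pi * rpm / 60
        v = settling_velocity(d=10e-6, rho_p=1300.0, rho_f=1000.0,  # PCL-like fiber in water
                              mu=1e-3, omega=omega, r=0.08)
        print(f"settling velocity ~ {v*1e3:.2f} mm/s")

    Because the velocity scales with ω²r, the centrifugation speed directly tunes how quickly fibers pack, which is the knob the model uses to predetermine the porosity gradient.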

  3. Toward a comprehensive, theoretical model of compassion fatigue: An integrative literature review.

    PubMed

    Coetzee, Siedine K; Laschinger, Heather K S

    2018-03-01

    This study was an integrative literature review in relation to compassion fatigue models, appraising these models, and developing a comprehensive theoretical model of compassion fatigue. A systematic search on PubMed, EbscoHost (Academic Search Premier, E-Journals, Medline, PsycINFO, Health Source Nursing/Academic Edition, CINAHL, MasterFILE Premier and Health Source Consumer Edition), gray literature, and manual searches of included reference lists was conducted in 2016. The studies (n = 11) were analyzed, and the strengths and limitations of the compassion fatigue models identified. We further built on these models through the application of the conservation of resources theory and the social neuroscience of empathy. The compassion fatigue model shows that it is not empathy that puts nurses at risk of developing compassion fatigue, but rather a lack of resources, inadequate positive feedback, and the nurse's response to personal distress. By acting on these three aspects, the risk of developing compassion fatigue can be addressed, which could improve the retention of a compassionate and committed nurse workforce. © 2017 John Wiley & Sons Australia, Ltd.

  4. Theoretical model of an optothermal microactuator directly driven by laser beams

    NASA Astrophysics Data System (ADS)

    Han, Xu; Zhang, Haijun; Xu, Rui; Wang, Shuying; Qin, Chun

    2015-07-01

    This paper proposes a novel method of optothermal microactuation based on single and dual laser beams (spots). The theoretical model of the optothermal temperature distribution of an expansion arm is established and simulated, indicating that the maximum temperature of the arm irradiated by dual laser spots, at the same laser power level, is much lower than that irradiated by one single spot, and thus the risk of burning out and damaging the optothermal microactuator (OTMA) can be effectively avoided. To verify the presented method, a 750 μm long OTMA with a 100 μm wide expansion arm is designed and microfabricated, and single/dual laser beams with a wavelength of 650 nm are adopted to carry out experiments. The experimental results showed that the optothermal deflection of the OTMA under the irradiation of dual laser spots is larger than that under the irradiation of a single spot with the same power, which is in accordance with theoretical prediction. This method of optothermal microactuation may expand the practical applications of microactuators, which serve as critical units in micromechanical devices and micro-opto-electro-mechanical systems (MOEMS).

  5. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    PubMed

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, not without its own issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.

  6. [Cognitive Reserve Scale: testing the theoretical model and norms].

    PubMed

    Leon-Estrada, I; Garcia-Garcia, J; Roldan-Tapia, L

    2017-01-01

    The cognitive reserve theory may help to explain differences in cognitive performance among individuals with similar cognitive decline, as well as among healthy individuals. However, more psychometric analyses are needed to warrant the use of tests for assessing cognitive reserve. To study validity evidence in relation to the structure of the Cognitive Reserve Scale (CRS) and to create reference norms for interpreting the scores. A total of 172 participants completed the scale and were classified into two age groups: 36-64 years (n = 110) and 65-88 years (n = 62). The exploratory factor analysis using ESEM revealed that the data fitted the proposed model. Overall, the discriminative indices were acceptable (between 0.21 and 0.50), and congruence was observed for the periods of young adulthood, adulthood and late adulthood in both age groups. Besides, the reliability index (Cronbach's alpha: 0.80) and the typical error of measurement (mean: 51.40 ± 11.11) showed adequate values for this type of instrument. The CRS thus appears consistent with the hypothesized theoretical model, and the scores may be interpreted using the norms presented. This study provides support for the use of the CRS in research.

  7. A game theoretic model of drug launch in India.

    PubMed

    Bhaduri, Saradindu; Ray, Amit Shovon

    2006-01-01

    There is a popular belief that drug launch is delayed in developing countries like India because of delayed technology transfer due to a 'post-launch' imitation threat under weak intellectual property rights (IPR). Indeed, this belief was a major reason for the imposition of the Trade-Related Intellectual Property Rights regime under the WTO. This construct overlooks the fact that in countries like India, with strong reverse-engineering capabilities, imitation can occur even before formal technology transfer, and it fails to recognize the first-mover advantage in pharmaceutical markets. This paper argues that the first-mover advantage is important and will vary across therapeutic areas, especially in developing countries with diverse levels of patient enlightenment and quality awareness. We construct a game-theoretic model of incomplete information to examine the delay in drug launch in terms of the costs and benefits of the first move, assumed to be primarily a function of the therapeutic area of the new drug. Our model shows that drug launch will be delayed only for external (infective/communicable) diseases, while drugs for internal, non-communicable diseases (accounting for the overwhelming majority of new drug discoveries) will be launched without delay.

  8. Quantitative aspects of vibratory mobilization and break-up of non-wetting fluids in porous media

    NASA Astrophysics Data System (ADS)

    Deng, Wen

    Seismic stimulation is a promising technology aimed at mobilizing entrapped non-wetting fluids in the subsurface. The applications include enhanced oil recovery or, alternatively, facilitation of the movement of immiscible or partly miscible gases far into porous media, for example for CO2 sequestration. This work is devoted to detailed quantitative studies of the two basic pore-scale mechanisms behind seismic stimulation: the mobilization of bubbles or drops entrapped in pore constrictions by capillary forces, and the break-up of continuous long bubbles or drops. In typical oil-production operations, oil is produced by the natural reservoir-pressure drive during the primary stage and by artificial water flooding at the secondary stage. Capillary forces act to retain a substantial residual fraction of reservoir oil even after water flooding. Seismic stimulation is an unconventional technology that serves to overcome capillary barriers in individual pores and liberate the entrapped oil by adding an oscillatory inertial forcing to the external pressure gradient. According to our study, the effect of seismic stimulation on oil mobilization is highly dependent on the frequencies and amplitudes of the seismic waves: generally, the lower the frequency and the larger the amplitude, the more effective the mobilization. To describe the mobilization process, we developed two theoretical hydrodynamics-based models and justified both using computational fluid dynamics (CFD). Our theoretical models have a significant advantage over CFD in that they reduce the computational time significantly, while providing correct practical guidance regarding the required field parameters of vibroseismic stimulation, such as the amplitude and frequency of the seismic field. The models also provide important insights into the basic mechanisms governing vibration-driven two-phase flow in constricted capillaries. In a waterflooded reservoir, oil can be recovered most efficiently by

  9. Comparison of Theoretical Stresses and Deflections of Multicell Wings with Experimental Results Obtained from Plastic Models

    NASA Technical Reports Server (NTRS)

    Zender, George W

    1956-01-01

    The experimental deflections and stresses of six plastic multicell-wing models of unswept, delta, and swept plan form are presented and compared with previously published theoretical results obtained by the electrical analog method. The comparisons indicate that the theory is reliable except for the evaluation of stresses in the vicinity of the leading edge of delta wings and the leading and trailing edges of swept wings. The stresses in these regions are questionable, apparently because of simplifications employed in idealizing the actual structure for theoretical purposes and because of local effects of concentrated loads.

  10. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  11. Evaluating quantitative and conceptual models of speech production: how does SLAM fare?

    PubMed

    Walker, Grant M; Hickok, Gregory

    2016-04-01

    In a previous publication, we presented a new computational model called SLAM (Walker & Hickok, Psychonomic Bulletin & Review, doi: 10.3758/s13423-015-0903), based on the hierarchical state feedback control (HSFC) theory (Hickok, Nature Reviews Neuroscience, 13(2), 135-145, 2012). In his commentary, Goldrick (Psychonomic Bulletin & Review, doi: 10.3758/s13423-015-0946-9) claims that SLAM does not represent a theoretical advancement, because it cannot be distinguished from an alternative lexical + postlexical (LPL) theory proposed by Goldrick and Rapp (Cognition, 102(2), 219-260, 2007). First, we point out that SLAM implements a portion of a conceptual model (HSFC) that encompasses LPL. Second, we show that SLAM accounts for a lexical bias present in sound-related errors that LPL does not explain. Third, we show that SLAM's explanatory advantage is not a result of approximating the architectural or computational assumptions of LPL, since an implemented version of LPL fails to provide the same fit improvements as SLAM. Finally, we show that incorporating a mechanism that violates some core theoretical assumptions of LPL, making it more like SLAM in terms of interactivity, allows the model to capture some of the same effects as SLAM. SLAM therefore provides new modeling constraints regarding interactions among processing levels, while also elaborating on the structure of the phonological level. We view this as evidence that an integration of psycholinguistic, neuroscience, and motor control approaches to speech production is feasible and may lead to substantial new insights.

  12. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies, including study designs, markers genotyped, and covariates. The effects of genetic variants may also differ from population to population, i.e., there may be heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing method, the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT, and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.
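
    The likelihood-ratio test used here follows the generic nested-model recipe: twice the log-likelihood difference is referred to a chi-squared distribution. A sketch with placeholder numbers (not values from the paper):

        # Generic likelihood-ratio test for nested models; log-likelihoods
        # and degrees of freedom below are placeholders.
        from scipy.stats import chi2

        def lrt(loglik_full, loglik_null, df_diff):
            stat = 2.0 * (loglik_full - loglik_null)   # asymptotically chi-squared
            return stat, chi2.sf(stat, df_diff)

        stat, p = lrt(loglik_full=-1234.5, loglik_null=-1242.1, df_diff=3)
        print(f"LRT = {stat:.2f}, p = {p:.4f}")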

  13. Quantitative Finance

    NASA Astrophysics Data System (ADS)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  14. Theoretical and observational constraints on Tachyon Inflation

    NASA Astrophysics Data System (ADS)

    Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel; Hidalgo, Juan Carlos; Rigel Mora-Luna, Refugio

    2018-03-01

    We constrain several models of Tachyonic Inflation derived from the large-N formalism by considering theoretical aspects as well as the latest observational data. On the theoretical side, we assess the field range of our models by means of the excursion of the equivalent canonical field. On the observational side, we employ BK14+PLANCK+BAO data to perform a parameter estimation analysis as well as a Bayesian model selection to distinguish the most favoured models among the four classes presented here. We observe that the original potential V ∝ sech(T) is strongly disfavoured by observations with respect to a reference model with flat priors on inflationary observables. This realisation of Tachyon inflation also presents a large field range which may demand further quantum corrections. We also provide examples of potentials derived from the polynomial and the perturbative classes which are both statistically favoured and theoretically acceptable.

  15. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  16. Quantitative theoretical analysis of lifetimes and decay rates relevant in laser cooling BaH

    NASA Astrophysics Data System (ADS)

    Moore, Keith; Lane, Ian C.

    2018-05-01

    Tiny radiative losses below the 0.1% level can prove ruinous to the effective laser cooling of a molecule. In this paper the laser cooling of a hydride is studied with rovibronic detail using ab initio quantum chemistry in order to document the decays to all possible electronic states (not just the vibrational branching within a single electronic transition) and to identify the most populated final quantum states. The effect of spin-orbit and associated couplings on the properties of the lowest excited states of BaH is analysed in detail. The lifetimes of the A²Π₁/₂, H²Δ₃/₂ and E²Π₁/₂ states are calculated (136 ns, 5.8 μs and 46 ns respectively) for the first time, while the theoretical value for B²Σ⁺₁/₂ is in good agreement with experiments. Using a simple rate model the numbers of absorption-emission cycles possible for both one- and two-colour cooling on the competing electronic transitions are determined, and it is clearly demonstrated that the A²Π–X²Σ⁺ transition is superior to B²Σ⁺–X²Σ⁺, where multiple tiny decay channels degrade its efficiency. Further possible improvements to the cooling method are proposed.

  17. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC), estimated from remote sensing data, we construct a new quantitative model for ecological compensation, using the county as the study unit, and determine a standard value in order to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  18. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC), estimated from remote sensing data, we construct a new quantitative model for ecological compensation, using the county as the study unit, and determine a standard value in order to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  19. Quantitative Structure-Property Relationship (QSPR) Modeling of Drug-Loaded Polymeric Micelles via Genetic Function Approximation

    PubMed Central

    Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan

    2015-01-01

    Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments. PMID:25780923

  20. Predicting phenolic acid absorption in Caco-2 cells: a theoretical permeability model and mechanistic study.

    PubMed

    Farrell, Tracy L; Poquet, Laure; Dew, Tristan P; Barber, Stuart; Williamson, Gary

    2012-02-01

    There is a considerable need to rationalize the membrane permeability and mechanism of transport of potential nutraceuticals. The aim of this investigation was to develop a theoretical permeability equation, based on a reported descriptive absorption model, enabling calculation of the transcellular component of absorption across Caco-2 monolayers. Published data for the Caco-2 permeability of 30 drugs transported by the transcellular route were correlated with the descriptors 1-octanol/water distribution coefficient (log D, pH 7.4) and size, based on molecular mass. Nonlinear regression analysis was used to derive a set of model parameters a', β', and b' with an integrated molecular mass function. The new theoretical transcellular permeability (TTP) model achieved a good fit to the published data (R² = 0.93) and predicted reasonably well (R² = 0.86) the experimental apparent permeability coefficient (P(app)) for nine non-training-set compounds reportedly transported by the transcellular route. For the first time, the TTP model was used to predict the absorption characteristics of six phenolic acids, and this original investigation was supported by in vitro Caco-2 cell mechanistic studies, which suggested that deviation of the P(app) value from the predicted transcellular permeability (P(app)(trans)) may be attributed to the involvement of active uptake, efflux transporters, or paracellular flux.
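
    A sketch of fitting such a permeability model by nonlinear regression is shown below. The functional form (a lipophilicity term penalised by molecular size) and all data are illustrative stand-ins, not the published TTP equation or its parameters.

        # Illustrative nonlinear fit of log10(Papp) against log D and
        # molecular mass; form and data are assumptions, not the TTP model.
        import numpy as np
        from scipy.optimize import curve_fit

        def perm_model(X, a, beta, b):
            logD, mass = X
            return a + beta * logD - b * np.log10(mass)

        rng = np.random.default_rng(2)
        logD = rng.uniform(-2, 4, 30)
        mass = rng.uniform(100, 600, 30)
        y = perm_model((logD, mass), -4.5, 0.4, 1.2) + rng.normal(0, 0.15, 30)

        params, _ = curve_fit(perm_model, (logD, mass), y)
        print("fitted a', beta', b':", np.round(params, 2))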

  1. A theoretical model for analysing gender bias in medicine.

    PubMed

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  2. Team Resilience as a Second-Order Emergent State: A Theoretical Model and Research Directions

    PubMed Central

    Bowers, Clint; Kreutzer, Christine; Cannon-Bowers, Janis; Lamb, Jerry

    2017-01-01

    Resilience has been recognized as an important phenomenon for understanding how individuals overcome difficult situations. However, it is not only individuals who face difficulties; it is not uncommon for teams to experience adversity. When they do, they must be able to overcome these challenges without performance decrements. This manuscript presents a theoretical model that might be helpful in conceptualizing this important construct. Specifically, it describes team resilience as a second-order emergent state. We also include research propositions that follow from the model. PMID:28861013

  3. On the Road to Translation for PTSD Treatment: Theoretical and Practical Considerations of the Use of Human Models of Conditioned Fear for Drug Development.

    PubMed

    Risbrough, Victoria B; Glenn, Daniel E; Baker, Dewleen G

    The use of quantitative, laboratory-based measures of threat in humans for proof-of-concept studies and target development for novel drug discovery has grown tremendously in the last 2 decades. In particular, in the field of posttraumatic stress disorder (PTSD), human models of fear conditioning have been critical in shaping our theoretical understanding of fear processes and importantly, validating findings from animal models of the neural substrates and signaling pathways required for these complex processes. Here, we will review the use of laboratory-based measures of fear processes in humans including cued and contextual conditioning, generalization, extinction, reconsolidation, and reinstatement to develop novel drug treatments for PTSD. We will primarily focus on recent advances in using behavioral and physiological measures of fear, discussing their sensitivity as biobehavioral markers of PTSD symptoms, their response to known and novel PTSD treatments, and in the case of d-cycloserine, how well these findings have translated to outcomes in clinical trials. We will highlight some gaps in the literature and needs for future research, discuss benefits and limitations of these outcome measures in designing proof-of-concept trials, and offer practical guidelines on design and interpretation when using these fear models for drug discovery.

  4. An appraisal of theoretical approaches to examining behaviours in relation to Human Papillomavirus (HPV) vaccination of young women

    PubMed Central

    Batista Ferrer, Harriet; Audrey, Suzanne; Trotter, Caroline; Hickman, Matthew

    2015-01-01

    Background: Interventions to increase uptake of Human Papillomavirus (HPV) vaccination by young women may be more effective if they are underpinned by an appropriate theoretical model or framework. The aims of this review were to describe the theoretical models or frameworks used to explain behaviours in relation to HPV vaccination of young women, and to consider the appropriateness of the theoretical models or frameworks used for informing the development of interventions to increase uptake. Methods: Primary studies were identified through a comprehensive search of databases from inception to December 2013. Results: Thirty-four relevant studies were identified, of which 31 incorporated psychological health behaviour models or frameworks and three used socio-cultural models or theories. The primary studies used a variety of approaches to measure a diverse range of outcomes in relation to the behaviours of professionals, parents, and young women. The majority appeared to use theory appropriately throughout. About half of the quantitative studies presented data in relation to goodness-of-fit tests and the proportion of variability explained. Conclusion: Due to diverse approaches and inconsistent findings across studies, the current contribution of theory to understanding and promoting HPV vaccination uptake is difficult to assess. Ecological frameworks encourage the integration of individual and social approaches by encouraging exploration of the intrapersonal, interpersonal, organisational, community and policy levels when examining public health issues. Given the small number of studies using such an approach, combined with the importance of these factors in predicting behaviour, more research in this area is warranted. PMID:26314783

  5. Theoretical modeling of the plasma-assisted catalytic growth and field emission properties of graphene sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Suresh C.; Gupta, Neha

    2015-12-15

    A theoretical model for the catalyst-assisted growth of graphene sheets in the presence of plasma has been investigated. It is observed that the plasma parameters can strongly affect the growth and field emission properties of the graphene sheet. The model developed accounts for the charging rate of the graphene sheet; the number density of electrons, ions, and neutral atoms; various elementary processes on the surface of the catalyst nanoparticle; surface diffusion and accretion of ions; and the formation of carbon clusters and large graphene islands. In our investigation, it is found that the thickness of the graphene sheet decreases with the plasma parameters, the number density of hydrogen ions and the RF power, and consequently the field emission of electrons from the graphene sheet surface increases. The time evolution of the height of the graphene sheet with ion density and the sticking coefficient of carbon species has also been examined. Some of our theoretical results are in compliance with the experimental observations.

  6. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  7. Statistical design of quantitative mass spectrometry-based proteomic experiments.

    PubMed

    Oberg, Ann L; Vitek, Olga

    2009-05-01

    We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
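
    For the replicate-number question the review raises, the standard two-group calculation can be sketched as follows; the effect size and variance values are illustrative assumptions, not numbers from the article.

        # Replicates per group for a two-sample comparison:
        # n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2
        import math
        from scipy.stats import norm

        def replicates_per_group(sigma, delta, alpha=0.05, power=0.8):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * (z * sigma / delta) ** 2)

        # e.g. detect a 1.5-fold change on the log2 scale with sd = 0.5
        print(replicates_per_group(sigma=0.5, delta=math.log2(1.5)))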

  8. Quantitative model for the blood pressure-lowering interaction of valsartan and amlodipine.

    PubMed

    Heo, Young-A; Holford, Nick; Kim, Yukyung; Son, Mijeong; Park, Kyungsoo

    2016-12-01

    The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and for combined administration were built with NONMEM 7.3. A two-compartment model with zero-order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA×(D1 × D2). D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively. ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were -0.171 (95% CI: -0.218, -0.143) for SBP and -0.0312 (95% CI: -0.07739, -0.00283) for DBP. These infra-additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra-additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. © 2016 The British Pharmacological Society.
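
    The interaction structure reported in the abstract is easy to express directly. In the sketch below, the ALPHA value for SBP is taken from the abstract, while the Imax and IC50 values and the concentrations are hypothetical placeholders, not the fitted NONMEM estimates.

        # Combined effect = (D1 + D2) + ALPHA * (D1 * D2), with D1, D2 the
        # Imax-model monotherapy effects. Imax/IC50 values are hypothetical.
        def imax_effect(conc, imax, ic50):
            return imax * conc / (ic50 + conc)

        def combined_sbp_effect(c1, c2, alpha=-0.171):   # ALPHA for SBP (abstract)
            d1 = imax_effect(c1, imax=0.25, ic50=10.0)   # amlodipine-like
            d2 = imax_effect(c2, imax=0.20, ic50=150.0)  # valsartan-like
            return (d1 + d2) + alpha * (d1 * d2)

        print(f"{combined_sbp_effect(12.0, 180.0):.3f}")  # fractional SBP lowering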

  9. Quantitative model for the blood pressure‐lowering interaction of valsartan and amlodipine

    PubMed Central

    Heo, Young‐A; Holford, Nick; Kim, Yukyung; Son, Mijeong

    2016-01-01

    Aims: The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. Methods: PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and for combined administration were built with NONMEM 7.3. Results: A two-compartment model with zero-order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA×(D1 × D2). D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively. ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were -0.171 (95% CI: -0.218, -0.143) for SBP and -0.0312 (95% CI: -0.07739, -0.00283) for DBP. These infra-additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. Conclusion: PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra-additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. PMID:27504853

  10. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    PubMed

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  11. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best-fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  12. Electromagnetic braking: A simple quantitative model

    NASA Astrophysics Data System (ADS)

    Levin, Yan; da Silveira, Fernando L.; Rizzato, Felipe B.

    2006-09-01

    A calculation is presented that quantitatively accounts for the terminal velocity of a cylindrical magnet falling through a long copper or aluminum pipe. The experiment and the theory are a dramatic illustration of Faraday's and Lenz's laws.
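
    A minimal version of such a quantitative model, assuming the eddy-current force is linear in the velocity (the usual approximation for this geometry), is:

        % Magnet of mass m falling in a conducting pipe with linear
        % eddy-current drag F = -kv; k depends on magnet and pipe properties.
        m\frac{dv}{dt} = mg - kv,
        \qquad
        v(t) = v_T\left(1 - e^{-kt/m}\right),
        \qquad
        v_T = \frac{mg}{k}.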

  13. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv, Crospovidone, and magnesium stearate. An experimental design approach was used to generate a 5-level (%, w/w) calibration sample set comprising 125 samples. The samples were prepared by weighing suitable amounts of the powders into separate 20-mL scintillation vials and mixing manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for the quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing demonstrated that the 2-level models were as effective as the 5-level ones, which reduced the number of calibration samples to 50. The models had a small bias for the quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed.
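
    A sketch of this kind of PLS calibration is shown below, with synthetic "spectra" standing in for real NIR measurements; the component count, latent-variable count and noise level are arbitrary assumptions.

        # PLS calibration of multi-component %w/w composition from spectra.
        # Synthetic data: mixtures of 4 random pure-component spectra.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        conc = rng.uniform(0, 1, size=(125, 4))        # 125 blends, 4 components
        pure = rng.random((4, 400))                    # pure-component "spectra"
        X = conc @ pure + rng.normal(0, 0.01, (125, 400))

        X_tr, X_te, y_tr, y_te = train_test_split(X, conc, random_state=0)
        pls = PLSRegression(n_components=6).fit(X_tr, y_tr)
        print("held-out R^2:", round(pls.score(X_te, y_te), 3))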

  14. Prediction of the amount of urban waste solids by applying a gray theoretical model.

    PubMed

    Li, Xiao-Ming; Zeng, Guang-Ming; Wang, Ming; Liu, Jin-Jin

    2003-01-01

    Urban waste solids are becoming one of the most crucial environmental problems. Several different technologies are commonly used for waste solids disposal, among which landfill is more favoured in China than the others, especially for urban waste solids. Most design work up to now has been based on a rough estimation of the amount of urban waste solids without any theoretical support, which leads to a series of problems. To meet the basic information requirements for the design work, the amount of urban waste solids was predicted in this research by applying the gray theoretical model GM(1,1) through non-linear differential equation simulation. The model parameters were estimated with the least squares method (LSM) by running a MATLAB program, and the hypothesis test results show that the residuals between the predicted and actual values approximately follow the normal distribution N(0, 0.21²), and that the probability of a residual falling within the range (-0.17, 0.19) is more than 95%, which clearly indicates that the model can be used for predicting the amount of waste solids; this has already been verified by the latest two years of urban waste solids data from Loudi City, China. With this model, the predicted amount of waste solids produced in Loudi City over the next 30 years is 8,049,000 tons in total.
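
    The GM(1,1) procedure itself is compact: accumulate the series, estimate the two grey parameters by least squares on the background values, and invert the time-response function. A sketch with an invented series (not the Loudi City data):

        # GM(1,1) grey forecast: x1 = cumsum(x0); fit dx1/dt + a*x1 = b by
        # least squares; forecast via the time response, then de-accumulate.
        import numpy as np

        def gm11_forecast(x0, horizon):
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                       # accumulated sequence
            z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(x1_hat, prepend=x1_hat[0])
            return x0_hat[len(x0):]

        waste = [18.2, 19.1, 20.3, 21.0, 22.4, 23.1]   # hypothetical, 10^4 t/yr
        print(np.round(gm11_forecast(waste, horizon=3), 1))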

  15. Theoretical modeling and experimental validation of a torsional piezoelectric vibration energy harvesting system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Zhou, Wanlu; Kaluvan, Suresh; Zhang, Haifeng; Zuo, Lei

    2018-04-01

    Vibration energy harvesting has been extensively studied in recent years to explore a continuous power source for sensor networks and low-power electronics. Torsional vibration widely exists in mechanical engineering; however, it has not yet been well exploited for energy harvesting. This paper presents a theoretical model and an experimental validation of a torsional vibration energy harvesting system comprised of a shaft and a shear-mode piezoelectric transducer. The position of the piezoelectric transducer on the surface of the shaft is parameterized by two variables that are optimized to obtain the maximum power output. The piezoelectric transducer can work in the d15 mode (pure shear mode), the coupled mode of d31 and d33, or the coupled mode of d33, d31 and d15, when attached at different angles. Approximate expressions for voltage and power are derived from the theoretical model, and they gave predictions in good agreement with analytical solutions. Physical interpretations of the implicit relationship between the power output and the position parameters of the piezoelectric transducer are given based on the derived approximate expressions. The optimal position and angle of the piezoelectric transducer are determined, in which case the transducer works in the coupled mode of d15, d31 and d33.

  16. Theoretical modeling of PEB procedure on EUV resist using FDM formulation

    NASA Astrophysics Data System (ADS)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2018-03-01

    The semiconductor manufacturing industry has continually reduced feature sizes for enhanced productivity and performance, and the Extreme Ultraviolet (EUV) light source is considered a promising solution for further downsizing. The EUV lithography procedure involves complex photochemical reactions in the photoresist, which causes technical difficulties in constructing a theoretical framework that facilitates rigorous investigation of the underlying mechanism. Thus, we formulated a finite difference method (FDM) model of the post exposure bake (PEB) process on a positive chemically amplified resist (CAR), involving acid diffusion coupled with the deprotection reaction. The model is based on Fick's second law for diffusion and a first-order chemical rate law for deprotection. Two kinetic parameters, the diffusion coefficient of the acid and the rate constant of deprotection, obtained by experiment and atomic-scale simulation, were applied to the model. As a result, we obtained the time evolution of the protection ratio of each functional group in the resist monomer, which can be used to predict the resulting polymer morphology after the overall chemical reactions. This achievement will be a cornerstone of multiscale modeling that provides fundamental understanding of the important factors for EUV performance and the rational design of next-generation photoresists.
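
    A one-dimensional toy version of such an FDM formulation, pairing an explicit Fick's-law diffusion update with first-order deprotection kinetics, is sketched below; the grid, diffusivity and rate constant are placeholders, not the calibrated parameters of the paper.

        # Explicit 1-D FDM: acid diffuses (Fick's second law) while the
        # protecting-group fraction decays at a first-order rate driven by
        # the local acid concentration. Periodic boundaries for simplicity.
        import numpy as np

        nx, dx = 100, 1.0            # grid points, spacing (nm)
        D, k_r = 0.5, 0.05           # diffusivity (nm^2/s), rate (1/s)
        dt = 0.4 * dx**2 / (2 * D)   # stable explicit time step

        acid = np.zeros(nx); acid[45:55] = 1.0   # acid from exposed region
        protected = np.ones(nx)                  # protection ratio

        for _ in range(2000):
            lap = (np.roll(acid, 1) - 2 * acid + np.roll(acid, -1)) / dx**2
            acid += dt * D * lap
            protected *= np.exp(-k_r * acid * dt)
        print("min protection ratio:", round(protected.min(), 3))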

  17. Within tree variation of lignin, extractives, and microfibril angle coupled with the theoretical and near infrared modeling of microfibril angle

    Treesearch

    Brian K. Via; chi L. So; Leslie H. Groom; Todd F. Shupe; michael Stine; Jan Wikaira

    2007-01-01

    A theoretical model was built predicting the relationship between microfibril angle and lignin content at the Angstrom (Å) level. Both theoretical and statistical examination of experimental data supports a square root transformation of lignin content to predict microfibril angle. The experimental material used came from 10 longleaf pine (Pinus palustris)...

  18. A unified theoretical framework for mapping models for the multi-state Hamiltonian.

    PubMed

    Liu, Jian

    2016-11-28

    We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive models such as the well-known Meyer-Miller model.

  19. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  20. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent in computed risks, arising because the dose-response model parameters are estimated using limited epidemiological data, is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision-makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to the analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
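
    The two dose-response forms at issue are short enough to state directly; the parameter values below are illustrative, not posterior estimates from the paper.

        # Exact exponential model and the conventional beta-Poisson
        # approximation (valid roughly when beta >> 1 and beta >> alpha).
        import numpy as np

        def p_inf_exponential(dose, r):
            # Each organism independently causes infection with probability r
            return 1.0 - np.exp(-r * dose)

        def p_inf_beta_poisson_approx(dose, alpha, beta):
            return 1.0 - (1.0 + dose / beta) ** (-alpha)

        doses = np.logspace(0, 4, 5)
        print(p_inf_exponential(doses, r=1e-3).round(3))
        print(p_inf_beta_poisson_approx(doses, alpha=0.3, beta=100.0).round(3))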

  1. From information theory to quantitative description of steric effects.

    PubMed

    Alipour, Mojtaba; Safari, Zahra

    2016-07-21

    Immense efforts have been made in the literature to apply information theory descriptors to investigating the electronic structure of various systems. In the present study, information theoretic quantities, such as the Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to provide a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking experimental steric scales for different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of steric effects and the theoretical values of steric energies calculated from information theory functionals. Comparing the results obtained from the information theoretic quantities under the two representations, electron density and shape function, the Shannon entropy shows the best performance for this purpose. The usefulness of considering the contributions of functional group steric energies and geometries on the one hand, and of dissecting the effects of both global and local information measures simultaneously on the other, has also been explored. Furthermore, the utility of the information functionals for describing steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably well with the stability of systems and with experimental scales. Overall, these findings show that information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the ability of information theory to help theoreticians and experimentalists interpret different problems in real systems.
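
    The functionals named above have standard density-based definitions, which can be summarised as follows (ρ is the electron density; the shape function is σ = ρ/N):

        % Standard information-theoretic density functionals
        S = -\int \rho(\mathbf{r}) \ln \rho(\mathbf{r})\, d\mathbf{r}
        \quad \text{(Shannon entropy)},
        \qquad
        I = \int \frac{|\nabla \rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r}
        \quad \text{(Fisher information)},
        \qquad
        E = \int \rho^{2}(\mathbf{r})\, d\mathbf{r}
        \quad \text{(Onicescu energy, second order)}.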

  2. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility and connecting the open facilities via a Steiner tree. The costs of building the Steiner tree, the facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact mixed integer programming formulations and three formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential-size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of the obtained bounds and the corresponding running time. We report optimal values for all but 16 instances, for which the obtained gaps are below 0.6%. PMID:25009366
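
    Schematically, a ConFL formulation couples opening, assignment and tree variables in a single objective. The notation below is assumed for illustration only and is not any of the article's ten formulations:

        % Sketch: open facilities y_f, assignments x_{cf}, tree edges z_e,
        % with opening costs o_f, assignment costs a_{cf}, edge costs s_e.
        \min \; \sum_{f} o_f y_f + \sum_{c,f} a_{cf} x_{cf} + \sum_{e} s_e z_e
        \quad \text{s.t.} \quad
        \sum_{f} x_{cf} = 1 \;\; \forall c, \qquad
        x_{cf} \le y_f \;\; \forall c,f,
        \qquad \{e : z_e = 1\} \text{ forms a Steiner tree spanning } \{f : y_f = 1\}.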

  3. Quantitative analysis of intra-Golgi transport shows intercisternal exchange for all cargo

    PubMed Central

    Dmitrieff, Serge; Rao, Madan; Sens, Pierre

    2013-01-01

    The mechanisms controlling the transport of proteins through the Golgi stack of mammalian and plant cells are the subject of intense debate, with two models, cisternal progression and intercisternal exchange, emerging as major contenders. A variety of transport experiments have claimed support for each of these models. We reevaluate these experiments using a single quantitative coarse-grained framework of intra-Golgi transport that accounts for both transport models and their many variants. Our analysis makes a definitive case for the existence of intercisternal exchange both for small membrane proteins and for large protein complexes; this implies that membrane structures larger than the typical protein-coated vesicles must be involved in transport. Notwithstanding, we find that current observations on protein transport cannot rule out cisternal progression as contributing significantly to the transport process. To discriminate between the different models of intra-Golgi transport, we suggest experiments and an analysis based on our extended theoretical framework that compare the dynamics of transiting and resident proteins. PMID:24019488

  4. Impact of Patient and Procedure Mix on Finances of Perinatal Centres – Theoretical Models for Economic Strategies in Perinatal Centres

    PubMed Central

    Hildebrandt, T.; Kraml, F.; Wagner, S.; Hack, C. C.; Thiel, F. C.; Kehl, S.; Winkler, M.; Frobenius, W.; Faschingbauer, F.; Beckmann, M. W.; Lux, M. P.

    2013-01-01

    Introduction: In Germany, cost and revenue structures of hospitals with defined treatment priorities are currently being discussed to identify uneconomic services. This discussion has also affected perinatal centres (PNCs) and represents a new economic challenge for PNCs. In addition to optimising the time spent in hospital, the hospital management needs to define the “best” patient mix based on costs and revenues. Method: Different theoretical models were proposed based on the cost and revenue structures of the University Perinatal Centre for Franconia (UPF). Multi-step marginal costing was then used to show the impact on operating profits of changes in services and bed occupancy rates. The current contribution margin accounting used by the UPF served as the basis for the calculations. The models demonstrated the impact of changes in services on costs and revenues of a level 1 PNC. Results: Contribution margin analysis was used to calculate profitable and unprofitable DRGs based on average inpatient cost per day. Nineteen theoretical models were created. The current direct costing used by the UPF and a theoretical model with a 100 % bed occupancy rate were used as reference models. Significantly higher operating profits could be achieved by doubling the number of profitable DRGs and halving the number of less profitable DRGs. Operating profits could be increased even more by changing the rates of profitable DRGs per bed occupancy. The exclusive specialisation on pathological and high-risk pregnancies resulted in operating losses. All models which increased the numbers of caesarean sections or focused exclusively on c-sections resulted in operating losses. Conclusion: These theoretical models offer a basis for economic planning. They illustrate the enormous impact potential changes can have on the operating profits of PNCs. Level 1 PNCs require high bed occupancy rates and a profitable patient mix to cover the extremely high costs incurred due to the services
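
    The multi-step contribution margin logic underlying these models reduces to a few lines; all figures below are invented for illustration, not UPF data.

        # Multi-step contribution margin: revenue minus variable costs
        # (CM I), minus attributable fixed costs (CM II), minus overheads.
        def operating_profit(cases, revenue_per_case, variable_cost_per_case,
                             attributable_fixed_costs, overhead_share):
            cm1 = cases * (revenue_per_case - variable_cost_per_case)
            cm2 = cm1 - attributable_fixed_costs
            return cm2 - overhead_share

        print(operating_profit(cases=120, revenue_per_case=3800.0,
                               variable_cost_per_case=2900.0,
                               attributable_fixed_costs=60000.0,
                               overhead_share=30000.0))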

  5. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    NASA Astrophysics Data System (ADS)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

    This paper deals with the theme of contagion in financial markets. To this aim, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two countries. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them, the so-called leader, exercises contagion on the others, the followers. Specifically, we employ an invariant probabilistic result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is very large, ranging from 1970 to 2014.

  6. In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models.

    PubMed

    Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie

    2010-06-07

    Bioluminescence tomography (BLT) is a new optical molecular imaging modality that can monitor both physiological and pathological processes by using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential in drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a micro-computed tomography (MicroCT) registration approach, and improved the quantitative reconstruction algorithm based on an adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model is more accurate in localization and quantification than the homogeneous mouse model with appropriate optical parameters, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal.

  7. Testing theoretical models of magnetic damping using an air track

    NASA Astrophysics Data System (ADS)

    Vidaurre, Ana; Riera, Jaime; Monsoriu, Juan A.; Giménez, Marcos H.

    2008-03-01

    Magnetic braking is a long-established application of Lenz's law. A rigorous analysis of the laws governing this problem involves solving Maxwell's equations in a time-dependent situation. Approximate models have been developed to describe different experimental results related to this phenomenon. In this paper we present a new method for the analysis of magnetic braking using a magnet fixed to the glider of an air track. The forces acting on the glider, a result of the eddy currents, can be easily observed and measured. As a consequence of the air track inclination, the glider accelerates at the beginning, although it asymptotically tends towards a uniform rectilinear movement characterized by a terminal speed. This speed depends on the interaction between the magnetic field and the conductivity properties of the air track. Compared with previous related approaches, in our experimental setup the magnet fixed to the glider produces a magnetic braking force which acts continuously, rather than over a short period of time. The experimental results satisfactorily concur with the theoretical models adapted to this configuration.
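
    A numeric sketch of the dynamics described, assuming a drag force linear in the velocity (parameter values are illustrative, not the paper's measurements):

        # Glider on an inclined air track with eddy-current drag F = -k v:
        # dv/dt = g sin(theta) - (k/m) v, integrated by explicit Euler.
        import math

        m, g, theta, k = 0.2, 9.81, math.radians(3.0), 0.15
        v, t, dt = 0.0, 0.0, 1e-3
        while t < 5.0:
            v += dt * (g * math.sin(theta) - (k / m) * v)
            t += dt
        v_T = m * g * math.sin(theta) / k   # asymptotic terminal speed
        print(f"v(5 s) = {v:.3f} m/s, terminal speed = {v_T:.3f} m/s")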

  8. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway

    PubMed Central

    Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

    Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for the quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then generalized cost functions are formulated which embody the factors shippers pay attention to when choosing a mode and path; they include the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and the generalized cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to construct an envelope curve to linearize it. Finally, a numerical example is presented to test the model and to show how to make a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
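
    As an illustration of how a congestion cost can enter such generalized cost functions, the sketch below uses the common BPR volume-delay form; this is a stand-in assumption, not necessarily the function calibrated in the paper.

        # BPR-style road congestion vs. a rail path cost with handling and
        # transfer components; all parameter values are illustrative.
        def road_cost(flow, capacity, free_flow_time, alpha=0.15, beta=4.0):
            # Travel time grows with the volume/capacity ratio
            return free_flow_time * (1 + alpha * (flow / capacity) ** beta)

        def rail_cost(line_haul_time, handling_time, transfer_time):
            # Rail path cost includes handling and transfer at stations
            return line_haul_time + handling_time + transfer_time

        print("road:", round(road_cost(900, 1000, 10.0), 2))
        print("rail:", rail_cost(9.0, 1.5, 1.0))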

  9. A theoretical framework for constructing elastic/plastic constitutive models of triaxial tests

    NASA Astrophysics Data System (ADS)

    Collins, Ian F.; Hilder, Tamsyn

    2002-11-01

    Modern ideas of thermomechanics are used to develop families of models describing the elastic/plastic behaviour of cohesionless soils deforming under triaxial conditions. Once the forms of the free energy and dissipation potential functions have been specified, the corresponding yield loci, flow rules, isotropic and kinematic hardening rules, as well as the elasticity law, are deduced in a systematic manner. The families contain the classical linear frictional (Coulomb-type) models and the classical critical state models as special cases. The generalized models discussed here include non-associated flow rules, shear as well as volumetric hardening, anisotropic responses and rotational yield loci. The various parameters needed to describe the models can be interpreted in terms of the ratio of the plastic work that is dissipated to that which is stored. Non-associated behaviour is found to occur whenever the division between dissipated and stored plastic work is unequal. Micro-level interpretations of stored plastic work are discussed. The models automatically satisfy the laws of thermodynamics, and there is no need to invoke any stability postulates. Some classical forms of the peak-strength/dilatancy relationship are established theoretically. Some representative drained and undrained paths are computed.

  10. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    NASA Astrophysics Data System (ADS)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth's climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackiei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 MPa^-1 s^-1, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptation for transport safety, placing the plant's vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon's tracheids suggests that environmental conditions of reduced relative
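
    A back-of-the-envelope check of the quoted conductivity, treating a tracheid lumen as a Hagen-Poiseuille tube; the published model also accounts for pit and wall resistances, so this is only an upper-bound sketch, and the 20 µm lumen diameter is an assumed illustrative value:

```python
# Hagen-Poiseuille lumen conductivity of an idealized tracheid:
# k = d^2 / (32 * mu), in m^2 Pa^-1 s^-1 (volume flux per pressure gradient).
mu = 1.0e-3      # Pa*s, viscosity of water near 20 C
d = 20e-6        # m, assumed lumen diameter (illustrative)

k_pa = d**2 / (32.0 * mu)    # m^2 / (Pa s)
k_mpa = k_pa * 1.0e6         # m^2 / (MPa s)
print(f"k = {k_mpa:.4f} m^2 MPa^-1 s^-1")
# ~0.0125 m^2 MPa^-1 s^-1, the same order as the 0.015 quoted above
```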

  11. Young children's core symbolic and nonsymbolic quantitative knowledge in the prediction of later mathematics achievement.

    PubMed

    Geary, David C; vanMarle, Kristy

    2016-12-01

    At the beginning of preschool (M = 46 months of age), 197 (94 boys) children were administered tasks that assessed a suite of nonsymbolic and symbolic quantitative competencies as well as their executive functions, verbal and nonverbal intelligence, preliteracy skills, and their parents' education level. The children's mathematics achievement was assessed at the end of preschool (M = 64 months). We used a series of Bayesian and standard regression analyses to winnow this broad set of competencies down to the core subset of quantitative skills that predict later mathematics achievement, controlling other factors. This knowledge included children's fluency in reciting the counting string, their understanding of the cardinal value of number words, and recognition of Arabic numerals, as well as their sensitivity to the relative quantity of 2 collections of objects. The results inform theoretical models of the foundations of children's early quantitative development and have practical implications for the design of early interventions for children at risk for poor long-term mathematics achievement. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  13. Proof of concept of an artificial muscle: theoretical model, numerical model, and hardware experiment.

    PubMed

    Haeufle, D F B; Günther, M; Blickhan, R; Schmitt, S

    2011-01-01

    Recently, the hyperbolic Hill-type force-velocity relation was derived from basic physical components. It was shown that a contractile element (CE) consisting of a mechanical energy source (active element, AE), a parallel damper element (PDE), and a serial element (SE) exhibits operating points with a hyperbolic force-velocity dependency. In this paper, the contraction dynamics of this CE concept were analyzed in a numerical simulation of quick-release experiments against different loads. A hyperbolic force-velocity relation was found. The results correspond to measurements of the contraction dynamics of a technical prototype. Deviations from the theoretical prediction could partly be explained by the low stiffness of the SE, which was modeled analogously to the metal spring in the hardware prototype. The numerical model and hardware prototype together are a proof of this CE concept and can be seen as a well-founded starting point for the development of Hill-type artificial muscles. This opens up new vistas for the technical realization of natural movements with rehabilitation devices. © 2011 IEEE
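
    The hyperbolic Hill relation that the CE concept reproduces can be stated compactly. A sketch with illustrative parameter values (the prototype's actual constants are not given in the abstract):

```python
# Hill's hyperbolic force-velocity relation: (F + a) * (v + b) = (F0 + a) * b,
# solved for the shortening velocity v at a given load F.
F0 = 100.0        # N, isometric force (illustrative)
a = 0.25 * F0     # Hill constant a (illustrative, a/F0 ~ 0.25)
b = 0.3           # m/s, Hill constant b (illustrative)

def shortening_velocity(F):
    # rearranged Hill relation: v = b * (F0 - F) / (F + a)
    return b * (F0 - F) / (F + a)

for F in (0.0, 0.25 * F0, 0.5 * F0, F0):
    print(f"F = {F:6.1f} N  ->  v = {shortening_velocity(F):.3f} m/s")
```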

  14. The mathematical and theoretical biology institute--a model of mentorship through research.

    PubMed

    Camacho, Erika T; Kribs-Zaleta, Christopher M; Wirkus, Stephen

    2013-01-01

    This article details the history, logistical operations, and design philosophy of the Mathematical and Theoretical Biology Institute (MTBI), a nationally recognized research program with an 18-year history of mentoring researchers at every level from high school through university faculty, increasing the number of researchers from historically underrepresented minorities, and motivating them to pursue research careers by allowing them to work on problems of interest to them and supporting them in this endeavor. This mosaic profile highlights how MTBI provides a replicable multi-level model for research mentorship.

  15. Quantifying Zika: Advancing the Epidemiology of Zika With Quantitative Models.

    PubMed

    Keegan, Lindsay T; Lessler, Justin; Johansson, Michael A

    2017-12-16

    When Zika virus (ZIKV) emerged in the Americas, little was known about its biology, pathogenesis, and transmission potential, and the scope of the epidemic was largely hidden, owing to generally mild infections and no established surveillance systems. Surges in congenital defects and Guillain-Barré syndrome alerted the world to the danger of ZIKV. In the context of limited data, quantitative models were critical in reducing uncertainties and guiding the global ZIKV response. Here, we review some of the models used to assess the risk of ZIKV-associated severe outcomes, the potential speed and size of ZIKV epidemics, and the geographic distribution of ZIKV risk. These models provide important insights and highlight significant unresolved questions related to ZIKV and other emerging pathogens. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  16. Theoretical model for design and analysis of protectional eyewear.

    PubMed

    Zelzer, B; Speck, A; Langenbucher, A; Eppig, T

    2013-05-01

    Protectional eyewear has to pass both mechanical and optical stress tests. To pass the optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting from the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is first recalculated and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power, and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety spectacle geometries. All surfaces were spherical (maximum global peak-to-valley deviation between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the model Axcont (-0.07 m^-1), all simulated optical performance before optimization was better than the limits defined by standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle) and by 55% to 0.199 λ (2720/3M) without dropping below the measured thickness. The simulated optical performance of spectacle designs can be improved by smart optimization. A good optical design counteracts degradation caused by parameter variation throughout the manufacturing process. Copyright © 2013. Published by Elsevier GmbH.

  17. Isotropic differential phase contrast microscopy for quantitative phase bio-imaging.

    PubMed

    Chen, Hsi-Hsun; Lin, Yu-Zi; Luo, Yuan

    2018-05-16

    Quantitative phase imaging (QPI) retrieves the optical phase information of an object and has been applied to biological microscopy and related medical studies. In recent examples, differential phase contrast (DPC) microscopy can recover the phase image of a thin sample from multi-axis intensity measurements in a wide-field scheme. Unlike conventional DPC, our method builds on a theoretical treatment of partially coherent imaging to achieve isotropic differential phase contrast (iDPC) with high accuracy and stability, recovering phase in a simple and high-speed fashion. The iDPC is implemented simply with a partially coherent microscope and a programmable thin-film transistor (TFT) shield that digitally modulates structured illumination patterns for QPI. In this article, simulation results show the consistency of our theoretical approach for iDPC under partial coherence. In addition, we demonstrate quantitative phase imaging experiments on a standard micro-lens array, as well as on label-free live human cell samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
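
    A minimal numerical sketch of the DPC principle (not the authors' iDPC transfer-function inversion under partial coherence): asymmetric-illumination image pairs yield signals proportional to the phase gradient, and a Tikhonov-regularized Fourier integration recovers the phase. All data below are synthetic:

```python
import numpy as np

# Synthetic smooth phase object standing in for a thin transparent sample
n = 128
y, x = np.mgrid[0:n, 0:n]
phase = np.exp(-((x - n/2)**2 + (y - n/2)**2) / (2 * 15.0**2))

# In the simplest DPC model, the (left-right)/(left+right) image signal is
# proportional to d(phase)/dx; likewise top/bottom for d(phase)/dy.
dpc_x = np.gradient(phase, axis=1)
dpc_y = np.gradient(phase, axis=0)

# Tikhonov-regularized least-squares integration of the gradient field
fx = np.fft.fftfreq(n).reshape(1, n)
fy = np.fft.fftfreq(n).reshape(n, 1)
numer = -1j * 2 * np.pi * (fx * np.fft.fft2(dpc_x) + fy * np.fft.fft2(dpc_y))
denom = (2 * np.pi) ** 2 * (fx**2 + fy**2) + 1e-6   # regularization term
recovered = np.real(np.fft.ifft2(numer / denom))

# The DC term is suppressed by the regularizer, so compare mean-free phases
err = np.abs(recovered - (phase - phase.mean())).max()
print(f"max reconstruction deviation: {err:.3e}")
```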

  18. A theoretical model for analysing gender bias in medicine

    PubMed Central

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-01-01

    During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers. PMID:19646289

  19. Tapered fiber optic applicator for laser ablation: Theoretical and experimental assessment of thermal effects on ex vivo model.

    PubMed

    Saccomandi, P; Di Matteo, F M; Schena, E; Quero, G; Massaroni, C; Giurazza, F; Costamagna, G; Silvestri, S

    2017-07-01

    Laser Ablation (LA) is a minimally invasive technique for tumor removal. The laser light is guided into the target tissue by a fiber optic applicator; thus the physical features of the applicator tip strongly influence the size and shape of the tissue lesion. This study aims to verify the geometry of the lesion achieved by a tapered-tip applicator and to quantify the percentage of thermally damaged cells it induces. A theoretical model was implemented to simulate: i) the distribution of the laser light fluence rate in the tissue, through a Monte Carlo method; ii) the induced temperature distribution, by means of the bioheat equation; and iii) the tissue injury, by the Arrhenius integral. The results obtained from the theoretical model were experimentally assessed. Ex vivo porcine liver underwent LA with the tapered-tip applicator at different laser settings (laser power of 1 W and 1.7 W, deposited energy equal to 330 J and 500 J, respectively). Almost spherical lesion volumes were produced. The thermal damage was assessed by measuring the diameter of the circular-shaped lesion. The comparison between experimental results and theoretical prediction shows that the thermal damage discriminated by visual inspection always corresponds to a damaged-cell percentage of 96%. A tapered-tip applicator produces localized and reproducible damage of nearly spherical shape, whose diameter is related to the laser settings, and the simple theoretical model described is suitable for predicting the effects, in terms of thermal damage, on ex vivo liver. Further trials should adapt the model to in vivo tissue as well, with the aim of developing a tool to support the physician in the clinical application of LA.
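
    The Arrhenius damage integral of step iii) can be sketched directly. The kinetic coefficients below are typical literature values for soft tissue and are assumptions, not the paper's fitted values; note that a 96% damaged fraction corresponds to Omega = -ln(1 - 0.96) ~= 3.2:

```python
import numpy as np

# Arrhenius thermal damage: Omega(t) = integral of A*exp(-Ea/(R*T(tau))) dtau,
# with damaged cell fraction = 1 - exp(-Omega).
R = 8.314      # J/(mol K), gas constant
A = 7.39e39    # 1/s, frequency factor (assumed literature value)
Ea = 2.577e5   # J/mol, activation energy (assumed literature value)

t = np.linspace(0.0, 300.0, 3001)                # s
T = 310.0 + 13.0 * (1.0 - np.exp(-t / 60.0))     # K, toy heating curve

rate = A * np.exp(-Ea / (R * T))
omega = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))  # trapezoid rule
frac = 1.0 - np.exp(-omega)
print(f"Omega = {omega:.2f}, damaged fraction = {frac:.1%}")
```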

  20. Experimental verification of a theoretical model of an active cladding optical fiber fluorosensor

    NASA Technical Reports Server (NTRS)

    Albin, Sacharia; Briant, Alvin L.; Egalon, Claudio O.; Rogowski, Robert S.; Nankung, Juock S.

    1993-01-01

    Experiments were conducted to verify a theoretical model on the injection efficiency of sources in the cladding of an optical fiber. The theoretical results predicted an increase in the injection efficiency for higher differences in refractive indices between the core and cladding. The experimental apparatus used consisted of a glass rod 50 cm long, coated at one end with a thin film of fluorescent substance. The fluorescent substance was excited with side illumination, perpendicular to the rod axis, using a 476 nm Argon-ion laser. Part of the excited fluorescence was injected into the core and guided to a detector. The signal was measured for several different cladding refractive indices. The cladding consisted of sugar dissolved in water and the refractive index was changed by varying the sugar concentration in the solution. The results indicate that the power injected into the rod, due to evanescent wave injection, increases with the difference in refractive index which is in qualitative agreement with theory.

  1. Commentary on factors affecting transverse vibration using an idealized theoretical equation

    Treesearch

    Joseph F. Murphy

    2000-01-01

    An idealized theoretical equation to calculate flexural stiffness using transverse vibration of a simply end-supported beam is being considered by the American Society for Testing and Materials (ASTM) Wood Committee D07 to determine lumber modulus of elasticity. This commentary provides the user a quantitative view of six factors that affect the accuracy of using the...
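
    For orientation, the idealized equation in question is the standard first-mode result for a simply supported prismatic beam (quoted here in its textbook form, not verbatim from the commentary). With first natural frequency $f$, beam weight $W$, span $L$, moment of inertia $I$, and gravitational acceleration $g$:

```latex
f \;=\; \frac{\pi}{2L^{2}}\sqrt{\frac{EI}{\rho A}}
\qquad\Longrightarrow\qquad
E \;=\; \frac{4 f^{2}\rho A L^{4}}{\pi^{2} I}
\;=\; \frac{f^{2} W L^{3}}{2.467\, I\, g},
```

    using $\rho A = W/(gL)$. Each of the factors examined in the commentary perturbs this idealization.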

  2. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of rockfall propagation probabilities either from partial data or purely theoretically. Propagation can be assumed frictional, which permits describing it, on average, by an energy line corresponding to the loss of energy along the path. But the loss of energy can also be treated as a multiplicative process or a purely random process. The distributions of rockfall block stopping points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal, or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that the assumptions are relevant. The results are based either on theoretical considerations or on fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.

  3. Optimal pacing strategy: from theoretical modelling to reality in 1500-m speed skating.

    PubMed

    Hettinga, F J; De Koning, J J; Schmidt, L J I; Wind, N A C; Macintosh, B R; Foster, C

    2011-01-01

    Athletes are trained to choose the pace which is perceived to be correct during a specific effort, such as the 1500-m speed skating competition. The purpose of the present study was to "override" self-paced (SP) performance by instructing athletes to execute a theoretically optimal pacing profile. Seven national-level speed skaters performed an SP 1500-m trial, which was analysed by obtaining velocity (every 100 m) and body position (every 200 m) from video to calculate total mechanical power output. Together with gross efficiency and aerobic kinetics, obtained in separate trials, these data were used to calculate aerobic and anaerobic power output profiles. An energy flow model was applied to SP, simulating a range of pacing strategies, and a theoretically optimal pacing profile was imposed in a second race (IM). The final time for IM was ∼2 s slower than SP. Total power distribution per lap differed, with higher power over the first 300 m for IM (637.0 (49.4) vs 612.5 (50.0) W). Anaerobic parameters did not differ. The faster first lap resulted in a higher aerodynamic drag coefficient and perhaps a less effective push-off. Experienced athletes have a well-developed performance template, and changing pacing strategy towards a theoretically optimal fast-start protocol had negative consequences on speed-skating technique and did not result in better performance.

  4. Validation of theoretical models of intrinsic torque in DIII-D

    NASA Astrophysics Data System (ADS)

    Grierson, B. A.; Wang, W. X.; Battaglia, D. J.; Chrystal, C.; Solomon, W. M.; Degrassie, J. S.; Staebler, G. M.; Boedo, J. A.

    2016-10-01

    Plasma rotation experiments in DIII-D are validating models of main-ion intrinsic rotation by testing Reynolds-stress-induced toroidal flow in the plasma core and intrinsic rotation induced by ion orbit losses in the plasma edge. In the core of dominantly electron-heated plasmas with Te = Ti, the main-ion intrinsic toroidal rotation undergoes a reversal that correlates with the critical gradient for ITG turbulence. Zonal-flow E×B shear and the turbulence intensity gradient produce residual stress and a counter-current intrinsic torque, which is balanced by momentum diffusion, creating the hollow profile. Quantitative agreement is obtained for the first time between the measured main-ion toroidal rotation and the rotation profile predicted by nonlinear GTS gyrokinetic simulations. At the plasma boundary, new main-ion CER measurements show a co-current rotation layer, and this is tested against ion-orbit-loss models as the source of bulk plasma rotation. Work supported by the US Department of Energy under DE-AC02-09CH11466 and DE-FC02-04ER54698.

  5. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models, or to compare models with real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found that the social force model agrees best with this real data.
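
    The comparison pipeline can be sketched at the level of a single observable: collect the distribution of, say, evacuation times from each model and compute pairwise statistical distances. The paper then combines one such matrix per observable via DISTATIS, which is not reproduced here; the samples below are synthetic stand-ins for simulation output:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic evacuation-time samples standing in for three crowd models
samples = {
    "lattice_gas": rng.normal(120.0, 15.0, 500),
    "social_force": rng.normal(135.0, 20.0, 500),
    "rvo2": rng.normal(122.0, 14.0, 500),
}

names = list(samples)
dist = np.zeros((3, 3))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        # Kolmogorov-Smirnov statistic as the distance between two
        # empirical distributions of the observable
        dist[i, j] = ks_2samp(samples[a], samples[b]).statistic

print(names)
print(np.round(dist, 3))
# DISTATIS would fuse one such matrix per observable into a compromise
# matrix before projecting the models onto its principal components.
```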

  6. Prefission Constriction of Golgi Tubular Carriers Driven by Local Lipid Metabolism: A Theoretical Model

    PubMed Central

    Shemesh, Tom; Luini, Alberto; Malhotra, Vivek; Burger, Koert N. J.; Kozlov, Michael M.

    2003-01-01

    Membrane transport within mammalian cells is mediated by small vesicular as well as large pleiomorphic transport carriers (TCs). A major step in the formation of TCs is the creation and subsequent narrowing of a membrane neck connecting the emerging carrier with the initial membrane. In the case of small vesicular TCs, neck formation may be directly induced by the coat proteins that cover the emerging vesicle. However, the mechanism underlying the creation and narrowing of a membrane neck in the generation of large TCs remains unknown. We present a theoretical model for neck formation based on the elastic model of membranes. Our calculations suggest a lipid-driven mechanism with a central role for diacylglycerol (DAG). The model is applied to a well-characterized in vitro system that reconstitutes TC formation from the Golgi complex, namely the pearling and fission of Golgi tubules induced by CtBP/BARS, a protein that catalyzes the conversion of lysophosphatidic acid into phosphatidic acid. In view of the importance of a PA-DAG cycle in the formation of Golgi TCs, we assume that the newly formed phosphatidic acid undergoes rapid dephosphorylation into DAG. DAG possesses a unique molecular shape characterized by an extremely large negative spontaneous curvature, and it redistributes rapidly between the membrane monolayers and along the membrane surface. Coupling between local membrane curvature and local lipid composition results, by mutual enhancement, in constrictions of the tubule into membrane necks, and a related inhomogeneous lateral partitioning of DAG. Our theoretical model predicts the exact dimensions of the constrictions observed in the pearling Golgi tubules. Moreover, the model is able to explain membrane neck formation by physiologically relevant mole fractions of DAG. PMID:14645071

  7. Quantitative gene-gene and gene-environment mapping for leaf shape variation using tree-based models.

    PubMed

    Fu, Guifang; Dai, Xiaotian; Symanzik, Jürgen; Bushman, Shaun

    2017-01-01

    Leaf shape traits have long been a focus of many disciplines, but the complex genetic and environmental interactive mechanisms regulating leaf shape variation have not yet been investigated in detail. The question of the respective roles of genes and environment and how they interact to modulate leaf shape is a thorny evolutionary problem, and sophisticated methodology is needed to address it. In this study, we investigated a framework-level approach that inputs shape image photographs and genetic and environmental data, and then outputs the relative importance ranks of all variables after integrating shape feature extraction, dimension reduction, and tree-based statistical models. The power of the proposed framework was confirmed by simulation and a Populus szechuanica var. tibetica data set. This new methodology resulted in the detection of novel shape characteristics, and also confirmed some previous findings. The quantitative modeling of a combination of polygenetic, plastic, epistatic, and gene-environment interactive effects, as investigated in this study, will improve the discernment of quantitative leaf shape characteristics, and the methods are ready to be applied to other leaf morphology data sets. Unlike the majority of approaches in the quantitative leaf shape literature, this framework-level approach is data-driven, without assuming any pre-known shape attributes, landmarks, or model structures. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
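
    A schematic of the framework's final stage, with synthetic data standing in for extracted shape features: fit a tree ensemble on genetic, environmental, and interaction predictors of a shape score, then rank the variables by importance. This illustrates the approach only and is not the authors' exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 400

# Synthetic predictors: two marker genotypes, one environment variable,
# and a gene-environment interaction term (all hypothetical)
g1 = rng.integers(0, 3, n).astype(float)   # genotype coded 0/1/2
g2 = rng.integers(0, 3, n).astype(float)
env = rng.normal(size=n)                   # e.g., elevation or moisture
shape = 0.8 * g1 + 0.5 * env + 0.9 * g1 * env + rng.normal(0.0, 0.2, n)

X = np.column_stack([g1, g2, env, g1 * env])
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, shape)

# Rank variables by ensemble importance, as in the framework's output stage
for name, imp in zip(["g1", "g2", "env", "g1 x env"],
                     model.feature_importances_):
    print(f"{name:9s} importance = {imp:.3f}")
```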

  8. Theoretical investigation on the magnetic and electric properties in TbSb compound through an anisotropic microscopic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranke, P. J. von, E-mail: von.ranke@uol.com.br; Ribeiro, P. O.; Alho, B. P.

    2016-05-14

    We report strong correlations between the magnetoresistivity and the magnetic entropy change in the cubic antiferromagnetic TbSb compound. The theoretical investigation was performed through a microscopic model which takes into account the crystalline electrical field anisotropy, the exchange coupling interactions between the up and down magnetic sublattices, and the Zeeman interaction. The experimentally observed change of the easy magnetization direction from 〈001〉 to 〈110〉 and then to 〈111〉 was successfully described theoretically. Also, the calculated temperature dependence of the electric resistivity showed good agreement with the experimental data. Theoretical predictions were made for the temperature dependence of the magnetic entropy and resistivity changes upon magnetic field variation. In addition, the difference in resistivity between the spin-up and spin-down sublattices was investigated.

  9. Accurate experimental and theoretical comparisons between superconductor-insulator-superconductor mixers showing weak and strong quantum effects

    NASA Technical Reports Server (NTRS)

    Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.

    1988-01-01

    A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta based, Nb based, and Pb-alloy based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made of the mixing results with the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow for the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, which is in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.

  10. Invasion emerges from cancer cell adaptation to competitive microenvironments: Quantitative predictions from multiscale mathematical models

    PubMed Central

    Rejniak, Katarzyna A.; Gerlee, Philip

    2013-01-01

    In this review we summarize our recent efforts using mathematical modeling and computation to simulate cancer invasion, with a special emphasis on the tumor microenvironment. We consider cancer progression as a complex multiscale process and approach it with three single-cell based mathematical models that examine the interactions between tumor microenvironment and cancer cells at several scales. The models exploit distinct mathematical and computational techniques, yet they share core elements and can be compared and/or related to each other. The overall aim of using mathematical models is to uncover the fundamental mechanisms that lend cancer progression its direction towards invasion and metastasis. The models effectively simulate various modes of cancer cell adaptation to the microenvironment in a growing tumor. All three point to a general mechanism underlying cancer invasion: competition for adaptation between distinct cancer cell phenotypes, driven by a tumor microenvironment with scarce resources. These theoretical predictions pose an intriguing experimental challenge: test the hypothesis that invasion is an emergent property of cancer cell populations adapting to selective microenvironment pressure, rather than culmination of cancer progression producing cells with the “invasive phenotype”. In broader terms, we propose that fundamental insights into cancer can be achieved by experimentation interacting with theoretical frameworks provided by computational and mathematical modeling. PMID:18524624

  11. Quantitative rubber sheet models of gravitation wells using Spandex

    NASA Astrophysics Data System (ADS)

    White, Gary

    2008-04-01

    Long a staple of introductory treatments of general relativity, the rubber sheet model exhibits Wheeler's concise summary---``Matter tells space-time how to curve and space-time tells matter how to move''---very nicely. But what of the quantitative aspects of the rubber sheet model: how far can the analogy be pushed? We show^1 that when a mass M is suspended from the center of an otherwise unstretched elastic sheet affixed to a circular boundary, it exhibits a distortion far from the center given by h = A(Mr^2)^(1/3). Here, as might be expected, h and r are the vertical and radial distances from the center, but this result is not the expected logarithmic form of 2-D solutions to Laplace's equation (the stretched drumhead). This surprise has a natural explanation and is confirmed experimentally with Spandex as the medium, and its consequences for general rubber sheet models are pursued. ^1``The shape of `the Spandex' and orbits upon its surface'', American Journal of Physics, 70, 48-52 (2002), G. D. White and M. Walker. See also the comment by Don S. Lemons and T. C. Lipscombe, also in AJP, 70, 1056-1058 (2002).
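
    The contrast at the heart of this result can be summarized in two limits, assuming small slopes for the pre-tensioned case (a sketch of the standard reasoning, not the paper's derivation): a membrane under pre-tension $T$ obeys Laplace's equation away from the load, whereas an initially unstretched sheet must generate its tension through the deformation itself, which changes the power law:

```latex
T\,\nabla^{2}h = 0 \;\;\Rightarrow\;\; h \propto \ln r
\quad \text{(stretched drumhead)},
\qquad
h = A\,(M r^{2})^{1/3}
\quad \text{(unstretched sheet, far field)}.
```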

  12. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
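
    A minimal sketch of the modeling step with scikit-learn, on synthetic data shaped like the study's (132 samples, 24 predictors); the robust PLS variant the paper uses for outlier handling is not part of this sketch:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_params = 132, 24

# Synthetic coal parameters and one trace-element response (e.g., Pb)
X = rng.normal(size=(n_samples, n_params))
y = X[:, :5] @ rng.normal(size=5) + 0.3 * rng.normal(size=n_samples)

# PLS regression with 10-fold cross-validated predictions
pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f} (relative: {rmsecv / np.ptp(y):.1%})")
```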

  13. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy remains unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except for one the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross–validation) exceeded 10% only for three models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment. PMID:27438794

  14. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I enclose a set of abstracts of selected papers on the work described below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.

  15. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19

  16. A theoretical reassessment of microbial maintenance and implications for microbial ecology modeling.

    PubMed

    Wang, Gangsheng; Post, Wilfred M

    2012-09-01

    We attempted to reconcile three microbial maintenance models (Herbert, Pirt, and Compromise) through a theoretical reassessment. We provided a rigorous proof that the true growth yield coefficient (Y_G) is the ratio of the specific maintenance rate (a in Herbert) to the maintenance coefficient (m in Pirt). Other findings from this study include: (1) the Compromise model is identical to the Herbert for computing microbial growth and substrate consumption, but it expresses the dependence of maintenance on both microbial biomass and substrate; (2) the maximum specific growth rate in the Herbert (μ_max,H) is higher than those in the other two models (μ_max,P and μ_max,C), and the difference is the physiological maintenance factor (m_q = a); and (3) the overall maintenance coefficient (m_T) is more sensitive to m_q than to the specific growth rate (μ_G) and Y_G. Our critical reassessment of microbial maintenance provides a new approach for quantifying some important components in soil microbial ecology models. © This article is a US government work and is in the public domain in the USA.
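
    The central relations can be stated compactly (notation as above; a restatement of the abstract's findings, not new results):

```latex
Y_G \;=\; \frac{a}{m}, \qquad
\mu_{\max,H} - \mu_{\max,P} \;=\; \mu_{\max,H} - \mu_{\max,C} \;=\; m_q \;=\; a .
```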

  17. Quantitative collision induced mass spectrometry of substituted piperazines - A correlative analysis between theory and experiment

    NASA Astrophysics Data System (ADS)

    Ivanova, Bojidarka; Spiteller, Michael

    2017-12-01

    The present paper deals with the quantitative kinetics and thermodynamics of collision-induced dissociation (CID) reactions of piperazines under different experimental conditions, together with a systematic description of the effect of counter-ions on common MS fragmentation reactions of piperazines and of the intra-molecular effect of quaternary cyclization of substituted piperazines yielding quaternary salts. Quantitative model equations are discussed for the rate constants and free Gibbs energies of a series of m-independent CID fragmentation processes in the gas phase, which have been evidenced experimentally. Both kinetic and thermodynamic parameters are also predicted by computational density functional theory (DFT) and ab initio methods, both static and dynamic. The paper also examines quantitatively the validity of the Maxwell-Boltzmann distribution for non-Boltzmann CID processes. The experiments conducted within the latter framework show excellent correspondence with theoretical quantum chemical modeling. An important property of the presented model equations of reaction kinetics is their applicability to predicting unknown and assigning known mass spectrometric (MS) patterns. The nature of the "gas-phase" continuum in the CID-MS coupled scheme of measurements with an electrospray ionization (ESI) source is discussed by performing parallel computations in the gas phase (GP) and in a polar continuum at different temperatures and ionic strengths. The effect of pressure is presented. The study contributes significantly to the methodological and phenomenological development of CID-MS and its analytical implementation for quantitative and structural analyses. It also demonstrates the great promise of a complementary application of experimental CID-MS and computational quantum chemistry for studying chemical reactivity, among other topics. To a considerable extent, this work establishes the place of computational quantum chemistry in the field of experimental analytical chemistry, in particular highlighting structural analysis.

  18. Standard model with a complex scalar singlet: Cosmological implications and theoretical considerations

    NASA Astrophysics Data System (ADS)

    Chiang, Cheng-Wei; Ramsey-Musolf, Michael J.; Senaha, Eibun

    2018-01-01

    We analyze the theoretical and phenomenological considerations for the electroweak phase transition and dark matter in an extension of the standard model with a complex scalar singlet (cxSM). In contrast with earlier studies, we use a renormalization group improved scalar potential and treat its thermal history in a gauge-invariant manner. We find that the parameter space consistent with a strong first-order electroweak phase transition (SFOEWPT) and present dark matter phenomenological constraints is significantly restricted compared to results of a conventional, gauge-noninvariant analysis. In the simplest variant of the cxSM, recent LUX data and a SFOEWPT require a dark matter mass close to half the mass of the standard model-like Higgs boson. We also comment on various caveats regarding the perturbative treatment of the phase transition dynamics.

  19. Characterizing the In-Phase Reflection Bandwidth Theoretical Limit of Artificial Magnetic Conductors With a Transmission Line Model

    NASA Technical Reports Server (NTRS)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2013-01-01

    We validate through simulation and experiment that artificial magnetic conductors (AMCs) can be well characterized by a transmission line model. The theoretical bandwidth limit of the in-phase reflection can be expressed in terms of the effective RLC parameters of the surface patch and the properties of the substrate. It is found that the existence of effective inductive components reduces the in-phase reflection bandwidth of the AMC. Furthermore, we propose design strategies to optimize AMC structures with an in-phase reflection bandwidth closer to the theoretical limit.
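
    The transmission-line picture admits a compact numerical illustration: model the AMC as an effective parallel LC surface impedance over a ground plane, compute the reflection phase against free space, and read off the in-phase band where the phase stays within ±90°. The component values below are illustrative, not taken from the paper:

```python
import numpy as np

# Reflection phase of an AMC modeled by an effective surface impedance Zs
# of a parallel LC resonator (lossless sketch; resistive losses omitted).
eta0 = 376.73    # ohm, free-space wave impedance
L = 2.0e-9       # H, effective sheet inductance (illustrative)
C = 0.5e-12      # F, effective sheet capacitance (illustrative)

f = np.linspace(1e9, 10e9, 2000)
w = 2 * np.pi * f
Zs = 1j * w * L / (1.0 - w**2 * L * C)   # parallel LC surface impedance
gamma = (Zs - eta0) / (Zs + eta0)        # reflection coefficient at the surface
phase = np.degrees(np.angle(gamma))

band = f[np.abs(phase) < 90.0]           # conventional in-phase band
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))
print(f"resonance: {f0/1e9:.2f} GHz, "
      f"in-phase band: {band.min()/1e9:.2f}-{band.max()/1e9:.2f} GHz")
```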

  20. Toward Validation of the Genius Discipline-Specific Literacy Model

    ERIC Educational Resources Information Center

    Ellis, Edwin S.; Wills, Stephen; Deshler, Donald D.

    2011-01-01

    An analysis of the rationale and theoretical foundations of the Genius Discipline-specific Literacy Model and its use of SMARTvisuals to cue information-processing skills and strategies and focus attention on essential informational elements in high-frequency topics in history and the English language arts is presented. Quantitative data…