Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS
NASA Astrophysics Data System (ADS)
Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.
2017-04-01
To enhance the stability of beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the 3rd generation synchrotron light source of Taiwan Photon Source (TPS) of National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB greatly depends on the output performance of Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. A rigorous mathematical modelling is very useful to shorten design time and improve design performance of a FCPS. A rigorous mathematical modelling derived by the state-space averaging method for a FCPS in the FOFB of TPS composed of a full-bridge topology is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed mathematical modelling and to conduct the simulations of the FCPS. Simulations for the effects of the different resolutions of ADC on the output accuracy of the FCPS are investigated. A FCPS prototype is realized to demonstrate the effectiveness of the proposed rigorous mathematical modelling for the FCPS. Simulation and experimental results show that the proposed mathematical modelling is helpful for selecting the appropriate components to meet the accuracy requirements of a FCPS.
Matter Gravitates, but Does Gravity Matter?
ERIC Educational Resources Information Center
Groetsch, C. W.
2011-01-01
The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…
Mathematical Rigor vs. Conceptual Change: Some Early Results
NASA Astrophysics Data System (ADS)
Alexander, W. R.
2003-05-01
Results from two different pedagogical approaches to teaching introductory astronomy at the college level will be presented. The first of these approaches is a descriptive, conceptually based approach that emphasizes conceptual change. This descriptive class is typically an elective for non-science majors. The other approach is a mathematically rigorous treatment that emphasizes problem solving and is designed to prepare students for further study in astronomy. The mathematically rigorous class is typically taken by science majors. It also fulfills an elective science requirement for these science majors. The Astronomy Diagnostic Test version 2 (ADT 2.0) was used as an assessment instrument since the validity and reliability have been investigated by previous researchers. The ADT 2.0 was administered as both a pre-test and post-test to both groups. Initial results show no significant difference between the two groups in the post-test. However, there is a slightly greater improvement for the descriptive class between the pre and post testing compared to the mathematically rigorous course. There was great care to account for variables. These variables included: selection of text, class format as well as instructor differences. Results indicate that the mathematically rigorous model, doesn't improve conceptual understanding any better than the conceptual change model. Additional results indicate that there is a similar gender bias in favor of males that has been measured by previous investigators. This research has been funded by the College of Science and Mathematics at James Madison University.
A Mathematical Evaluation of the Core Conductor Model
Clark, John; Plonsey, Robert
1966-01-01
This paper is a mathematical evaluation of the core conductor model where its three dimensionality is taken into account. The problem considered is that of a single, active, unmyelinated nerve fiber situated in an extensive, homogeneous, conducting medium. Expressions for the various core conductor parameters have been derived in a mathematically rigorous manner according to the principles of electromagnetic theory. The purpose of employing mathematical rigor in this study is to bring to light the inherent assumptions of the one dimensional core conductor model, providing a method of evaluating the accuracy of this linear model. Based on the use of synthetic squid axon data, the conclusion of this study is that the linear core conductor model is a good approximation for internal but not external parameters. PMID:5903155
Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors
ERIC Educational Resources Information Center
Brogt, Erik; Draeger, John D.
2015-01-01
We discuss a model of academic rigor and apply this to a general education introductory astronomy course. We argue that even without central tenets of professional astronomy-the use of mathematics--the course can still be considered academically rigorous when expectations, goals, assessments, and curriculum are properly aligned.
NASA Technical Reports Server (NTRS)
Tanveer, S.; Foster, M. R.
2002-01-01
We report progress in three areas of investigation related to dendritic crystal growth. Those items include: 1. Selection of tip features dendritic crystal growth; 2) Investigation of nonlinear evolution for two-sided model; and 3) Rigorous mathematical justification.
ERIC Educational Resources Information Center
Petrilli, Salvatore John, Jr.
2009-01-01
Historians of mathematics considered the nineteenth century to be the Golden Age of mathematics. During this time period many areas of mathematics, such as algebra and geometry, were being placed on rigorous foundations. Another area of mathematics which experienced fundamental change was analysis. The drive for rigor in calculus began in 1797…
NASA Astrophysics Data System (ADS)
Hamid, H.
2018-01-01
The purpose of this study is to analyze an improvement of students’ mathematical critical thinking (CT) ability in Real Analysis course by using Rigorous Teaching and Learning (RTL) model with informal argument. In addition, this research also attempted to understand students’ CT on their initial mathematical ability (IMA). This study was conducted at a private university in academic year 2015/2016. The study employed the quasi-experimental method with pretest-posttest control group design. The participants of the study were 83 students in which 43 students were in the experimental group and 40 students were in the control group. The finding of the study showed that students in experimental group outperformed students in control group on mathematical CT ability based on their IMA (high, medium, low) in learning Real Analysis. In addition, based on medium IMA the improvement of mathematical CT ability of students who were exposed to RTL model with informal argument was greater than that of students who were exposed to CI (conventional instruction). There was also no effect of interaction between RTL model and CI model with both (high, medium, and low) IMA increased mathematical CT ability. Finally, based on (high, medium, and low) IMA there was a significant improvement in the achievement of all indicators of mathematical CT ability of students who were exposed to RTL model with informal argument than that of students who were exposed to CI.
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-03-01
To reach higher order thinking skill, needed to be mastered the conceptual understanding and strategic competence as they are two basic parts of high order thinking skill (HOTS). RMT is a unique realization of the cognitive conceptual construction approach based on Feurstein with his theory of Mediated Learning Experience (MLE) and Vygotsky’s sociocultural theory. This was quasi-experimental research which compared the experimental class that was given Rigorous Mathematical Thinking (RMT) as learning method and the control class that was given Direct Learning (DL) as the conventional learning activity. This study examined whether there was different effect of two learning model toward conceptual understanding and strategic competence of Junior High School Students. The data was analyzed by using Multivariate Analysis of Variance (MANOVA) and obtained a significant difference between experimental and control class when considered jointly on the mathematics conceptual understanding and strategic competence (shown by Wilk’s Λ = 0.84). Further, by independent t-test is known that there was significant difference between two classes both on mathematical conceptual understanding and strategic competence. By this result is known that Rigorous Mathematical Thinking (RMT) had positive impact toward Mathematics conceptual understanding and strategic competence.
NASA Astrophysics Data System (ADS)
Parumasur, N.; Willie, R.
2008-09-01
We consider a simple HIV/AIDs finite dimensional mathematical model on interactions of the blood cells, the HIV/AIDs virus and the immune system for consistence of the equations to the real biomedical situation that they model. A better understanding to a cure solution to the illness modeled by the finite dimensional equations is given. This is accomplished through rigorous mathematical analysis and is reinforced by numerical analysis of models developed for real life cases.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
ERIC Educational Resources Information Center
Utah State Office of Education, 2011
2011-01-01
Utah has adopted more rigorous mathematics standards known as the Utah Mathematics Core Standards. They are the foundation of the mathematics curriculum for the State of Utah. The standards include the skills and understanding students need to succeed in college and careers. They include rigorous content and application of knowledge and reflect…
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
The Menu for Every Young Mathematician's Appetite
ERIC Educational Resources Information Center
Legnard, Danielle S.; Austin, Susan L.
2012-01-01
Math Workshop offers differentiated instruction to foster a deep understanding of rich, rigorous mathematics that is attainable by all learners. The inquiry-based model provides a menu of multilevel math tasks, within the daily math block, that focus on similar mathematical content. Math Workshop promotes a culture of engagement and…
Reducible or irreducible? Mathematical reasoning and the ontological method.
Fisher, William P
2010-01-01
Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging quantitative and quantitative value. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science are explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.
ERIC Educational Resources Information Center
Easey, Michael
2013-01-01
This paper explores the decline in boys' participation in post-compulsory rigorous mathematics using the perspectives of eight experienced teachers at an independent, boys' College located in Brisbane, Queensland. This study coincides with concerns regarding the decline in suitably qualified tertiary graduates with requisite mathematical skills…
Discrete structures in continuum descriptions of defective crystals
2016-01-01
I discuss various mathematical constructions that combine together to provide a natural setting for discrete and continuum geometric models of defective crystals. In particular, I provide a quite general list of ‘plastic strain variables’, which quantifies inelastic behaviour, and exhibit rigorous connections between discrete and continuous mathematical structures associated with crystalline materials that have a correspondingly general constitutive specification. PMID:27002070
Student’s rigorous mathematical thinking based on cognitive style
NASA Astrophysics Data System (ADS)
Fitriyani, H.; Khasanah, U.
2017-12-01
The purpose of this research was to determine the rigorous mathematical thinking (RMT) of mathematics education students in solving math problems in terms of reflective and impulsive cognitive styles. The research used descriptive qualitative approach. Subjects in this research were 4 students of the reflective and impulsive cognitive style which was each consisting male and female subjects. Data collection techniques used problem-solving test and interview. Analysis of research data used Miles and Huberman model that was reduction of data, presentation of data, and conclusion. The results showed that impulsive male subjects used three levels of the cognitive function required for RMT that were qualitative thinking, quantitative thinking with precision, and relational thinking completely while the other three subjects were only able to use cognitive function at qualitative thinking level of RMT. Therefore the subject of impulsive male has a better RMT ability than the other three research subjects.
ERIC Educational Resources Information Center
Sworder, Steven C.
2007-01-01
An experimental two-track intermediate algebra course was offered at Saddleback College, Mission Viejo, CA, between the Fall, 2002 and Fall, 2005 semesters. One track was modeled after the existing traditional California community college intermediate algebra course and the other track was a less rigorous intermediate algebra course in which the…
Discrete structures in continuum descriptions of defective crystals.
Parry, G P
2016-04-28
I discuss various mathematical constructions that combine together to provide a natural setting for discrete and continuum geometric models of defective crystals. In particular, I provide a quite general list of 'plastic strain variables', which quantifies inelastic behaviour, and exhibit rigorous connections between discrete and continuous mathematical structures associated with crystalline materials that have a correspondingly general constitutive specification. © 2016 The Author(s).
Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.
Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P
2018-02-01
This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.
The KP Approximation Under a Weak Coriolis Forcing
NASA Astrophysics Data System (ADS)
Melinand, Benjamin
2018-02-01
In this paper, we study the asymptotic behavior of weakly transverse water-waves under a weak Coriolis forcing in the long wave regime. We derive the Boussinesq-Coriolis equations in this setting and we provide a rigorous justification of this model. Then, from these equations, we derive two other asymptotic models. When the Coriolis forcing is weak, we fully justify the rotation-modified Kadomtsev-Petviashvili equation (also called Grimshaw-Melville equation). When the Coriolis forcing is very weak, we rigorously justify the Kadomtsev-Petviashvili equation. This work provides the first mathematical justification of the KP approximation under a Coriolis forcing.
Mathematics interventions for children and adolescents with Down syndrome: a research synthesis.
Lemons, C J; Powell, S R; King, S A; Davidson, K A
2015-08-01
Many children and adolescents with Down syndrome fail to achieve proficiency in mathematics. Researchers have suggested that tailoring interventions based on the behavioural phenotype may enhance efficacy. The research questions that guided this review were (1) what types of mathematics interventions have been empirically evaluated with children and adolescents with Down syndrome?; (2) do the studies demonstrate sufficient methodological rigor?; (3) is there evidence of efficacy for the evaluated mathematics interventions?; and (4) to what extent have researchers considered aspects of the behavioural phenotype in selecting, designing and/or implementing mathematics interventions for children and adolescents with Down syndrome? Nine studies published between 1989 and 2012 were identified for inclusion. Interventions predominantly focused on early mathematics skills and reported positive outcomes. However, no study met criteria for methodological rigor. Further, no authors explicitly considered the behavioural phenotype. Additional research using rigorous experimental designs is needed to evaluate the efficacy of mathematics interventions for children and adolescents with Down syndrome. Suggestions for considering the behavioural phenotype in future research are provided. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management
2016-11-16
order for cloud computing infrastructures to be successfully deployed in real world scenarios as tools for crisis and catastrophe management, where...Statement of the Problem Studied As cloud computing becomes the dominant computational infrastructure[1] and cloud technologies make a transition to hosting...1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and
Manpower Substitution and Productivity in Medical Practice
Reinhardt, Uwe E.
1973-01-01
Probably in response to the often alleged physician shortage in this country, concerted research efforts are under way to identify technically feasible opportunities for manpower substitution in the production of ambulatory health care. The approaches range from descriptive studies of the effect of task delegation on output of medical services to rigorous mathematical modeling of health care production by means of linear or continuous production functions. In this article the distinct methodological approaches underlying mathematical models are presented in synopsis, and their inherent strengths and weaknesses are contrasted. The discussion includes suggestions for future research directions. Images Fig. 2 PMID:4586735
A Constructive Response to "Where Mathematics Comes From."
ERIC Educational Resources Information Center
Schiralli, Martin; Sinclair, Nathalie
2003-01-01
Reviews the Lakoff and Nunez's book, "Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (2000)," which provided many mathematics education researchers with a novel and startling perspective on mathematical thinking. Suggests that several of the book's flaws can be addressed through a more rigorous establishment of…
2003-09-29
NanoTechnology and Metallurgy Belgrade 11000 Yugoslavia 8. PERFORMING ORGANIZATION REPORT NUMBER N/A 10. SPONSOR/MONITOR’S ACRONYM(S)9...outlet annular tube I - ZONE I II - ZONE II 39 References: 1. Tayo Kaken Company, A means of reactivating worked charcoal , Japanese
A Transformative Model for Undergraduate Quantitative Biology Education
ERIC Educational Resources Information Center
Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…
Comparison of two gas chromatograph models and analysis of binary data
NASA Technical Reports Server (NTRS)
Keba, P. S.; Woodrow, P. T.
1972-01-01
The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary system data. The predictions of two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model exhibit the same weaknesses in their inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single component behaviors is a first order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.
The role of a posteriori mathematics in physics
NASA Astrophysics Data System (ADS)
MacKinnon, Edward
2018-05-01
The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists traditionally ignore this rigorous mathematics. Physicists often rely on a posteriori math, a practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of such practice stems from a consideration of the role of phenomenological theories in classical physics and effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Yurkin, Maxim A.
2017-01-01
Although the model of randomly oriented nonspherical particles has been used in a great variety of applications of far-field electromagnetic scattering, it has never been defined in strict mathematical terms. In this Letter we use the formalism of Euler rigid-body rotations to clarify the concept of statistically random particle orientations and derive its immediate corollaries in the form of most general mathematical properties of the orientation-averaged extinction and scattering matrices. Our results serve to provide a rigorous mathematical foundation for numerous publications in which the notion of randomly oriented particles and its light-scattering implications have been considered intuitively obvious.
A Center of Excellence in the Mathematical Sciences - at Cornell University
1992-03-01
of my recent efforts go in two directions. 1. Cellular Automata. The Greenberg Hastings model is a simple system that models the behavior of an... Greenberg -Hastings Model. We also obtained results concerning the crucial value for a threshold voter model. This resulted in the papers "Some Rigorous...Results for the Greenberg - Hastings Model" and "Fixation Results for Threshold Voter Systems." Together with Scot Adams, I wrote "An Application of the
Rigorous Science: a How-To Guide.
Casadevall, Arturo; Fang, Ferric C
2016-11-08
Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word "rigor" is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. Copyright © 2016 Casadevall and Fang.
David crighton, 1942-2000: a commentary on his career and his influence on aeroacoustic theory
NASA Astrophysics Data System (ADS)
Ffowcs Williams, John E.
David Crighton, a greatly admired figure in fluid mechanics, Head of the Department of Applied Mathematics and Theoretical Physics at Cambridge, and Master of Jesus College, Cambridge, died at the peak of his career. He had made important contributions to the theory of waves generated by unsteady flow. Crighton's work was always characterized by the application of rigorous mathematical approximations to fluid mechanical idealizations of practically relevant problems. At the time of his death, he was certainly the most influential British applied mathematical figure, and his former collaborators and students form a strong school that continues his special style of mathematical application. Rigorous analysis of well-posed aeroacoustical problems was transformed by David Crighton.
Topics in Computational Learning Theory and Graph Algorithms.
ERIC Educational Resources Information Center
Board, Raymond Acton
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
NASA Technical Reports Server (NTRS)
Thomas-Keprta, Kathie L.; Clemett, Simon J.; Bazylinski, Dennis A.; Kirschvink, Joseph L.; McKay, David S.; Wentworth, Susan J.; Vali, H.; Gibson, Everett K.
2000-01-01
Here we use rigorous mathematical modeling to compare ALH84001 prismatic magnetites with those produced by terrestrial magnetotactic bacteria, MV-1. We find that this subset of the Martian magnetites appears to be statistically indistinguishable from those of MV-1.
2006-12-01
DISTRIBUTION STATEMENT. ________//signature//________________ ________//signature//________________ PATRICK D. SULLIVAN, Ph.D., P.E. SANDRA R ...adsorber, at r =1.24 cm: (a) gas phase; (b) solid phase..................................................................................... 30 46 The...34 57 Axial profiles of the gas velocity during adsorption in the 2-cartridge adsorber at r =1.25cm..... 34 60
What We Do: A Multiple Case Study from Mathematics Coaches' Perspectives
ERIC Educational Resources Information Center
Kane, Barbara Ann
2013-01-01
Teachers face new challenges when they teach a more rigorous mathematics curriculum than one to which they are accustomed. The rationale for this particular study originated from watching teachers struggle with understanding mathematical content and pedagogical practices. Mathematics coaches can address teachers' concerns through sustained,…
NASA Astrophysics Data System (ADS)
Hidayat, D.; Nurlaelah, E.; Dahlan, J. A.
2017-09-01
The ability of mathematical creative and critical thinking are two abilities that need to be developed in the learning of mathematics. Therefore, efforts need to be made in the design of learning that is capable of developing both capabilities. The purpose of this research is to examine the mathematical creative and critical thinking ability of students who get rigorous mathematical thinking (RMT) approach and students who get expository approach. This research was quasi experiment with control group pretest-posttest design. The population were all of students grade 11th in one of the senior high school in Bandung. The result showed that: the achievement of mathematical creative and critical thinking abilities of student who obtain RMT is better than students who obtain expository approach. The use of Psychological tools and mediation with criteria of intentionality, reciprocity, and mediated of meaning on RMT helps students in developing condition in critical and creative processes. This achievement contributes to the development of integrated learning design on students’ critical and creative thinking processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of whichmore » is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous studyof nite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.« less
NASA Astrophysics Data System (ADS)
Šprlák, M.; Han, S.-C.; Featherstone, W. E.
2017-12-01
Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourth, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. Also, we analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate applicability of the rigorous forward modelling using currently available computational resources up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
ERIC Educational Resources Information Center
Jackson, Christa; Jong, Cindy
2017-01-01
Teaching mathematics for equity is critical because it provides opportunities for all students, especially those who have been traditionally marginalised, to learn mathematics that is rigorous and relevant to their lives. This article reports on our work, as mathematics teacher educators, on exposing and engaging 60 elementary preservice teachers…
Mathematical methods for protein science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.; Istrail, S.; Atkins, J.
1997-12-31
Understanding the structure and function of proteins is a fundamental endeavor in molecular biology. Currently, over 100,000 protein sequences have been determined by experimental methods. The three dimensional structure of the protein determines its function, but there are currently less than 4,000 structures known to atomic resolution. Accordingly, techniques to predict protein structure from sequence have an important role in aiding the understanding of the Genome and the effects of mutations in genetic disease. The authors describe current efforts at Sandia to better understand the structure of proteins through rigorous mathematical analyses of simple lattice models. The efforts have focusedmore » on two aspects of protein science: mathematical structure prediction, and inverse protein folding.« less
Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.
Gomez, Christophe; Hartung, Niklas
2018-01-01
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
ERIC Educational Resources Information Center
Stone, James R., III; Alfeld, Corinne; Pearson, Donna
2008-01-01
Numerous high school students, including many who are enrolled in career and technical education (CTE) courses, do not have the math skills necessary for today's high-skill workplace or college entrance requirements. This study tests a model for enhancing mathematics instruction in five high school CTE programs (agriculture, auto technology,…
Integrated model development for liquid fueled rocket propulsion systems
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
As detailed in the original statement of work, the objective of phase two of this research effort was to develop a general framework for rocket engine performance prediction that integrates physical principles, a rigorous mathematical formalism, component level test data, system level test data, and theory-observation reconciliation. Specific phase two development tasks are defined.
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
NASA Astrophysics Data System (ADS)
Bovier, Anton
2006-06-01
Our mathematical understanding of the statistical mechanics of disordered systems is going through a period of stunning progress. This self-contained book is a graduate-level introduction for mathematicians and for physicists interested in the mathematical foundations of the field, and can be used as a textbook for a two-semester course on mathematical statistical mechanics. It assumes only basic knowledge of classical physics and, on the mathematics side, a good working knowledge of graduate-level probability theory. The book starts with a concise introduction to statistical mechanics, proceeds to disordered lattice spin systems, and concludes with a presentation of the latest developments in the mathematical understanding of mean-field spin glass models. In particular, recent progress towards a rigorous understanding of the replica symmetry-breaking solutions of the Sherrington-Kirkpatrick spin glass models, due to Guerra, Aizenman-Sims-Starr and Talagrand, is reviewed in some detail. Comprehensive introduction to an active and fascinating area of research Clear exposition that builds to the state of the art in the mathematics of spin glasses Written by a well-known and active researcher in the field
The impact of rigorous mathematical thinking as learning method toward geometry understanding
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-05-01
To reach higher order thinking skill, needed to be mastered the conceptual understanding. RMT is a unique realization of the cognitive conceptual construction approach based on Mediated Learning Experience (MLE) theory by Feurstein and Vygotsky’s sociocultural theory. This was quasi experimental research which was comparing the experimental class that was given Rigorous Mathematical Thinking (RMT) as learning method and control class that was given Direct Learning (DL) as the conventional learning activity. This study examined whether there was different effect of two learning method toward conceptual understanding of Junior High School students. The data was analyzed by using Independent t-test and obtained a significant difference of mean value between experimental and control class on geometry conceptual understanding. Further, by semi-structure interview known that students taught by RMT had deeper conceptual understanding than students who were taught by conventional way. By these result known that Rigorous Mathematical Thinking (RMT) as learning method have positive impact toward Geometry conceptual understanding.
Secondary School Advanced Mathematics, Chapter 3, Formal Geometry. Student's Text.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This text is the second of five in the Secondary School Advanced Mathematics (SSAM) series which was designed to meet the needs of students who have completed the Secondary School Mathematics (SSM) program, and wish to continue their study of mathematics. This volume is devoted to a rigorous development of theorems in plane geometry from 22…
ERIC Educational Resources Information Center
Chard, David J.; Baker, Scott K.; Clarke, Ben; Jungjohann, Kathleen; Davis, Karen; Smolkowski, Keith
2008-01-01
Concern about poor mathematics achievement in U.S. schools has increased in recent years. In part, poor achievement may be attributed to a lack of attention to early instruction and missed opportunities to build on young children's early understanding of mathematics. This study examined the development and feasibility testing of a kindergarten…
ERIC Educational Resources Information Center
Gersten, Russell
2016-01-01
In this commentary, the author reflects on four studies that have greatly expanded the knowledge base on effective interventions in mathematics, and he provides four rigorous experimental studies of approaches for students likely to experience difficulties learning mathematics over a large grade-level span (pre-K to 4th grade). All of the…
ERIC Educational Resources Information Center
Seeley, Cathy
2004-01-01
This article addresses some important issues in mathematics instruction at the middle and secondary levels, including the structuring of a district's mathematics program; the choice of textbooks and use of calculators in the classroom; the need for more rigorous lesson planning practices; and the dangers of teaching to standardized tests rather…
Advanced Mathematical Thinking
ERIC Educational Resources Information Center
Dubinsky, Ed; McDonald, Michael A.; Edwards, Barbara S.
2005-01-01
In this article we propose the following definition for advanced mathematical thinking: Thinking that requires deductive and rigorous reasoning about mathematical notions that are not entirely accessible to us through our five senses. We argue that this definition is not necessarily tied to a particular kind of educational experience; nor is it…
Rigorous Science: a How-To Guide
Fang, Ferric C.
2016-01-01
ABSTRACT Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word “rigor” is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. PMID:27834205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J.D.
1994-08-04
This report is divided into two parts. The second part is divided into the following sections: experimental protocol; modeling the hollow fiber extractor using film theory; Graetz model of the hollow fiber membrane process; fundamental diffusive-kinetic model; and diffusive liquid membrane device-a rigorous model. The first part is divided into: membrane and membrane process-a concept; metal extraction; kinetics of metal extraction; modeling the membrane contactor; and interfacial phenomenon-boundary conditions-applied to membrane transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L
Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmen- tal epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we buildmore » a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.« less
Results of the Salish Projects: Summary and Implications for Science Teacher Education
ERIC Educational Resources Information Center
Yager, Robert E.; Simmons, Patricia
2013-01-01
Science teaching and teacher education in the U.S.A. have been of great national interest recently due to a severe shortage of science (and mathematics) teachers who do not hold strong qualifications in their fields of study. Unfortunately we lack a rigorous research base that helps inform solid practices about various models or elements of…
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies as well as students in advanced undergraduate statistics.. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds though nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
Teaching Mathematics to Civil Engineers
ERIC Educational Resources Information Center
Sharp, J. J.; Moore, E.
1977-01-01
This paper outlines a technique for teaching a rigorous course in calculus and differential equations which stresses applicability of the mathematics to problems in civil engineering. The method involves integration of subject matter and team teaching. (SD)
Nonlinear analysis of a model of vascular tumour growth and treatment
NASA Astrophysics Data System (ADS)
Tao, Youshan; Yoshida, Norio; Guo, Qian
2004-05-01
We consider a mathematical model describing the evolution of a vascular tumour in response to traditional chemotherapy. The model is a free boundary problem for a system of partial differential equations governing intratumoural drug concentration, cancer cell density and blood vessel density. Tumour cells consist of two types of competitive cells that have different proliferation rates and different sensitivities to drugs. The balance between cell proliferation and death generates a velocity field that drives tumour cell movement. The tumour surface is a moving boundary. The purpose of this paper is to establish a rigorous mathematical analysis of the model for studying the dynamics of intratumoural blood vessels and to explore drug dosage for the successful treatment of a tumour. We also study numerically the competitive effects of the two cell types on tumour growth.
ERIC Educational Resources Information Center
Cobbs, Joyce Bernice
2014-01-01
The literature on minority student achievement indicates that Black students are underrepresented in advanced mathematics courses. Advanced mathematics courses offer students the opportunity to engage with challenging curricula, experience rigorous instruction, and interact with quality teachers. The middle school years are particularly…
Community College Pathways: A Descriptive Report of Summative Assessments and Student Learning
ERIC Educational Resources Information Center
Strother, Scott; Sowers, Nicole
2014-01-01
Carnegie's Community College Pathways (CCP) offers two pathways, Statway® and Quantway®, that reduce the amount of time required to complete developmental mathematics and earn college-level mathematics credit. The Pathways aim to improve student success in mathematics while maintaining rigorous content, pedagogy, and learning outcomes. It is…
Teacher Efficacy of High School Mathematics Co-Teachers
ERIC Educational Resources Information Center
Rimpola, Raquel C.
2011-01-01
High school mathematics inclusion classes help provide all students the access to rigorous curriculum. This study provides information about the teacher efficacy of high school mathematics co-teachers. It considers the influence of the amount of collaborative planning time on the efficacy of co-teachers. A quantitative research design was used,…
Mathematical Rigor in the Common Core
ERIC Educational Resources Information Center
Hull, Ted H.; Balka, Don S.; Miles, Ruth Harbin
2013-01-01
A whirlwind of activity surrounds the topic of teaching and learning mathematics. The driving forces are a combination of changes in assessment and advances in technology that are being spurred on by the introduction of content in the Common Core State Standards for Mathematical Practice. Although the issues are certainly complex, the same forces…
Scaling Limit for a Generalization of the Nelson Model and its Application to Nuclear Physics
NASA Astrophysics Data System (ADS)
Suzuki, Akito
We study a mathematically rigorous derivation of a quantum mechanical Hamiltonian in a general framework. We derive such a Hamiltonian by taking a scaling limit for a generalization of the Nelson model, which is an abstract interaction model between particles and a Bose field with some internal degrees of freedom. Applying it to a model for the field of the nuclear force with isospins, we obtain a Schrödinger Hamiltonian with a matrix-valued potential, the one pion exchange potential, describing an effective interaction between nucleons.
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive Modeling of multiscale and Multiphysics systems requires accurate data driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low dimensional input models, and surrogate low complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
Complex dynamics of an SEIR epidemic model with saturated incidence rate and treatment
NASA Astrophysics Data System (ADS)
Khan, Muhammad Altaf; Khan, Yasir; Islam, Saeed
2018-03-01
In this paper, we describe the dynamics of an SEIR epidemic model with saturated incidence, treatment function, and optimal control. Rigorous mathematical results have been established for the model. The stability analysis of the model is investigated and found that the model is locally asymptotically stable when R0 < 1. The model is locally as well as globally asymptotically stable at endemic equilibrium when R0 > 1. The proposed model may possess a backward bifurcation. The optimal control problem is designed and obtained their necessary results. Numerical results have been presented for justification of theoretical results.
STEM Pathways: Examining Persistence in Rigorous Math and Science Course Taking
NASA Astrophysics Data System (ADS)
Ashford, Shetay N.; Lanehart, Rheta E.; Kersaint, Gladis K.; Lee, Reginald S.; Kromrey, Jeffrey D.
2016-12-01
From 2006 to 2012, Florida Statute §1003.4156 required middle school students to complete electronic personal education planners (ePEPs) before promotion to ninth grade. The ePEP helped them identify programs of study and required high school coursework to accomplish their postsecondary education and career goals. During the same period Florida required completion of the ePEP, Florida's Career and Professional Education Act stimulated a rapid increase in the number of statewide high school career academies. Students with interests in STEM careers created STEM-focused ePEPs and may have enrolled in STEM career academies, which offered a unique opportunity to improve their preparedness for the STEM workforce through the integration of rigorous academic and career and technical education courses. This study examined persistence of STEM-interested (i.e., those with expressed interest in STEM careers) and STEM-capable (i.e., those who completed at least Algebra 1 in eighth grade) students ( n = 11,248), including those enrolled in STEM career academies, in rigorous mathematics and science course taking in Florida public high schools in comparison with the national cohort of STEM-interested students to measure the influence of K-12 STEM education efforts in Florida. With the exception of multi-race students, we found that Florida's STEM-capable students had lower persistence in rigorous mathematics and science course taking than students in the national cohort from ninth to eleventh grade. We also found that participation in STEM career academies did not support persistence in rigorous mathematics and science courses, a prerequisite for success in postsecondary STEM education and careers.
A Rigorous Treatment of Energy Extraction from a Rotating Black Hole
NASA Astrophysics Data System (ADS)
Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.
2009-05-01
The Cauchy problem is considered for the scalar wave equation in the Kerr geometry. We prove that by choosing a suitable wave packet as initial data, one can extract energy from the black hole, thereby putting supperradiance, the wave analogue of the Penrose process, into a rigorous mathematical framework. We quantify the maximal energy gain. We also compute the infinitesimal change of mass and angular momentum of the black hole, in agreement with Christodoulou’s result for the Penrose process. The main mathematical tool is our previously derived integral representation of the wave propagator.
ERIC Educational Resources Information Center
Jitendra, Asha K.; Petersen-Brown, Shawna; Lein, Amy E.; Zaslofsky, Anne F.; Kunkel, Amy K.; Jung, Pyung-Gang; Egan, Andrea M.
2015-01-01
This study examined the quality of the research base related to strategy instruction priming the underlying mathematical problem structure for students with learning disabilities and those at risk for mathematics difficulties. We evaluated the quality of methodological rigor of 18 group research studies using the criteria proposed by Gersten et…
ERIC Educational Resources Information Center
Jehopio, Peter J.; Wesonga, Ronald
2017-01-01
Background: The main objective of the study was to examine the relevance of engineering mathematics to the emerging industries. The level of abstraction, the standard of rigor, and the depth of theoretical treatment are necessary skills expected of a graduate engineering technician to be derived from mathematical knowledge. The question of whether…
Linking Literacy and Mathematics: The Support for Common Core Standards for Mathematical Practice
ERIC Educational Resources Information Center
Swanson, Mary; Parrott, Martha
2013-01-01
In a new era of Common Core State Standards (CCSS), teachers are expected to provide more rigorous, coherent, and focused curriculum at every grade level. To respond to the call for higher expectations across the curriculum and certainly within reading, writing, and mathematics, educators should work closely together to create mathematically…
An Informal History of Formal Proofs: From Vigor to Rigor?
ERIC Educational Resources Information Center
Galda, Klaus
1981-01-01
The history of formal mathematical proofs is sketched out, starting with the Greeks. Included in this document is a chronological guide to mathematics and the world, highlighting major events in the world and important mathematicians in corresponding times. (MP)
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
Are computational models of any use to psychiatry?
Huys, Quentin J M; Moutoussis, Michael; Williams, Jonathan
2011-08-01
Mathematically rigorous descriptions of key hypotheses and theories are becoming more common in neuroscience and are beginning to be applied to psychiatry. In this article two fictional characters, Dr. Strong and Mr. Micawber, debate the use of such computational models (CMs) in psychiatry. We present four fundamental challenges to the use of CMs in psychiatry: (a) the applicability of mathematical approaches to core concepts in psychiatry such as subjective experiences, conflict and suffering; (b) whether psychiatry is mature enough to allow informative modelling; (c) whether theoretical techniques are powerful enough to approach psychiatric problems; and (d) the issue of communicating clinical concepts to theoreticians and vice versa. We argue that CMs have yet to influence psychiatric practice, but that they help psychiatric research in two fundamental ways: (a) to build better theories integrating psychiatry with neuroscience; and (b) to enforce explicit, global and efficient testing of hypotheses through more powerful analytical methods. CMs allow the complexity of a hypothesis to be rigorously weighed against the complexity of the data. The paper concludes with a discussion of the path ahead. It points to stumbling blocks, like the poor communication between theoretical and medical communities. But it also identifies areas in which the contributions of CMs will likely be pivotal, like an understanding of social influences in psychiatry, and of the co-morbidity structure of psychiatric diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.
A transformative model for undergraduate quantitative biology education.
Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.
A Transformative Model for Undergraduate Quantitative Biology Education
Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949
Dividing by Zero: Exploring Null Results in a Mathematics Professional Development Program
ERIC Educational Resources Information Center
Hill, Heather C.; Corey, Douglas Lyman; Jacob, Robin T.
2018-01-01
Background/Context: Since 2002, U.S. federal funding for educational research has favored the development and rigorous testing of interventions designed to improve student outcomes. However, recent reviews suggest that a large fraction of the programs developed and rigorously tested in the past decade have shown null results on student outcomes…
Dóka, Éva; Lente, Gábor
2017-04-13
This work presents a rigorous mathematical study of the effect of unavoidable inhomogeneities in laser flash photolysis experiments. There are two different kinds of inhomegenities: the first arises from diffusion, whereas the second one has geometric origins (the shapes of the excitation and detection light beams). Both of these are taken into account in our reported model, which gives rise to a set of reaction-diffusion type partial differential equations. These equations are solved by a specially developed finite volume method. As an example, the aqueous reaction between the sulfate ion radical and iodide ion is used, for which sufficiently detailed experimental data are available from an earlier publication. The results showed that diffusion itself is in general too slow to influence the kinetic curves on the usual time scales of laser flash photolysis experiments. However, the use of the absorbances measured (e.g., to calculate the molar absorption coefficients of transient species) requires very detailed mathematical consideration and full knowledge of the geometrical shapes of the excitation laser beam and the separate detection light beam. It is also noted that the usual pseudo-first-order approach to evaluating the kinetic traces can be used successfully even if the usual large excess condition is not rigorously met in the reaction cell locally.
Underprepared Students' Performance on Algebra in a Double-Period High School Mathematics Program
ERIC Educational Resources Information Center
Martinez, Mara V.; Bragelman, John; Stoelinga, Timothy
2016-01-01
The primary goal of the Intensified Algebra I (IA) program is to enable mathematically underprepared students to successfully complete Algebra I in 9th grade and stay on track to meet increasingly rigorous high school mathematics graduation requirements. The program was designed to bring a range of both cognitive and non-cognitive supports to bear…
Investigation of possible observable e ects in a proposed theory of physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freidan, Daniel
2015-03-31
The work supported by this grant produced rigorous mathematical results on what is possible in quantum field theory. Quantum field theory is the well-established mathematical language for fundamental particle physics, for critical phenomena in condensed matter physics, and for Physical Mathematics (the numerous branches of Mathematics that have benefitted from ideas, constructions, and conjectures imported from Theoretical Physics). Proving rigorous constraints on what is possible in quantum field theories thus guides the field, puts actual constraints on what is physically possible in physical or mathematical systems described by quantum field theories, and saves the community the effort of trying tomore » do what is proved impossible. Results were obtained in two dimensional qft (describing, e.g., quantum circuits) and in higher dimensional qft. Rigorous bounds were derived on basic quantities in 2d conformal field theories, i.e., in 2d critical phenomena. Conformal field theories are the basic objects in quantum field theory, the scale invariant theories describing renormalization group fixed points from which all qfts flow. The first known lower bounds on the 2d boundary entropy were found. This is the entropy- information content- in junctions in critical quantum circuits. For dimensions d > 2, a no-go theorem was proved on the possibilities of Cauchy fields, which are the analogs of the holomorphic fields in d = 2 dimensions, which have had enormously useful applications in Physics and Mathematics over the last four decades. This closed o the possibility of finding analogously rich theories in dimensions above 2. The work of two postdoctoral research fellows was partially supported by this grant. Both have gone on to tenure track positions.« less
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomassmore » transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.« less
NASA Astrophysics Data System (ADS)
Blanchard, Philippe; Hellmich, Mario; Ługiewicz, Piotr; Olkiewicz, Robert
Quantum mechanics is the greatest revision of our conception of the character of the physical world since Newton. Consequently, David Hilbert was very interested in quantum mechanics. He and John von Neumann discussed it frequently during von Neumann's residence in Göttingen. He published in 1932 his book Mathematical Foundations of Quantum Mechanics. In Hilbert's opinion it was the first exposition of quantum mechanics in a mathematically rigorous way. The pioneers of quantum mechanics, Heisenberg and Dirac, neither had use for rigorous mathematics nor much interest in it. Conceptually, quantum theory as developed by Bohr and Heisenberg is based on the positivism of Mach as it describes only observable quantities. It first emerged as a result of experimental data in the form of statistical observations of quantum noise, the basic concept of quantum probability.
Bipotential continuum models for granular mechanics
NASA Astrophysics Data System (ADS)
Goddard, Joe
2014-03-01
Most currently popular continuum models for granular media are special cases of a generalized Maxwell fluid model, which describes the evolution of stress and internal variables such as granular particle fraction and fabric,in terms of imposed strain rate. It is shown how such models can be obtained from two scalar potentials, a standard elastic free energy and a ``dissipation potential'' given rigorously by the mathematical theory of Edelen. This allows for a relatively easy derivation of properly invariant continuum models for granular media and fluid-particle suspensions within a thermodynamically consistent framework. The resulting continuum models encompass all the prominent regimes of granular flow, ranging from the quasi-static to rapidly sheared, and are readily extended to include higher-gradient or Cosserat effects. Models involving stress diffusion, such as that proposed recently by Kamrin and Koval (PRL 108 178301), provide an alternative approach that is mentioned in passing. This paper provides a brief overview of a forthcoming review articles by the speaker (The Princeton Companion to Applied Mathematics, and Appl. Mech. Rev.,in the press, 2013).
Mathematical models and photogrammetric exploitation of image sensing
NASA Astrophysics Data System (ADS)
Puatanachokchai, Chokchai
Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and on using such models for its implementation. The main thrust of this research is in replacement sensor models which have three important characteristics: (1) Highly accurate ground-to-image functions; (2) Rigorous error propagation that is essentially of the same accuracy as the physical model; and, (3) Adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered as True Replacement Models or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several writings about replacement sensor models, and except for the so called RSM (Replacement Sensor Model as a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is because, it is suspected, the few physical sensor parameters are usually replaced by many more parameters, thus presenting a potential error estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding. It provides an equivalent flexibility to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.
Shear-induced opening of the coronal magnetic field
NASA Technical Reports Server (NTRS)
Wolfson, Richard
1995-01-01
This work describes the evolution of a model solar corona in response to motions of the footpoints of its magnetic field. The mathematics involved is semianalytic, with the only numerical solution being that of an ordinary differential equation. This approach, while lacking the flexibility and physical details of full MHD simulations, allows for very rapid computation along with complete and rigorous exploration of the model's implications. We find that the model coronal field bulges upward, at first slowly and then more dramatically, in response to footpoint displacements. The energy in the field rises monotonically from that of the initial potential state, and the field configuration and energy appraoch asymptotically that of a fully open field. Concurrently, electric currents develop and concentrate into a current sheet as the limiting case of the open field is approached. Examination of the equations shows rigorously that in the asymptotic limit of the fully open field, the current layer becomes a true ideal MHD singularity.
Math Interventions for Students with Autism Spectrum Disorder: A Best-Evidence Synthesis
ERIC Educational Resources Information Center
King, Seth A.; Lemons, Christopher J.; Davidson, Kimberly A.
2016-01-01
Educators need evidence-based practices to assist students with disabilities in meeting increasingly rigorous standards in mathematics. Students with autism spectrum disorder (ASD) are increasingly expected to demonstrate learning of basic and advanced mathematical concepts. This review identifies math intervention studies involving children and…
Control Engineering, System Theory and Mathematics: The Teacher's Challenge
ERIC Educational Resources Information Center
Zenger, K.
2007-01-01
The principles, difficulties and challenges in control education are discussed and compared to the similar problems in the teaching of mathematics and systems science in general. The difficulties of today's students to appreciate the classical teaching of engineering disciplines, which are based on rigorous and scientifically sound grounds, are…
A Qualitative Approach to Enzyme Inhibition
ERIC Educational Resources Information Center
Waldrop, Grover L.
2009-01-01
Most general biochemistry textbooks present enzyme inhibition by showing how the basic Michaelis-Menten parameters K[subscript m] and V[subscript max] are affected mathematically by a particular type of inhibitor. This approach, while mathematically rigorous, does not lend itself to understanding how inhibition patterns are used to determine the…
ERIC Educational Resources Information Center
Dempsey, Michael
2009-01-01
If students are in an advanced mathematics class, then at some point they enjoyed mathematics and looked forward to learning and practicing it. There is no reason that this passion and enjoyment should ever be lost because the subject becomes more difficult or rigorous. This author, who teaches advanced precalculus to high school juniors,…
Handbook of applied mathematics for engineers and scientists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, M.
1991-12-31
This book is intended to be reference for applications of mathematics in a wide range of topics of interest to engineers and scientists. An unusual feature of this book is that it covers a large number of topics from elementary algebra, trigonometry, and calculus to computer graphics and cybernetics. The level of mathematics covers high school through about the junior level of an engineering curriculum in a major univeristy. Throughout, the emphasis is on applications of mathematics rather than on rigorous proofs.
Ontology-Driven Information Integration
NASA Technical Reports Server (NTRS)
Tissot, Florence; Menzel, Chris
2005-01-01
Ontology-driven information integration (ODII) is a method of computerized, automated sharing of information among specialists who have expertise in different domains and who are members of subdivisions of a large, complex enterprise (e.g., an engineering project, a government agency, or a business). In ODII, one uses rigorous mathematical techniques to develop computational models of engineering and/or business information and processes. These models are then used to develop software tools that support the reliable processing and exchange of information among the subdivisions of this enterprise or between this enterprise and other enterprises.
A review of the meteorological parameters which affect aerial application
NASA Technical Reports Server (NTRS)
Christensen, L. S.; Frost, W.
1979-01-01
The ambient wind field and temperature gradient were found to be the most important parameters. Investigation results indicated that the majority of meteorological parameters affecting dispersion were interdependent and the exact mechanism by which these factors influence the particle dispersion was largely unknown. The types and approximately ranges of instrumented capabilities for a systematic study of the significant meteorological parameters influencing aerial applications were defined. Current mathematical dispersion models were also briefly reviewed. Unfortunately, a rigorous dispersion model which could be applied to aerial application was not available.
Validation of Fatigue Modeling Predictions in Aviation Operations
NASA Technical Reports Server (NTRS)
Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin
2017-01-01
Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet, most models have not been rigorously evaluated and independently validated for the operations to which they are being applied and many users are not fully aware of the limitations in which model results should be interpreted and applied.
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian; Haller, George
2018-06-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction in the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
A finite element-boundary integral method for cavities in a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. However, due to a lack of rigorous mathematical models for conformal antenna arrays, antenna designers resort to measurement and planar antenna concepts for designing non-planar conformal antennas. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We extend this formulation to conformal arrays on large metallic cylinders. In this report, we develop the mathematical formulation. In particular, we discuss the shape functions, the resulting finite elements and the boundary integral equations, and the solution of the conformal finite element-boundary integral system. Some validation results are presented and we further show how this formulation can be applied with minimal computational and memory resources.
Separating intrinsic from extrinsic fluctuations in dynamic biological systems
Paulsson, Johan
2011-01-01
From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems. PMID:21730172
Separating intrinsic from extrinsic fluctuations in dynamic biological systems.
Hilfinger, Andreas; Paulsson, Johan
2011-07-19
From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
Methodological Developments in Geophysical Assimilation Modeling
NASA Astrophysics Data System (ADS)
Christakos, George
2005-06-01
This work presents recent methodological developments in geophysical assimilation research. We revisit the meaning of the term "solution" of a mathematical model representing a geophysical system, and we examine its operational formulations. We argue that an assimilation solution based on epistemic cognition (which assumes that the model describes incomplete knowledge about nature and focuses on conceptual mechanisms of scientific thinking) could lead to more realistic representations of the geophysical situation than a conventional ontologic assimilation solution (which assumes that the model describes nature as is and focuses on form manipulations). Conceptually, the two approaches are fundamentally different. Unlike the reasoning structure of conventional assimilation modeling that is based mainly on ad hoc technical schemes, the epistemic cognition approach is based on teleologic criteria and stochastic adaptation principles. In this way some key ideas are introduced that could open new areas of geophysical assimilation to detailed understanding in an integrated manner. A knowledge synthesis framework can provide the rational means for assimilating a variety of knowledge bases (general and site specific) that are relevant to the geophysical system of interest. Epistemic cognition-based assimilation techniques can produce a realistic representation of the geophysical system, provide a rigorous assessment of the uncertainty sources, and generate informative predictions across space-time. The mathematics of epistemic assimilation involves a powerful and versatile spatiotemporal random field theory that imposes no restriction on the shape of the probability distributions or the form of the predictors (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated) and accounts rigorously for the uncertainty features of the geophysical system. In the epistemic cognition context the assimilation concept may be used to investigate critical issues related to knowledge reliability, such as uncertainty due to model structure error (conceptual uncertainty).
Jones index, secret sharing and total quantum dimension
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter; Osborne, Tobias J.
2017-02-01
We study the total quantum dimension in the thermodynamic limit of topologically ordered systems. In particular, using the anyons (or superselection sectors) of such models, we define a secret sharing scheme, storing information invisible to a malicious party, and argue that the total quantum dimension quantifies how well we can perform this task. We then argue that this can be made mathematically rigorous using the index theory of subfactors, originally due to Jones and later extended by Kosaki and Longo. This theory provides us with a ‘relative entropy’ of two von Neumann algebras and a quantum channel, and we argue how these can be used to quantify how much classical information two parties can hide form an adversary. We also review the total quantum dimension in finite systems, in particular how it relates to topological entanglement entropy. It is known that the latter also has an interpretation in terms of secret sharing schemes, although this is shown by completely different methods from ours. Our work provides a different and independent take on this, which at the same time is completely mathematically rigorous. This complementary point of view might be beneficial, for example, when studying the stability of the total quantum dimension when the system is perturbed.
NASA Astrophysics Data System (ADS)
Danon, Leon; Brooks-Pollock, Ellen
2016-09-01
In their review, Chowell et al. consider the ability of mathematical models to predict early epidemic growth [1]. In particular, they question the central prediction of classical differential equation models that the number of cases grows exponentially during the early stages of an epidemic. Using examples including HIV and Ebola, they argue that classical models fail to capture key qualitative features of early growth and describe a selection of models that do capture non-exponential epidemic growth. An implication of this failure is that predictions may be inaccurate and unusable, highlighting the need for care when embarking upon modelling using classical methodology. There remains a lack of understanding of the mechanisms driving many observed epidemic patterns; we argue that data science should form a fundamental component of epidemic modelling, providing a rigorous methodology for data-driven approaches, rather than trying to enforce established frameworks. The need for refinement of classical models provides a strong argument for the use of data science, to identify qualitative characteristics and pinpoint the mechanisms responsible for the observed epidemic patterns.
¡Enséname! Teaching Each Other to Reason through Math in the Second Grade
ERIC Educational Resources Information Center
Schmitz, Lindsey
2016-01-01
This action research sought to evaluate the effect of peer teaching structures across subgroups of students differentiated by language and mathematical skill ability. These structures were implemented in an effort to maintain mathematical rigor while building my students' academic language capacity. More specifically, the study investigated peer…
ERIC Educational Resources Information Center
Camacho, Erika T.; Holmes, Raquell M.; Wirkus, Stephen A.
2015-01-01
This chapter describes how sustained mentoring together with rigorous collaborative learning and community building contributed to successful mathematical research and individual growth in the Applied Mathematical Sciences Summer Institute (AMSSI), a program that focused on women, underrepresented minorities, and individuals from small teaching…
Water Bottle Designs and Measures
ERIC Educational Resources Information Center
Carmody, Heather Gramberg
2010-01-01
The increase in the diversity of students and the complexity of their needs can be a rich addition to a mathematics classroom. The challenge for teachers is to find a way to include students' interests and creativity in a way that allows for rigorous mathematics. One method of incorporating the diversity is the development of "open-ended…
Time-ordered exponential on the complex plane and Gell-Mann—Low formula as a mathematical theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futakuchi, Shinichiro; Usui, Kouta
2016-04-15
The time-ordered exponential representation of a complex time evolution operator in the interaction picture is studied. Using the complex time evolution, we prove the Gell-Mann—Low formula under certain abstract conditions, in mathematically rigorous manner. We apply the abstract results to quantum electrodynamics with cutoffs.
Science and Mathematics Advanced Placement Exams: Growth and Achievement over Time
ERIC Educational Resources Information Center
Judson, Eugene
2017-01-01
Rapid growth of Advanced Placement (AP) exams in the last 2 decades has been paralleled by national enthusiasm to promote availability and rigor of science, technology, engineering, and mathematics (STEM). Trends were examined in STEM AP to evaluate and compare growth and achievement. Analysis included individual STEM subjects and disaggregation…
A simple model for indentation creep
NASA Astrophysics Data System (ADS)
Ginder, Ryan S.; Nix, William D.; Pharr, George M.
2018-03-01
A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.
Survey of Intermediate Microeconomic Textbooks.
ERIC Educational Resources Information Center
Goulet, Janet C.
1986-01-01
Surveys nine undergraduate microeconomic theory textbooks comprising a representing sample those available. Criteria used were quantity and quality of examples, mathematical rigor, and level of abstraction. (JDH)
A finite element-boundary integral method for conformal antenna arrays on a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.; Woo, Alex C.; Yu, C. Long
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This is due to the lack of rigorous mathematical models for conformal antenna arrays, and as a result the design of conformal arrays is primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. Herewith we shall extend this formulation for conformal arrays on large metallic cylinders. In this we develop the mathematical formulation. In particular we discuss the finite element equations, the shape elements, and the boundary integral evaluation, and it is shown how this formulation can be applied with minimal computation and memory requirements. The implementation shall be discussed in a later report.
A finite element-boundary integral method for conformal antenna arrays on a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This was due to the lack of rigorous mathematical models for conformal antenna arrays. As a result, the design of conformal arrays was primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We are extending this formulation to conformal arrays on large metallic cylinders. In doing so, we will develop a mathematical formulation. In particular, we discuss the finite element equations, the shape elements, and the boundary integral evaluation. It is shown how this formulation can be applied with minimal computation and memory requirements.
A Tool for Rethinking Teachers' Questioning
ERIC Educational Resources Information Center
Simpson, Amber; Mokalled, Stefani; Ellenburg, Lou Ann; Che, S. Megan
2014-01-01
In this article, the authors present a tool, the Cognitive Rigor Matrix (CRM; Hess et al. 2009), as a means to analyze and reflect on the type of questions posed by mathematics teachers. This tool is intended to promote and develop higher-order thinking and inquiry through the use of purposeful questions and mathematical tasks. The authors…
Oakland and San Francisco Create Course Pathways through Common Core Mathematics. White Paper
ERIC Educational Resources Information Center
Daro, Phil
2014-01-01
The Common Core State Standards for Mathematics (CCSS-M) set rigorous standards for each of grades 6, 7 and 8. Strategic Education Research Partnership (SERP) has been working with two school districts, Oakland Unified School District and San Francisco Unified School District, to evaluate extant policies and practices and formulate new policies…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimus, Paul William
This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examplesmore » of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.« less
NASA Astrophysics Data System (ADS)
Popa, Alexandru
1998-08-01
Recently we have demonstrated in a mathematical paper the following property: The energy which results from the Schrödinger equation can be rigorously calculated by line integrals of analytical functions, if the Hamilton-Jacobi equation, written for the same system, is satisfied in the space of coordinates by a periodical trajectory. We present now an accurate analysis model of the conservative discrete systems, that is based on this property. The theory is checked for a lot of atomic systems. The experimental data, which are ionization energies, are taken from well known books.
Gibiansky, Leonid; Gibiansky, Ekaterina
2018-02-01
The emerging discipline of mathematical pharmacology occupies the space between advanced pharmacometrics and systems biology. A characteristic feature of the approach is application of advance mathematical methods to study the behavior of biological systems as described by mathematical (most often differential) equations. One of the early application of mathematical pharmacology (that was not called this name at the time) was formulation and investigation of the target-mediated drug disposition (TMDD) model and its approximations. The model was shown to be remarkably successful, not only in describing the observed data for drug-target interactions, but also in advancing the qualitative and quantitative understanding of those interactions and their role in pharmacokinetic and pharmacodynamic properties of biologics. The TMDD model in its original formulation describes the interaction of the drug that has one binding site with the target that also has only one binding site. Following the framework developed earlier for drugs with one-to-one binding, this work aims to describe a rigorous approach for working with similar systems and to apply it to drugs that bind to targets with two binding sites. The quasi-steady-state, quasi-equilibrium, irreversible binding, and Michaelis-Menten approximations of the model are also derived. These equations can be used, in particular, to predict concentrations of the partially bound target (RC). This could be clinically important if RC remains active and has slow internalization rate. In this case, introduction of the drug aimed to suppress target activity may lead to the opposite effect due to RC accumulation.
Bardhan, Jaydeep P; Knepley, Matthew G
2011-09-28
We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1997-01-01
Melt convection, along with species diffusion and segregation on the solidification interface are the primary factors responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore influence of microgravity level, double-diffusion and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of binary alloy is based on the general equations of heat and mass transfer in two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numericial problem. A Rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal, solutal convection and a double diffusion source of mass transport.
On Modeling and Analysis of MIMO Wireless Mesh Networks with Triangular Overlay Topology
Cao, Zhanmao; Wu, Chase Q.; Zhang, Yuanping; ...
2015-01-01
Multiple input multiple output (MIMO) wireless mesh networks (WMNs) aim to provide the last-mile broadband wireless access to the Internet. Along with the algorithmic development for WMNs, some fundamental mathematical problems also emerge in various aspects such as routing, scheduling, and channel assignment, all of which require an effective mathematical model and rigorous analysis of network properties. In this paper, we propose to employ Cartesian product of graphs (CPG) as a multichannel modeling approach and explore a set of unique properties of triangular WMNs. In each layer of CPG with a single channel, we design a node coordinate scheme thatmore » retains the symmetric property of triangular meshes and develop a function for the assignment of node identity numbers based on their coordinates. We also derive a necessary-sufficient condition for interference-free links and combinatorial formulas to determine the number of the shortest paths for channel realization in triangular WMNs.« less
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxesmore » for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.« less
Chizhik, Stanislav; Sidelnikov, Anatoly; Zakharov, Boris; Naumov, Panče; Boldyreva, Elena
2018-02-28
Photomechanically reconfigurable elastic single crystals are the key elements for contactless, timely controllable and spatially resolved transduction of light into work from the nanoscale to the macroscale. The deformation in such single-crystal actuators is observed and usually attributed to anisotropy in their structure induced by the external stimulus. Yet, the actual intrinsic and external factors that affect the mechanical response remain poorly understood, and the lack of rigorous models stands as the main impediment towards benchmarking of these materials against each other and with much better developed soft actuators based on polymers, liquid crystals and elastomers. Here, experimental approaches for precise measurement of macroscopic strain in a single crystal bent by means of a solid-state transformation induced by light are developed and used to extract the related temperature-dependent kinetic parameters. The experimental results are compared against an overarching mathematical model based on the combined consideration of light transport, chemical transformation and elastic deformation that does not require fitting of any empirical information. It is demonstrated that for a thermally reversible photoreactive bending crystal, the kinetic constants of the forward (photochemical) reaction and the reverse (thermal) reaction, as well as their temperature dependence, can be extracted with high accuracy. The improved kinematic model of crystal bending takes into account the feedback effect, which is often neglected but becomes increasingly important at the late stages of the photochemical reaction in a single crystal. The results provide the most rigorous and exact mathematical description of photoinduced bending of a single crystal to date.
Single toxin dose-response models revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demidenko, Eugene, E-mail: eugened@dartmouth.edu
The goal of this paper is to offer a rigorous analysis of the sigmoid shape single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger then the second inflection point imply low mortality rate, and (3) concentrations between the first and the second inflection points imply high mortality rate. Probabilistic interpretation and mathematical analysis for each of the fourmore » models, Hill, logit, probit, and Weibull is provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) model with a nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO{sub 4} toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with highest mortality rate is rigorously defined. • A table with four special points for five morality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of sigmoid dose-response relationship.« less
NASA Astrophysics Data System (ADS)
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
NASA Astrophysics Data System (ADS)
Pereyra, Nicolas A.
2018-06-01
This book gives a rigorous yet 'physics-focused' introduction to mathematical logic that is geared towards natural science majors. We present the science major with a robust introduction to logic, focusing on the specific knowledge and skills that will unavoidably be needed in calculus topics and natural science topics in general (rather than taking a philosophical-math-fundamental oriented approach that is commonly found in mathematical logic textbooks).
13th Annual Systems Engineering Conference: Tues- Wed
2010-10-28
greater understanding/documentation of lessons learned – Promotes SE within the organization • Justification for continued funding of SE Infrastructure...educational process – Addresses the development of innovative learning tools, strategies, and teacher training • Research and Development – Promotes ...technology, and mathematics • More commitment to engaging young students in science, engineering, technology and mathematics • More rigor in defining
Weiland, Christina
2016-11-01
Theory and empirical work suggest inclusion preschool improves the school readiness of young children with special needs, but only 2 studies of the model have used rigorous designs that could identify causality. The present study examined the impacts of the Boston Public prekindergarten program-which combined proven language, literacy, and mathematics curricula with coaching-on the language, literacy, mathematics, executive function, and emotional skills of young children with special needs (N = 242). Children with special needs benefitted from the program in all examined domains. Effects were on par with or surpassed those of their typically developing peers. Results are discussed in the context of their relevance for policy, practice, and theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Lenas, Petros; Moos, Malcolm; Luyten, Frank P
2009-12-01
The field of tissue engineering is moving toward a new concept of "in vitro biomimetics of in vivo tissue development." In Part I of this series, we proposed a theoretical framework integrating the concepts of developmental biology with those of process design to provide the rules for the design of biomimetic processes. We named this methodology "developmental engineering" to emphasize that it is not the tissue but the process of in vitro tissue development that has to be engineered. To formulate the process design rules in a rigorous way that will allow a computational design, we should refer to mathematical methods to model the biological process taking place in vitro. Tissue functions cannot be attributed to individual molecules but rather to complex interactions between the numerous components of a cell and interactions between cells in a tissue that form a network. For tissue engineering to advance to the level of a technologically driven discipline amenable to well-established principles of process engineering, a scientifically rigorous formulation is needed of the general design rules so that the behavior of networks of genes, proteins, or cells that govern the unfolding of developmental processes could be related to the design parameters. Now that sufficient experimental data exist to construct plausible mathematical models of many biological control circuits, explicit hypotheses can be evaluated using computational approaches to facilitate process design. Recent progress in systems biology has shown that the empirical concepts of developmental biology that we used in Part I to extract the rules of biomimetic process design can be expressed in rigorous mathematical terms. This allows the accurate characterization of manufacturing processes in tissue engineering as well as the properties of the artificial tissues themselves. In addition, network science has recently shown that the behavior of biological networks strongly depends on their topology and has developed the necessary concepts and methods to describe it, allowing therefore a deeper understanding of the behavior of networks during biomimetic processes. These advances thus open the door to a transition for tissue engineering from a substantially empirical endeavor to a technology-based discipline comparable to other branches of engineering.
NASA Astrophysics Data System (ADS)
LeBeau, Brandon; Harwell, Michael; Monson, Debra; Dupuis, Danielle; Medhanie, Amanuel; Post, Thomas R.
2012-04-01
Background: The importance of increasing the number of US college students completing degrees in science, technology, engineering or mathematics (STEM) has prompted calls for research to provide a better understanding of factors related to student participation in these majors, including the impact of a student's high-school mathematics curriculum. Purpose: This study examines the relationship between various student and high-school characteristics and completion of a STEM major in college. Of specific interest is the influence of a student's high-school mathematics curriculum on the completion of a STEM major in college. Sample: The sample consisted of approximately 3500 students from 229 high schools. Students were predominantly Caucasian (80%), with slightly more males than females (52% vs 48%). Design and method: A quasi-experimental design with archival data was used for students who enrolled in, and graduated from, a post-secondary institution in the upper Midwest. To be included in the sample, students needed to have completed at least three years of high-school mathematics. A generalized linear mixed model was used with students nested within high schools. The data were cross-sectional. Results: High-school predictors were not found to have a significant impact on the completion of a STEM major. Significant student-level predictors included ACT mathematics score, gender and high-school mathematics GPA. Conclusions: The results provide evidence that on average students are equally prepared for the rigorous mathematics coursework regardless of the high-school mathematics curriculum they completed.
On Mathematical Modeling Of Quantum Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achuthan, P.; Dept. of Mathematics, Indian Institute of Technology, Madras, 600 036; Narayanankutty, Karuppath
2009-07-02
The world of physical systems at the most fundamental levels is replete with efficient, interesting models possessing sufficient ability to represent the reality to a considerable extent. So far, quantum mechanics (QM) forming the basis of almost all natural phenomena, has found beyond doubt its intrinsic ingenuity, capacity and robustness to stand the rigorous tests of validity from and through appropriate calculations and experiments. No serious failures of quantum mechanical predictions have been reported, yet. However, Albert Einstein, the greatest theoretical physicist of the twentieth century and some other eminent men of science have stated firmly and categorically that QM,more » though successful by and large, is incomplete. There are classical and quantum reality models including those based on consciousness. Relativistic quantum theoretical approaches to clearly understand the ultimate nature of matter as well as radiation have still much to accomplish in order to qualify for a final theory of everything (TOE). Mathematical models of better, suitable character as also strength are needed to achieve satisfactory explanation of natural processes and phenomena. We, in this paper, discuss some of these matters with certain apt illustrations as well.« less
ERIC Educational Resources Information Center
Mattson, Beverly
2011-01-01
One of the competitive priorities of the U.S. Department of Education's Race to the Top applications addressed science, technology, engineering, and mathematics (STEM). States that applied were required to submit plans that addressed rigorous courses of study, cooperative partnerships to prepare and assist teachers in STEM content, and prepare…
Threshold for extinction and survival in stochastic tumor immune system
NASA Astrophysics Data System (ADS)
Li, Dongxi; Cheng, Fangjuan
2017-10-01
This paper mainly investigates the stochastic character of tumor growth and extinction in the presence of immune response of a host organism. Firstly, the mathematical model describing the interaction and competition between the tumor cells and immune system is established based on the Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence and stochastic persistence of tumor cells are derived by the rigorous theoretical proofs. Finally, stochastic simulation are taken to substantiate and illustrate the conclusion we have derived. The modeling results will be beneficial to understand to concept of immunoediting, and develop the cancer immunotherapy. Besides, our simple theoretical model can help to obtain new insight into the complexity of tumor growth.
Modeling of composite beams and plates for static and dynamic analysis
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Atilgan, Ali R.; Lee, Bok Woo
1990-01-01
A rigorous theory and corresponding computational algorithms was developed for a variety of problems regarding the analysis of composite beams and plates. The modeling approach is intended to be applicable to both static and dynamic analysis of generally anisotropic, nonhomogeneous beams and plates. Development of a theory for analysis of the local deformation of plates was the major focus. Some work was performed on global deformation of beams. Because of the strong parallel between beams and plates, the two were treated together as thin bodies, especially in cases where it will clarify the meaning of certain terminology and the motivation behind certain mathematical operations.
Seismic waves and earthquakes in a global monolithic model
NASA Astrophysics Data System (ADS)
Roubíček, Tomáš
2018-03-01
The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics is coupled to capture, e.g., (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way together with corresponding interfacial conditions implicitly involved, only when scaling its parameters appropriately in different Earth's layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for an illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains whose various scaling (limits) can lead to Boger's viscoelastic fluid or even to purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are counted in our global model, as well. The rigorous mathematical analysis as far as the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameter, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed by two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed by the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, that quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for the code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve Vlasov equation in the investigation of a number of plasma physics phenomena.
Safety Verification of the Small Aircraft Transportation System Concept of Operations
NASA Technical Reports Server (NTRS)
Carreno, Victor; Munoz, Cesar
2005-01-01
A critical factor in the adoption of any new aeronautical technology or concept of operation is safety. Traditionally, safety is accomplished through a rigorous process that involves human factors, low and high fidelity simulations, and flight experiments. As this process is usually performed on final products or functional prototypes, concept modifications resulting from this process are very expensive to implement. This paper describe an approach to system safety that can take place at early stages of a concept design. It is based on a set of mathematical techniques and tools known as formal methods. In contrast to testing and simulation, formal methods provide the capability of exhaustive state exploration analysis. We present the safety analysis and verification performed for the Small Aircraft Transportation System (SATS) Concept of Operations (ConOps). The concept of operations is modeled using discrete and hybrid mathematical models. These models are then analyzed using formal methods. The objective of the analysis is to show, in a mathematical framework, that the concept of operation complies with a set of safety requirements. It is also shown that the ConOps has some desirable characteristic such as liveness and absence of dead-lock. The analysis and verification is performed in the Prototype Verification System (PVS), which is a computer based specification language and a theorem proving assistant.
Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks
Besada, Juan A.
2017-01-01
In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination dispositive. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157
Historical mathematics in the French eighteenth century.
Richards, Joan L
2006-12-01
At least since the seventeenth century, the strange combination of epistemological certainty and ontological power that characterizes mathematics has made it a major focus of philosophical, social, and cultural negotiation. In the eighteenth century, all of these factors were at play as mathematical thinkers struggled to assimilate and extend the analysis they had inherited from the seventeenth century. A combination of educational convictions and historical assumptions supported a humanistic mathematics essentially defined by its flexibility and breadth. This mathematics was an expression of l'esprit humain, which was unfolding in a progressive historical narrative. The French Revolution dramatically altered the historical and educational landscapes that had supported this eighteenth-century approach, and within thirty years Augustin Louis Cauchy had radically reconceptualized and restructured mathematics to be rigorous rather than narrative.
A primer on thermodynamic-based models for deciphering transcriptional regulatory logic.
Dresch, Jacqueline M; Richards, Megan; Ay, Ahmet
2013-09-01
A rigorous analysis of transcriptional regulation at the DNA level is crucial to the understanding of many biological systems. Mathematical modeling has offered researchers a new approach to understanding this central process. In particular, thermodynamic-based modeling represents the most biophysically informed approach aimed at connecting DNA level regulatory sequences to the expression of specific genes. The goal of this review is to give biologists a thorough description of the steps involved in building, analyzing, and implementing a thermodynamic-based model of transcriptional regulation. The data requirements for this modeling approach are described, the derivation for a specific regulatory region is shown, and the challenges and future directions for the quantitative modeling of gene regulation are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Multi-Disciplinary Knowledge Synthesis for Human Health Assessment on Earth and in Space
NASA Astrophysics Data System (ADS)
Christakos, G.
We discuss methodological developments in multi-disciplinary knowledge synthesis (KS) of human health assessment. A theoretical KS framework can provide the rational means for the assimilation of various information bases (general, site-specific etc.) that are relevant to the life system of interest. KS-based techniques produce a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, and generate informative health state predictions across space-time. The underlying epistemic cognition methodology is based on teleologic criteria and stochastic logic principles. The mathematics of KS involves a powerful and versatile spatiotemporal random field model that accounts rigorously for the uncertainty features of the life system and imposes no restriction on the shape of the probability distributions or the form of the predictors. KS theory is instrumental in understanding natural heterogeneities, assessing crucial human exposure correlations and laws of physical change, and explaining toxicokinetic mechanisms and dependencies in a spatiotemporal life system domain. It is hoped that a better understanding of KS fundamentals would generate multi-disciplinary models that are useful for the maintenance of human health on Earth and in Space.
Numerical Modeling of Sub-Wavelength Anti-Reflective Structures for Solar Module Applications
Han, Katherine; Chang, Chih-Hung
2014-01-01
This paper reviews the current progress in mathematical modeling of anti-reflective subwavelength structures. Methods covered include effective medium theory (EMT), finite-difference time-domain (FDTD), transfer matrix method (TMM), the Fourier modal method (FMM)/rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). Time-based solutions to Maxwell’s equations, such as FDTD, have the benefits of calculating reflectance for multiple wavelengths of light per simulation, but are computationally intensive. Space-discretized methods such as FDTD and FEM output field strength results over the whole geometry and are capable of modeling arbitrary shapes. Frequency-based solutions such as RCWA/FMM and FEM model one wavelength per simulation and are thus able to handle dispersion for regular geometries. Analytical approaches such as TMM are appropriate for very simple thin films. Initial disadvantages such as neglect of dispersion (FDTD), inaccuracy in TM polarization (RCWA), inability to model aperiodic gratings (RCWA), and inaccuracy with metallic materials (FDTD) have been overcome by most modern software. All rigorous numerical methods have accurately predicted the broadband reflection of ideal, graded-index anti-reflective subwavelength structures; ideal structures are tapered nanostructures with periods smaller than the wavelengths of light of interest and lengths that are at least a large portion of the wavelengths considered. PMID:28348287
34 CFR 691.16 - Rigorous secondary school program of study.
Code of Federal Regulations, 2010 CFR
2010-07-01
... MATHEMATICS ACCESS TO RETAIN TALENT GRANT (NATIONAL SMART GRANT) PROGRAMS Application Procedures § 691.16..., 2009. (Approved by the Office of Management and Budget under control number 1845-0078] (Authority: 20 U...
Using Computational and Mechanical Models to Study Animal Locomotion
Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas
2012-01-01
Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026
NASA Astrophysics Data System (ADS)
Tariq, Imran; Humbert-Vidan, Laia; Chen, Tao; South, Christopher P.; Ezhil, Veni; Kirkby, Norman F.; Jena, Rajesh; Nisbet, Andrew
2015-05-01
This paper reports a modelling study of tumour volume dynamics in response to stereotactic ablative radiotherapy (SABR). The main objective was to develop a model that is adequate to describe tumour volume change measured during SABR, and at the same time is not excessively complex as lacking support from clinical data. To this end, various modelling options were explored, and a rigorous statistical method, the Akaike information criterion, was used to help determine a trade-off between model accuracy and complexity. The models were calibrated to the data from 11 non-small cell lung cancer patients treated with SABR. The results showed that it is feasible to model the tumour volume dynamics during SABR, opening up the potential for using such models in a clinical environment in the future.
Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.
Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul
2013-10-01
Mathematical models can be used to study the chemotherapy on tumor cells. Especially, in 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain why an alternating non-cross-resistant chemotherapy is optimal with a simulation approach. Subsequently in 1983, Goldie and Coldman proposed an extended stochastic based model and provided a rigorous mathematical proof to their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments majorly focused on a process with symmetrical parameter settings, and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to consider some optimal policies on the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof to Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to justify the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Achieve, Inc., 2007
2007-01-01
At the request of the Hawaii Department of Education, Achieve conducted a study of Hawaii's 2005 grade 10 State Assessment in reading and mathematics. The study compared the content, rigor and passing (meets proficiency) scores on Hawaii's assessment with those of the six states that participated in Achieve's earlier study, "Do Graduation…
Consistent Chemical Mechanism from Collaborative Data Processing
Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...
2016-04-01
Numerical tool of Process Informatics Model (PrIMe) is mathematically rigorous and numerically efficient approach for analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Boundto-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for the systematic uncertainty and data consistency analyses of the H 2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). The empirical rule for evaluation of the shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for an evaluation of the kinetic datamore » quality and data consistency and for developing predictive kinetic models.« less
Steady-state and dynamic models for particle engulfment during solidification
NASA Astrophysics Data System (ADS)
Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.
2016-06-01
Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible by approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior on engulfment events for many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.
Model of dissolution in the framework of tissue engineering and drug delivery.
Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R
2018-05-22
Dissolution phenomena are ubiquitously present in biomaterials in many different fields. Despite the advantages of simulation-based design of biomaterials in medical applications, additional efforts are needed to derive reliable models which describe the process of dissolution. A phenomenologically based model, available for simulation of dissolution in biomaterials, is introduced in this paper. The model turns into a set of reaction-diffusion equations implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of model parameters on the overall dissolution process. Then, the model is calibrated and validated versus a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for design of biomaterials in the tissue engineering and drug delivery research areas.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
Parker, Aimée; Pin, Carmen; Carding, Simon R.; Watson, Alastair J. M.; Byrne, Helen M.
2017-01-01
Our work addresses two key challenges, one biological and one methodological. First, we aim to understand how proliferation and cell migration rates in the intestinal epithelium are related under healthy, damaged (Ara-C treated) and recovering conditions, and how these relations can be used to identify mechanisms of repair and regeneration. We analyse new data, presented in more detail in a companion paper, in which BrdU/IdU cell-labelling experiments were performed under these respective conditions. Second, in considering how to more rigorously process these data and interpret them using mathematical models, we use a probabilistic, hierarchical approach. This provides a best-practice approach for systematically modelling and understanding the uncertainties that can otherwise undermine the generation of reliable conclusions—uncertainties in experimental measurement and treatment, difficult-to-compare mathematical models of underlying mechanisms, and unknown or unobserved parameters. Both spatially discrete and continuous mechanistic models are considered and related via hierarchical conditional probability assumptions. We perform model checks on both in-sample and out-of-sample datasets and use them to show how to test possible model improvements and assess the robustness of our conclusions. We conclude, for the present set of experiments, that a primarily proliferation-driven model suffices to predict labelled cell dynamics over most time-scales. PMID:28753601
Maclaren, Oliver J; Parker, Aimée; Pin, Carmen; Carding, Simon R; Watson, Alastair J M; Fletcher, Alexander G; Byrne, Helen M; Maini, Philip K
2017-07-01
Our work addresses two key challenges, one biological and one methodological. First, we aim to understand how proliferation and cell migration rates in the intestinal epithelium are related under healthy, damaged (Ara-C treated) and recovering conditions, and how these relations can be used to identify mechanisms of repair and regeneration. We analyse new data, presented in more detail in a companion paper, in which BrdU/IdU cell-labelling experiments were performed under these respective conditions. Second, in considering how to more rigorously process these data and interpret them using mathematical models, we use a probabilistic, hierarchical approach. This provides a best-practice approach for systematically modelling and understanding the uncertainties that can otherwise undermine the generation of reliable conclusions-uncertainties in experimental measurement and treatment, difficult-to-compare mathematical models of underlying mechanisms, and unknown or unobserved parameters. Both spatially discrete and continuous mechanistic models are considered and related via hierarchical conditional probability assumptions. We perform model checks on both in-sample and out-of-sample datasets and use them to show how to test possible model improvements and assess the robustness of our conclusions. We conclude, for the present set of experiments, that a primarily proliferation-driven model suffices to predict labelled cell dynamics over most time-scales.
Which Kind of Mathematics for Quantum Mechanics? the Relevance of H. Weyl's Program of Research
NASA Astrophysics Data System (ADS)
Drago, Antonino
In 1918 Weyl's book Das Kontinuum planned to found anew mathematics upon more conservative bases than both rigorous mathematics and set theory. It gave birth to the so-called Weyl's elementary mathematics, i.e. an intermediate mathematics between the mathematics rejecting at all actual infinity and the classical one including it almost freely. The present paper scrutinises the subsequent Weyl's book Gruppentheorie und Quantenmechanik (1928) as a program for founding anew theoretical physics - through quantum theory - and at the same time developing his mathematics through an improvement of group theory; which, according to Weyl, is a mathematical theory effacing the old distinction between discrete and continuous mathematics. Evidence from Weyl's writings is collected for supporting this interpretation. Then Weyl's program is evaluated as unsuccessful, owing to some crucial difficulties of both physical and mathematical nature. The present clear-cut knowledge of Weyl's elementary mathematics allows us to re-evaluate Weyl's program in order to look for more adequate formulations of quantum mechanics in any weaker kind of mathematics than the classical one.
A single-cell spiking model for the origin of grid-cell patterns
Kempter, Richard
2017-01-01
Spatial cognition in mammals is thought to rely on the activity of grid cells in the entorhinal cortex, yet the fundamental principles underlying the origin of grid-cell firing are still debated. Grid-like patterns could emerge via Hebbian learning and neuronal adaptation, but current computational models remained too abstract to allow direct confrontation with experimental data. Here, we propose a single-cell spiking model that generates grid firing fields via spike-rate adaptation and spike-timing dependent plasticity. Through rigorous mathematical analysis applicable in the linear limit, we quantitatively predict the requirements for grid-pattern formation, and we establish a direct link to classical pattern-forming systems of the Turing type. Our study lays the groundwork for biophysically-realistic models of grid-cell activity. PMID:28968386
On the relation between phase-field crack approximation and gradient damage modelling
NASA Astrophysics Data System (ADS)
Steinke, Christian; Zreid, Imadeddin; Kaliske, Michael
2017-05-01
The finite element implementation of a gradient enhanced microplane damage model is compared to a phase-field model for brittle fracture. Phase-field models and implicit gradient damage models share many similarities despite being conceived from very different standpoints. In both approaches, an additional differential equation and a length scale are introduced. However, while the phase-field method is formulated starting from the description of a crack in fracture mechanics, the gradient method starts from a continuum mechanics point of view. At first, the scope of application for both models is discussed to point out intersections. Then, the analysis of the employed mathematical methods and their rigorous comparison are presented. Finally, numerical examples are introduced to illustrate the findings of the comparison which are summarized in a conclusion at the end of the paper.
Inflammation and immune system activation in aging: a mathematical approach.
Nikas, Jason B
2013-11-19
Memory and learning declines are consequences of normal aging. Since those functions are associated with the hippocampus, I analyzed the global gene expression data from post-mortem hippocampal tissue of 25 old (age ≥ 60 yrs) and 15 young (age ≤ 45 yrs) cognitively intact human subjects. By employing a rigorous, multi-method bioinformatic approach, I identified 36 genes that were the most significant in terms of differential expression; and by employing mathematical modeling, I demonstrated that 7 of the 36 genes were able to discriminate between the old and young subjects with high accuracy. Remarkably, 90% of the known genes from those 36 most significant genes are associated with either inflammation or immune system activation. This suggests that chronic inflammation and immune system over-activity may underlie the aging process of the human brain, and that potential anti-inflammatory treatments targeting those genes may slow down this process and alleviate its symptoms.
MAESTRO: Mathematics and Earth Science Teachers' Resource Organization
NASA Astrophysics Data System (ADS)
Courtier, A. M.; Pyle, E. J.; Fichter, L.; Lucas, S.; Jackson, A.
2013-12-01
The Mathematics and Earth Science Teachers' Resource Organization (MAESTRO) partnership between James Madison University and Harrisonburg City and Page County Public Schools, funded through NSF-GEO. The partnership aims to transform mathematics and Earth science instruction in middle and high schools by developing an integrated mathematics and Earth systems science approach to instruction. This curricular integration is intended to enhance the mathematical skills and confidence of students through concrete, Earth systems-based examples, while increasing the relevance and rigor of Earth science instruction via quantification and mathematical modeling of Earth system phenomena. MAESTRO draws heavily from the Earth Science Literacy Initiative (2009) and is informed by criterion-level standardized test performance data in both mathematics and Earth science. The project has involved two summer professional development workshops, academic year Lesson Study (structured teacher observation and reflection), and will incorporate site-based case studies with direct student involvement. Participating teachers include Grade 6 Science and Mathematics teachers, and Grade 9 Earth Science and Algebra teachers. It is anticipated that the proposed integration across grade bands will first strengthen students' interests in mathematics and science (a problem in middle school) and subsequently reinforce the relevance of mathematics and other sciences (a problem in high school), both in support of Earth systems literacy. MAESTRO's approach to the integration of math and science focuses on using box models to emphasize the interconnections among the geo-, atmo-, bio-, and hydrospheres, and demonstrates the positive and negative feedback processes that connect their mutual evolution. Within this framework we explore specific relationships that can be described both qualitatively and mathematically, using mathematical operations appropriate for each grade level. Site-based case studies, developed in collaboration between teachers and JMU faculty members, provide a tangible, relevant setting in which students can apply and understand mathematical applications and scientific processes related to evolving Earth systems. Initial results from student questionnaires and teacher focus groups suggest that the anticipated impacts of MAESTRO on students are being realized, including increased valuing of mathematics and Earth science in society and transfer between mathematics and science courses. As a high percentage of students in the MAESTRO schools are of low socio-economic status, they also face the prospect of becoming first-generation college students, hopefully considering STEM academic pathways. MAESTRO will drive the development of challenging and engaging instruction designed to draw a larger pool of students into STEM career pathways.
Mathematical Analysis of a Coarsening Model with Local Interactions
NASA Astrophysics Data System (ADS)
Helmers, Michael; Niethammer, Barbara; Velázquez, Juan J. L.
2016-10-01
We consider particles on a one-dimensional lattice whose evolution is governed by nearest-neighbor interactions where particles that have reached size zero are removed from the system. Concentrating on configurations with infinitely many particles, we prove existence of solutions under a reasonable density assumption on the initial data and show that the vanishing of particles and the localized interactions can lead to non-uniqueness. Moreover, we provide a rigorous upper coarsening estimate and discuss generic statistical properties as well as some non-generic behavior of the evolution by means of heuristic arguments and numerical observations.
NASA Astrophysics Data System (ADS)
Kwon, Young-Sam; Lin, Ying-Chieh; Su, Cheng-Fang
2018-04-01
In this paper, we consider the compressible models of magnetohydrodynamic flows giving rise to a variety of mathematical problems in many areas. We derive a rigorous quasi-geostrophic equation governed by magnetic field from the rotational compressible magnetohydrodynamic flows with the well-prepared initial data. It is a first derivation of quasi-geostrophic equation governed by the magnetic field, and the tool is based on the relative entropy method. This paper covers two results: the existence of the unique local strong solution of quasi-geostrophic equation with the good regularity and the derivation of a quasi-geostrophic equation.
Montévil, Maël; Speroni, Lucia; Sonnenschein, Carlos; Soto, Ana M
2016-10-01
In multicellular organisms, relations among parts and between parts and the whole are contextual and interdependent. These organisms and their cells are ontogenetically linked: an organism starts as a cell that divides producing non-identical cells, which organize in tri-dimensional patterns. These association patterns and cells types change as tissues and organs are formed. This contextuality and circularity makes it difficult to establish detailed cause and effect relationships. Here we propose an approach to overcome these intrinsic difficulties by combining the use of two models; 1) an experimental one that employs 3D culture technology to obtain the structures of the mammary gland, namely, ducts and acini, and 2) a mathematical model based on biological principles. The typical approach for mathematical modeling in biology is to apply mathematical tools and concepts developed originally in physics or computer sciences. Instead, we propose to construct a mathematical model based on proper biological principles. Specifically, we use principles identified as fundamental for the elaboration of a theory of organisms, namely i) the default state of cell proliferation with variation and motility and ii) the principle of organization by closure of constraints. This model has a biological component, the cells, and a physical component, a matrix which contains collagen fibers. Cells display agency and move and proliferate unless constrained; they exert mechanical forces that i) act on collagen fibers and ii) on other cells. As fibers organize, they constrain the cells on their ability to move and to proliferate. The model exhibits a circularity that can be interpreted in terms of closure of constraints. Implementing the mathematical model shows that constraints to the default state are sufficient to explain ductal and acinar formation, and points to a target of future research, namely, to inhibitors of cell proliferation and motility generated by the epithelial cells. The success of this model suggests a step-wise approach whereby additional constraints imposed by the tissue and the organism could be examined in silico and rigorously tested by in vitro and in vivo experiments, in accordance with the organicist perspective we embrace. Copyright © 2016. Published by Elsevier Ltd.
Montévil, Maël; Speroni, Lucia; Sonnenschein, Carlos; Soto, Ana M.
2017-01-01
In multicellular organisms, relations among parts and between parts and the whole are contextual and interdependent. These organisms and their cells are ontogenetically linked: an organism starts as a cell that divides producing non-identical cells, which organize in tri-dimensional patterns. These association patterns and cells types change as tissues and organs are formed. This contextuality and circularity makes it difficult to establish detailed cause and effect relationships. Here we propose an approach to overcome these intrinsic difficulties by combining the use of two models; 1) an experimental one that employs 3D culture technology to obtain the structures of the mammary gland, namely, ducts and acini, and 2) a mathematical model based on biological principles. The typical approach for mathematical modeling in biology is to apply mathematical tools and concepts developed originally in physics or computer sciences. Instead, we propose to construct a mathematical model based on proper biological principles. Specifically, we use principles identified as fundamental for the elaboration of a theory of organisms, namely i) the default state of cell proliferation with variation and motility and ii) the principle of organization by closure of constraints. This model has a biological component, the cells, and a physical component, a matrix which contains collagen fibers. Cells display agency and move and proliferate unless constrained; they exert mechanical forces that i) act on collagen fibers and ii) on other cells. As fibers organize, they constrain the cells on their ability to move and to proliferate. The model exhibits a circularity that can be interpreted in terms of closure of constraints. Implementing the mathematical model shows that constraints to the default state are sufficient to explain ductal and acinar formation, and points to a target of future research, namely, to inhibitors of cell proliferation and motility generated by the epithelial cells. The success of this model suggests a step-wise approach whereby additional constraints imposed by the tissue and the organism could be examined in silico and rigorously tested by in vitro and in vivo experiments, in accordance with the organicist perspective we embrace. PMID:27544910
Mathematical Models for Controlled Drug Release Through pH-Responsive Polymeric Hydrogels.
Manga, Ramya D; Jha, Prateek K
2017-02-01
Hydrogels consisting of weakly charged acidic/basic groups are ideal candidates for carriers in oral delivery, as they swell in response to pH changes in the gastrointestinal tract, resulting in drug entrapment at low pH conditions of the stomach and drug release at high pH conditions of the intestine. We have developed 1-dimensional mathematical models to study the drug release behavior through pH-responsive hydrogels. Models are developed for 3 different cases that vary in the level of rigor, which together can be applied to predict both in vitro (drug release from carrier) and in vivo (drug concentration in the plasma) behavior of hydrogel-drug formulations. A detailed study of the effect of hydrogel and drug characteristics and physiological conditions is performed to gain a fundamental insight into the drug release behavior, which may be useful in the design of pH-responsive drug carriers. Finally, we describe a successful application of these models to predict both in vitro and in vivo behavior of docetaxel-loaded micelle in a pH-responsive hydrogel, as reported in a recent experimental study. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Weber, Gerhard-Wilhelm; Ozöğür-Akyüz, Süreyya; Kropat, Erik
2009-06-01
An emerging research area in computational biology and biotechnology is devoted to mathematical modeling and prediction of gene-expression patterns; it nowadays requests mathematics to deeply understand its foundations. This article surveys data mining and machine learning methods for an analysis of complex systems in computational biology. It mathematically deepens recent advances in modeling and prediction by rigorously introducing the environment and aspects of errors and uncertainty into the genetic context within the framework of matrix and interval arithmetics. Given the data from DNA microarray experiments and environmental measurements, we extract nonlinear ordinary differential equations which contain parameters that are to be determined. This is done by a generalized Chebychev approximation and generalized semi-infinite optimization. Then, time-discretized dynamical systems are studied. By a combinatorial algorithm which constructs and follows polyhedra sequences, the region of parametric stability is detected. In addition, we analyze the topological landscape of gene-environment networks in terms of structural stability. As a second strategy, we will review recent model selection and kernel learning methods for binary classification which can be used to classify microarray data for cancerous cells or for discrimination of other kind of diseases. This review is practically motivated and theoretically elaborated; it is devoted to a contribution to better health care, progress in medicine, a better education, and more healthy living conditions.
NASA Technical Reports Server (NTRS)
Chen, Wei; Tsui, Kwok-Leung; Allen, Janet K.; Mistree, Farrokh
1994-01-01
In this paper we introduce a comprehensive and rigorous robust design procedure to overcome some limitations of the current approaches. A comprehensive approach is general enough to model the two major types of robust design applications, namely, robust design associated with the minimization of the deviation of performance caused by the deviation of noise factors (uncontrollable parameters), and robust design due to the minimization of the deviation of performance caused by the deviation of control factors (design variables). We achieve mathematical rigor by using, as a foundation, principles from the design of experiments and optimization. Specifically, we integrate the Response Surface Method (RSM) with the compromise Decision Support Problem (DSP). Our approach is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate. The design of a solar powered irrigation system is used as an example. Our focus in this paper is on illustrating our approach rather than on the results per se.
A Rigorous Geometric Derivation of the Chiral Anomaly in Curved Backgrounds
NASA Astrophysics Data System (ADS)
Bär, Christian; Strohmaier, Alexander
2016-11-01
We discuss the chiral anomaly for a Weyl field in a curved background and show that a novel index theorem for the Lorentzian Dirac operator can be applied to describe the gravitational chiral anomaly. A formula for the total charge generated by the gravitational and gauge field background is derived directly in Lorentzian signature and in a mathematically rigorous manner. It contains a term identical to the integrand in the Atiyah-Singer index theorem and another term involving the {η}-invariant of the Cauchy hypersurfaces.
Mathematics make microbes beautiful, beneficial, and bountiful.
Jungck, John R
2012-01-01
Microbiology is a rich area for visualizing the importance of mathematics in terms of designing experiments, data mining, testing hypotheses, and visualizing relationships. Historically, Nobel Prizes have acknowledged the close interplay between mathematics and microbiology in such examples as the fluctuation test and mutation rates using Poisson statistics by Luria and Delbrück and the use of graph theory of polyhedra by Caspar and Klug. More and more contemporary microbiology journals feature mathematical models, computational algorithms and heuristics, and multidimensional visualizations. While revolutions in research have driven these initiatives, a commensurate effort needs to be made to incorporate much more mathematics into the professional preparation of microbiologists. In order not to be daunting to many educators, a Bloom-like "Taxonomy of Quantitative Reasoning" is shared with explicit examples of microbiological activities for engaging students in (a) counting, measuring, calculating using image analysis of bacterial colonies and viral infections on variegated leaves, measurement of fractal dimensions of beautiful colony morphologies, and counting vertices, edges, and faces on viral capsids and using graph theory to understand self assembly; (b) graphing, mapping, ordering by applying linear, exponential, and logistic growth models of public health and sanitation problems, revisiting Snow's epidemiological map of cholera with computational geometry, and using interval graphs to do complementation mapping, deletion mapping, food webs, and microarray heatmaps; (c) problem solving by doing gene mapping and experimental design, and applying Boolean algebra to gene regulation of operons; (d) analysis of the "Bacterial Bonanza" of microbial sequence and genomic data using bioinformatics and phylogenetics; (e) hypothesis testing-again with phylogenetic trees and use of Poisson statistics and the Luria-Delbrück fluctuation test; and (f) modeling of biodiversity by using game theory, of epidemics with algebraic models, bacterial motion by using motion picture analysis and fluid mechanics of motility in multiple dimensions through the physics of "Life at Low Reynolds Numbers," and pattern formation of quorum sensing bacterial populations. Through a developmental model for preprofessional education that emphasizes the beauty, utility, and diversity of microbiological systems, we hope to foster creativity as well as mathematically rigorous reasoning. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sussman, Joshua Michael
This three-paper dissertation explores problems with the use of standardized tests as outcome measures for the evaluation of instructional interventions in mathematics and science. Investigators commonly use students' scores on standardized tests to evaluate the impact of instructional programs designed to improve student achievement. However, evidence suggests that the standardized tests may not measure, or may not measure well, the student learning caused by the interventions. This problem is special case of a basic problem in applied measurement related to understanding whether a particular test provides accurate and useful information about the impact of an educational intervention. The three papers explore different aspects of the issue and highlight the potential benefits of (a) using particular research methods and of (b) implementing changes to educational policy that would strengthen efforts to reform instructional intervention in mathematics and science. The first paper investigates measurement problems related to the use of standardized tests in applied educational research. Analysis of the research projects funded by the Institute of Education Sciences (IES) Mathematics and Science Education Program permitted me to address three main research questions. One, how often are standardized tests used to evaluate new educational interventions? Two, do the tests appear to measure the same thing that the intervention teaches? Three, do investigators establish validity evidence for the specific uses of the test? The research documents potential problems and actual problems related to the use of standardized tests in leading applied research, and suggests changes to policy that would address measurement issues and improve the rigor of applied educational research. The second paper explores the practical consequences of misalignment between an outcome measure and an educational intervention in the context of summative evaluation. Simulated evaluation data and a psychometric model of alignment grounded in item response modeling generate the results that address the following research question: how do differences between what a test measures and what an intervention teaches influence the results of an evaluation? The simulation derives a functional relationship between alignment, defined as the match between the test and the intervention, and treatment sensitivity, defined as the statistical power for detecting the impact of an intervention. The paper presents a new model of the effect of misalignment on the results of an evaluation and recommendations for outcome measure selection. The third paper documents the educational effectiveness of the Learning Mathematics through Representations (LMR) lesson sequence for students classified as English Learners (ELs). LMR is a research-based curricular unit designed to support upper elementary students' understandings of integers and fractions, areas considered foundational for the development of higher mathematics. The experimental evaluation contains a multilevel analysis of achievement data from two assessments: a standardized test and a researcher-developed assessment. The study coordinates the two sources of research data with a theoretical mechanism of action in order to rigorously document the effectiveness and educational equity of LMR for ELs using multiple sources of information.
Vinyard, David J; Zachary, Chase E; Ananyev, Gennady; Dismukes, G Charles
2013-07-01
Forty-three years ago, Kok and coworkers introduced a phenomenological model describing period-four oscillations in O2 flash yields during photosynthetic water oxidation (WOC), which had been first reported by Joliot and coworkers. The original two-parameter Kok model was subsequently extended in its level of complexity to better simulate diverse data sets, including intact cells and isolated PSII-WOCs, but at the expense of introducing physically unrealistic assumptions necessary to enable numerical solutions. To date, analytical solutions have been found only for symmetric Kok models (inefficiencies are equally probable for all intermediates, called "S-states"). However, it is widely accepted that S-state reaction steps are not identical and some are not reversible (by thermodynamic restraints) thereby causing asymmetric cycles. We have developed a mathematically more rigorous foundation that eliminates unphysical assumptions known to be in conflict with experiments and adopts a new experimental constraint on solutions. This new algorithm termed STEAMM for S-state Transition Eigenvalues of Asymmetric Markov Models enables solutions to models having fewer adjustable parameters and uses automated fitting to experimental data sets, yielding higher accuracy and precision than the classic Kok or extended Kok models. This new tool provides a general mathematical framework for analyzing damped oscillations arising from any cycle period using any appropriate Markov model, regardless of symmetry. We illustrate applications of STEAMM that better describe the intrinsic inefficiencies for photon-to-charge conversion within PSII-WOCs that are responsible for damped period-four and period-two oscillations of flash O2 yields across diverse species, while using simpler Markov models free from unrealistic assumptions. Copyright © 2013 Elsevier B.V. All rights reserved.
On decentralized design: Rationale, dynamics, and effects on decision-making
NASA Astrophysics Data System (ADS)
Chanron, Vincent
The focus of this dissertation is the design of complex systems, including engineering systems such as cars, airplanes, and satellites. Companies who design these systems are under constant pressure to design better products that meet customer expectations, and competition forces them to develop them faster. One of the responses of the industry to these conflicting challenges has been the decentralization of the design responsibilities. The current lack of understanding of the dynamics of decentralized design processes is the main motivation for this research, and places value on the descriptive base. It identifies the main reasons and the true benefits for companies to decentralize the design of their products. It also demonstrates the limitations of this approach by listing the relevant issues and problems created by the decentralization of decisions. Based on these observations, a game-theoretic approach to decentralized design is proposed to model the decisions made during the design process. The dynamics are modeled using mathematical formulations inspired from control theory. Building upon this formalism, the issue of convergence in decentralized design is analyzed: the equilibrium points of the design space are identified and convergent and divergent patterns are recognized. This rigorous investigation of the design process provides motivation and support for proposing new approaches to decentralized design problems. Two methods are developed, which aim at improving the design process in two ways: decreasing the product development time, and increasing the optimality of the final design. The frame of these methods are inspired by eigenstructure decomposition and set-based design, respectively. The value of the research detailed within this dissertation is in the proposed methods which are built upon the sound mathematical formalism developed. The contribution of this work is two fold: rigorous investigation of the design process, and practical support to decision-making in decentralized environments.
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
The 1/ N Expansion of Tensor Models Beyond Perturbation Theory
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2014-09-01
We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion) we show that the cumulants write as explicit series in 1/ N plus bounded rest terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/ N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants respect an uniform scaling bound. In particular the quartically perturbed measures fall, in the N→ ∞ limit, in the universality class of Gaussian tensor models.
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and in the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate microH and the concentration of heterotrophic biomass XBH.
Improved mathematical and computational tools for modeling photon propagation in tissue
NASA Astrophysics Data System (ADS)
Calabro, Katherine Weaver
Light interacts with biological tissue through two predominant mechanisms: scattering and absorption, which are sensitive to the size and density of cellular organelles, and to biochemical composition (ex. hemoglobin), respectively. During the progression of disease, tissues undergo a predictable set of changes in cell morphology and vascularization, which directly affect their scattering and absorption properties. Hence, quantification of these optical property differences can be used to identify the physiological biomarkers of disease with interest often focused on cancer. Diffuse reflectance spectroscopy is a diagnostic tool, wherein broadband visible light is transmitted through a fiber optic probe into a turbid medium, and after propagating through the sample, a fraction of the light is collected at the surface as reflectance. The measured reflectance spectrum can be analyzed with appropriate mathematical models to extract the optical properties of the tissue, and from these, a set of physiological properties. A number of models have been developed for this purpose using a variety of approaches -- from diffusion theory, to computational simulations, and empirical observations. However, these models are generally limited to narrow ranges of tissue and probe geometries. In this thesis, reflectance models were developed for a much wider range of measurement parameters, and influences such as the scattering phase function and probe design were investigated rigorously for the first time. The results provide a comprehensive understanding of the factors that influence reflectance, with novel insights that, in some cases, challenge current assumptions in the field. An improved Monte Carlo simulation program, designed to run on a graphics processing unit (GPU), was built to simulate the data used in the development of the reflectance models. Rigorous error analysis was performed to identify how inaccuracies in modeling assumptions can be expected to affect the accuracy of extracted optical property values from experimentally-acquired reflectance spectra. From this analysis, probe geometries that offer the best robustness against error in estimation of physiological properties from tissue, are presented. Finally, several in vivo studies demonstrating the use of reflectance spectroscopy for both research and clinical applications are presented.
Intelligent control of a planning system for astronaut training.
Ortiz, J; Chen, G
1999-07-01
This work intends to design, analyze and solve, from the systems control perspective, a complex, dynamic, and multiconstrained planning system for generating training plans for crew members of the NASA-led International Space Station. Various intelligent planning systems have been developed within the framework of artificial intelligence. These planning systems generally lack a rigorous mathematical formalism to allow a reliable and flexible methodology for their design, modeling, and performance analysis in a dynamical, time-critical, and multiconstrained environment. Formulating the planning problem in the domain of discrete-event systems under a unified framework such that it can be modeled, designed, and analyzed as a control system will provide a self-contained theory for such planning systems. This will also provide a means to certify various planning systems for operations in the dynamical and complex environments in space. The work presented here completes the design, development, and analysis of an intricate, large-scale, and representative mathematical formulation for intelligent control of a real planning system for Space Station crew training. This planning system has been tested and used at NASA-Johnson Space Center.
NASA Astrophysics Data System (ADS)
Ogungbemi, Kayode; Han, Xianming; Blosser, Micheal; Misra, Prabhakar; LASER Spectroscopy Group Collaboration
2014-03-01
Optogalvanic transitions have been recorded and fitted for 1s5 - 2p7\\ (621.7 nm), 1s5 - 2p8 (633.4 nm) and 1s5 - 2p9 (640.2 nm) transitions of neon in a Fe-Ne hollow cathode plasma discharge as a function of current (2-19 mA) and time evolution (0-50 microsec). The optogalvanic waveforms have been fitted to a Monte carlo mathematical model. The variation in the excited population of neon is governed by the rate of collision of the atoms involving the common metastable state (1s5) for the three transitions investigated. The concomitant changes in amplitudes and intensities of the optogalvanic signal waveforms associated with these transitions have been studied rigorously and the fitted parameters obtained using the Monte Carlo algorithm to help better understand the physics of the hollow cathode discharge. Thanks to Laser Spectroscopy group in Physics and Astronomy Dept. Howard University Washington DC.
Understanding the persistence of measles: reconciling theory, simulation and observation.
Keeling, Matt J; Grenfell, Bryan T
2002-01-01
Ever since the pattern of localized extinction associated with measles was discovered by Bartlett in 1957, many models have been developed in an attempt to reproduce this phenomenon. Recently, the use of constant infectious and incubation periods, rather than the more convenient exponential forms, has been presented as a simple means of obtaining realistic persistence levels. However, this result appears at odds with rigorous mathematical theory; here we reconcile these differences. Using a deterministic approach, we parameterize a variety of models to fit the observed biennial attractor, thus determining the level of seasonality by the choice of model. We can then compare fairly the persistence of the stochastic versions of these models, using the 'best-fit' parameters. Finally, we consider the differences between the observed fade-out pattern and the more theoretically appealing 'first passage time'. PMID:11886620
Quantization of the nonlinear sigma model revisited
NASA Astrophysics Data System (ADS)
Nguyen, Timothy
2016-08-01
We revisit the subject of perturbatively quantizing the nonlinear sigma model in two dimensions from a rigorous, mathematical point of view. Our main contribution is to make precise the cohomological problem of eliminating potential anomalies that may arise when trying to preserve symmetries under quantization. The symmetries we consider are twofold: (i) diffeomorphism covariance for a general target manifold; (ii) a transitive group of isometries when the target manifold is a homogeneous space. We show that there are no anomalies in case (i) and that (ii) is also anomaly-free under additional assumptions on the target homogeneous space, in agreement with the work of Friedan. We carry out some explicit computations for the O(N)-model. Finally, we show how a suitable notion of the renormalization group establishes the Ricci flow as the one loop renormalization group flow of the nonlinear sigma model.
NASA Technical Reports Server (NTRS)
Laxmanan, V.
1985-01-01
A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on the modification of an analysis due to Burden and Hunt (1974) and predicts correctly in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.
Self-consistent radiation-based simulation of electric arcs: II. Application to gas circuit breakers
NASA Astrophysics Data System (ADS)
Iordanidis, A. A.; Franck, C. M.
2008-07-01
An accurate and robust method for radiative heat transfer simulation for arc applications was presented in the previous paper (part I). In this paper a self-consistent mathematical model based on computational fluid dynamics and a rigorous radiative heat transfer model is described. The model is applied to simulate switching arcs in high voltage gas circuit breakers. The accuracy of the model is proven by comparison with experimental data for all arc modes. The ablation-controlled arc model is used to simulate high current PTFE arcs burning in cylindrical tubes. Model accuracy for the lower current arcs is evaluated using experimental data on the axially blown SF6 arc in steady state and arc resistance measurements close to current zero. The complete switching process with the arc going through all three phases is also simulated and compared with the experimental data from an industrial circuit breaker switching test.
Uncertainty and variability in computational and mathematical models of cardiac physiology.
Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H
2016-12-01
Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Symmetry Properties of Potentiometric Titration Curves.
ERIC Educational Resources Information Center
Macca, Carlo; Bombi, G. Giorgio
1983-01-01
Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)
Mokhtari, Amir; Oryang, David; Chen, Yuhuan; Pouillot, Regis; Van Doren, Jane
2018-01-08
We developed a probabilistic mathematical model for the postharvest processing of leafy greens focusing on Escherichia coli O157:H7 contamination of fresh-cut romaine lettuce as the case study. Our model can (i) support the investigation of cross-contamination scenarios, and (ii) evaluate and compare different risk mitigation options. We used an agent-based modeling framework to predict the pathogen prevalence and levels in bags of fresh-cut lettuce and quantify spread of E. coli O157:H7 from contaminated lettuce to surface areas of processing equipment. Using an unbalanced factorial design, we were able to propagate combinations of random values assigned to model inputs through different processing steps and ranked statistically significant inputs with respect to their impacts on selected model outputs. Results indicated that whether contamination originated on incoming lettuce heads or on the surface areas of processing equipment, pathogen prevalence among bags of fresh-cut lettuce and batches was most significantly impacted by the level of free chlorine in the flume tank and frequency of replacing the wash water inside the tank. Pathogen levels in bags of fresh-cut lettuce were most significantly influenced by the initial levels of contamination on incoming lettuce heads or surface areas of processing equipment. The influence of surface contamination on pathogen prevalence or levels in fresh-cut bags depended on the location of that surface relative to the flume tank. This study demonstrates that developing a flexible yet mathematically rigorous modeling tool, a "virtual laboratory," can provide valuable insights into the effectiveness of individual and combined risk mitigation options. © 2018 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
Boundary-layer effects in composite laminates: Free-edge stress singularities, part 6
NASA Technical Reports Server (NTRS)
Wanag, S. S.; Choi, I.
1981-01-01
A rigorous mathematical model was obtained for the boundary-layer free-edge stress singularity in angleplied and crossplied fiber composite laminates. The solution was obtained using a method consisting of complex-variable stress function potentials and eigenfunction expansions. The required order of the boundary-layer stress singularity is determined by solving the transcendental characteristic equation obtained from the homogeneous solution of the partial differential equations. Numerical results obtained show that the boundary-layer stress singularity depends only upon material elastic constants and fiber orientation of the adjacent plies. For angleplied and crossplied laminates the order of the singularity is weak in general.
The new camera calibration system at the US Geological Survey
Light, D.L.
1992-01-01
Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. An explanation of the Geological Survey's calibration facility and the additional calibration parameters now being provided in the USGS calibration certificate are reviewed. -Author
Technical, analytical and computer support
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a rigorous mathematical model for the design and performance analysis of cylindrical silicon-germanium thermoelectric generators is reported that consists of two parts, a steady-state (static) and a transient (dynamic) part. The material study task involves the definition and implementation of a material study that aims to experimentally characterize the long term behavior of the thermoelectric properties of silicon-germanium alloys as a function of temperature. Analytical and experimental efforts are aimed at the determination of the sublimation characteristics of silicon germanium alloys and the study of sublimation effects on RTG performance. Studies are also performed on a variety of specific topics on thermoelectric energy conversion.
On the Wind Generation of Water Waves
NASA Astrophysics Data System (ADS)
Bühler, Oliver; Shatah, Jalal; Walsh, Samuel; Zeng, Chongchun
2016-11-01
In this work, we consider the mathematical theory of wind generated water waves. This entails determining the stability properties of the family of laminar flow solutions to the two-phase interface Euler equation. We present a rigorous derivation of the linearized evolution equations about an arbitrary steady solution, and, using this, we give a complete proof of the instability criterion of M iles [16]. Our analysis is valid even in the presence of surface tension and a vortex sheet (discontinuity in the tangential velocity across the air-sea interface). We are thus able to give a unified equation connecting the Kelvin-Helmholtz and quasi-laminar models of wave generation.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.; ...
2016-12-30
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimatesmore » of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.« less
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimatesmore » of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.« less
The model of drugs distribution dynamics in biological tissue
NASA Astrophysics Data System (ADS)
Ginevskij, D. A.; Izhevskij, P. V.; Sheino, I. N.
2017-09-01
The dose distribution by Neutron Capture Therapy follows the distribution of 10B in the tissue. The modern models of pharmacokinetics of drugs describe the processes occurring in conditioned "chambers" (blood-organ-tumor), but fail to describe the spatial distribution of the drug in the tumor and in normal tissue. The mathematical model of the spatial distribution dynamics of drugs in the tissue, depending on the concentration of the drug in the blood, was developed. The modeling method is the representation of the biological structure in the form of a randomly inhomogeneous medium in which the 10B distribution occurs. The parameters of the model, which cannot be determined rigorously in the experiment, are taken as the quantities subject to the laws of the unconnected random processes. The estimates of 10B distribution preparations in the tumor and healthy tissue, inside/outside the cells, are obtained.
Statistical ecology comes of age.
Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-12-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.
Standard representation and unified stability analysis for dynamic artificial neural network models.
Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D
2018-02-01
An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.
Statistical ecology comes of age
Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-01-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151
Gordon, M. J. C.
2015-01-01
Robin Milner's paper, ‘The use of machines to assist in rigorous proof’, introduces methods for automating mathematical reasoning that are a milestone in the development of computer-assisted theorem proving. His ideas, particularly his theory of tactics, revolutionized the architecture of proof assistants. His methodology for automating rigorous proof soundly, particularly his theory of type polymorphism in programing, led to major contributions to the theory and design of programing languages. His citation for the 1991 ACM A.M. Turing award, the most prestigious award in computer science, credits him with, among other achievements, ‘probably the first theoretically based yet practical tool for machine assisted proof construction’. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society. PMID:25750147
NASA Astrophysics Data System (ADS)
Calì, M.; Santarelli, M. G. L.; Leone, P.
Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kW e SOFC Field Unit, built by Siemens-Westinghouse Power Corporation (SWPC), which is at present (May 2005) starting its operation and which will supply electric and thermal power to the GTT factory. In order to take the better advantage from the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and run a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2 k factorial experiments), of the CHP 100 performance. First, the mathematical model has been calibrated with the results acquired during the first CHP100 demonstration at EDB/ELSAM in Westerwoort. After, the simulated tests have been performed in the form of computer experimental session, and the measurement uncertainties have been simulated with perturbation imposed to the model independent variables. The statistical methodology used for the computer experimental analysis is the factorial design (Yates' Technique): using the ANOVA technique the effect of the main independent variables (air utilization factor U ox, fuel utilization factor U F, internal fuel and air preheating and anodic recycling flow rate) has been investigated in a rigorous manner. Analysis accounts for the effects of parameters on stack electric power, thermal recovered power, single cell voltage, cell operative temperature, consumed fuel flow and steam to carbon ratio. Each main effect and interaction effect of parameters is shown with particular attention on generated electric power and stack heat recovered.
Towards a Unified Theory of Engineering Education
ERIC Educational Resources Information Center
Salcedo Orozco, Oscar H.
2017-01-01
STEM education is an interdisciplinary approach to learning where rigorous academic concepts are coupled with real-world lessons and activities as students apply science, technology, engineering, and mathematics in contexts that make connections between school, community, work, and the global enterprise enabling STEM literacy (Tsupros, Kohler and…
Evaluation, Instruction and Policy Making. IIEP Seminar Paper: 9.
ERIC Educational Resources Information Center
Bloom, Benjamin S.
Recently, educational evaluation has attempted to use the precision, objectivity, and mathematical rigor of the psychological measurement field as well as to find ways in which instrumentation and data utilization could more directly be related to educational institutions, educational processes, and educational purposes. The linkages between…
Investigations into phase effects from diffracted Gaussian beams for high-precision interferometry
NASA Astrophysics Data System (ADS)
Lodhia, Deepali
Gravitational wave detectors are a new class of observatories aiming to detect gravitational waves from cosmic sources. All-reflective interferometer configurations have been proposed for future detectors, replacing transmissive optics with diffractive elements, thereby reducing thermal issues associated with power absorption. However, diffraction gratings introduce additional phase noise, creating more stringent conditions for alignment stability, and further investigations are required into all-reflective interferometers. A suitable mathematical framework using Gaussian modes is required for analysing the alignment stability using diffraction gratings. Such a framework was created, whereby small beam displacements are modelled using a modal technique. It was confirmed that the original modal-based model does not contain the phase changes associated with grating displacements. Experimental tests verified that the phase of a diffracted Gaussian beam is independent of the beam shape. Phase effects were further examined using a rigorous time-domain simulation tool. These findings show that the perceived phase difference is based on an intrinsic change of coordinate system within the modal-based model, and that the extra phase can be added manually to the modal expansion. This thesis provides a well-tested and detailed mathematical framework that can be used to develop simulation codes to model more complex layouts of all-reflective interferometers.
Cooperative interactions in dense thermal Rb vapour confined in nm-scale cells
NASA Astrophysics Data System (ADS)
Keaveney, James
Gravitational wave detectors are a new class of observatories aiming to detect gravitational waves from cosmic sources. All-reflective interferometer configurations have been proposed for future detectors, replacing transmissive optics with diffractive elements, thereby reducing thermal issues associated with power absorption. However, diffraction gratings introduce additional phase noise, creating more stringent conditions for alignment stability, and further investigations are required into all-reflective interferometers. A suitable mathematical framework using Gaussian modes is required for analysing the alignment stability using diffraction gratings. Such a framework was created, whereby small beam displacements are modelled using a modal technique. It was confirmed that the original modal-based model does not contain the phase changes associated with grating displacements. Experimental tests verified that the phase of a diffracted Gaussian beam is independent of the beam shape. Phase effects were further examined using a rigorous time-domain simulation tool. These findings show that the perceived phase difference is based on an intrinsic change of coordinate system within the modal-based model, and that the extra phase can be added manually to the modal expansion. This thesis provides a well-tested and detailed mathematical framework that can be used to develop simulation codes to model more complex layouts of all-reflective interferometers.
Mechanism-Based Mathematical Model for Gating of Ionotropic Glutamate Receptors.
Dai, Jian; Wollmuth, Lonnie P; Zhou, Huan-Xiang
2015-08-27
We present a mathematical model for ionotropic glutamate receptors (iGluR's) that is built on mechanistic understanding and yields a number of thermodynamic and kinetic properties of channel gating. iGluR's are ligand-gated ion channels responsible for the vast majority of fast excitatory neurotransmission in the central nervous system. The effects of agonist-induced closure of the ligand-binding domain (LBD) are transmitted to the transmembrane channel (TMC) via interdomain linkers. Our model demonstrates that, relative to full agonists, partial agonists may reduce either the degree of LBD closure or the curvature of the LBD free energy basin, leading to less stabilization of the channel open state and hence lower channel open probability. A rigorous relation is derived between the channel closed-to-open free energy difference and the tension within the linker. Finally, by treating LBD closure and TMC opening as diffusive motions, we obtain gating trajectories that resemble stochastic current traces from single-channel recordings and calculate the rate constants for transitions between the channel open and closed states. Our model can be implemented by molecular dynamics simulations to realistically depict iGluR gating and may guide functional experiments in gaining deeper insight into this essential family of channel proteins.
NASA Astrophysics Data System (ADS)
Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem
2018-05-01
In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.
Solving America's Math Problem
ERIC Educational Resources Information Center
Vigdor, Jacob
2013-01-01
Concern about students' math achievement is nothing new, and debates about the mathematical training of the nation's youth date back a century or more. In the early 20th century, American high-school students were starkly divided, with rigorous math courses restricted to a college-bound elite. At midcentury, the "new math" movement sought,…
A Novel Approach to Physiology Education for Biomedical Engineering Students
ERIC Educational Resources Information Center
DiCecco, J.; Wu, J.; Kuwasawa, K.; Sun, Y.
2007-01-01
It is challenging for biomedical engineering programs to incorporate an indepth study of the systemic interdependence of cells, tissues, and organs into the rigorous mathematical curriculum that is the cornerstone of engineering education. To be sure, many biomedical engineering programs require their students to enroll in anatomy and physiology…
ERIC Educational Resources Information Center
Cassata-Widera, Amy; Century, Jeanne; Kim, Dae Y.
2011-01-01
The practical need for multidimensional measures of fidelity of implementation (FOI) of reform-based science, technology, engineering, and mathematics (STEM) instructional materials, combined with a theoretical need in the field for a shared conceptual framework that could support accumulating knowledge on specific enacted program elements across…
Group Practices: A New Way of Viewing CSCL
ERIC Educational Resources Information Center
Stahl, Gerry
2017-01-01
The analysis of "group practices" can make visible the work of novices learning how to inquire in science or mathematics. These ubiquitous practices are invisibly taken for granted by adults, but can be observed and rigorously studied in adequate traces of online collaborative learning. Such an approach contrasts with traditional…
Exploring in Aeronautics. An Introduction to Aeronautical Sciences.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Cleveland, OH. Lewis Research Center.
This curriculum guide is based on a year of lectures and projects of a contemporary special-interest Explorer program intended to provide career guidance and motivation for promising students interested in aerospace engineering and scientific professions. The adult-oriented program avoids technicality and rigorous mathematics and stresses real…
Virginia's College and Career Readiness Initiative
ERIC Educational Resources Information Center
Virginia Department of Education, 2010
2010-01-01
In 1995, Virginia began a broad educational reform program that resulted in revised, rigorous content standards, the Virginia Standards of Learning (SOL), in the content areas of English, mathematics, science, and history and social science. These grade-by-grade and course-based standards were developed over 14 months with revision teams including…
Math Exchanges: Guiding Young Mathematicians in Small-Group Meetings
ERIC Educational Resources Information Center
Wedekind, Kassia Omohundro
2011-01-01
Traditionally, small-group math instruction has been used as a format for reaching children who struggle to understand. Math coach Kassia Omohundro Wedekind uses small-group instruction as the centerpiece of her math workshop approach, engaging all students in rigorous "math exchanges." The key characteristics of these mathematical conversations…
Zoos, Aquariums, and Expanding Students' Data Literacy
ERIC Educational Resources Information Center
Mokros, Jan; Wright, Tracey
2009-01-01
Zoo and aquarium educators are increasingly providing educationally rigorous programs that connect their animal collections with curriculum standards in mathematics as well as science. Partnering with zoos and aquariums is a powerful way for teachers to provide students with more opportunities to observe, collect, and analyze scientific data. This…
The Markov process admits a consistent steady-state thermodynamic formalism
NASA Astrophysics Data System (ADS)
Peng, Liangrong; Zhu, Yi; Hong, Liu
2018-01-01
The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived in mathematics. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jump, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes to that for master equations, as the discretization step gets smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of underlying detailed models.
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
Benson, Neil; van der Graaf, Piet H; Peletier, Lambertus A
2017-11-15
A key element of the drug discovery process is target selection. Although the topic is subject to much discussion and experimental effort, there are no defined quantitative rules around optimal selection. Often 'rules of thumb', that have not been subject to rigorous exploration, are used. In this paper we explore the 'rule of thumb' notion that the molecule that initiates a pathway signal is the optimal target. Given the multi-factorial and complex nature of this question, we have simplified an example pathway to its logical minimum of two steps and used a mathematical model of this to explore the different options in the context of typical small and large molecule drugs. In this paper, we report the conclusions of our analysis and describe the analysis tool and methods used. These provide a platform to enable a more extensive enquiry into this important topic. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantum correlations and dynamics from classical random fields valued in complex Hilbert spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrennikov, Andrei
2010-08-15
One of the crucial differences between mathematical models of classical and quantum mechanics (QM) is the use of the tensor product of the state spaces of subsystems as the state space of the corresponding composite system. (To describe an ensemble of classical composite systems, one uses random variables taking values in the Cartesian product of the state spaces of subsystems.) We show that, nevertheless, it is possible to establish a natural correspondence between the classical and the quantum probabilistic descriptions of composite systems. Quantum averages for composite systems (including entangled) can be represented as averages with respect to classical randommore » fields. It is essentially what Albert Einstein dreamed of. QM is represented as classical statistical mechanics with infinite-dimensional phase space. While the mathematical construction is completely rigorous, its physical interpretation is a complicated problem. We present the basic physical interpretation of prequantum classical statistical field theory in Sec. II. However, this is only the first step toward real physical theory.« less
Thermochemical nonequilibrium in atomic hydrogen at elevated temperatures
NASA Technical Reports Server (NTRS)
Scott, R. K.
1972-01-01
A numerical study of the nonequilibrium flow of atomic hydrogen in a cascade arc was performed to obtain insight into the physics of the hydrogen cascade arc. A rigorous mathematical model of the flow problem was formulated, incorporating the important nonequilibrium transport phenomena and atomic processes which occur in atomic hydrogen. Realistic boundary conditions, including consideration of the wall electrostatic sheath phenomenon, were included in the model. The governing equations of the asymptotic region of the cascade arc were obtained by writing conservation of mass and energy equations for the electron subgas, an energy conservation equation for heavy particles and an equation of state. Finite-difference operators for variable grid spacing were applied to the governing equations and the resulting system of strongly coupled, stiff equations were solved numerically by the Newton-Raphson method.
Seismic waves in a self-gravitating planet
NASA Astrophysics Data System (ADS)
Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther
2013-04-01
The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.
Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang
2016-10-03
Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
A characterization of linearly repetitive cut and project sets
NASA Astrophysics Data System (ADS)
Haynes, Alan; Koivusalo, Henna; Walton, James
2018-02-01
For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
On making cuts for magnetic scalar potentials in multiply connected regions
NASA Astrophysics Data System (ADS)
Kotiuga, P. R.
1987-04-01
The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1981), Chap. 1, Article 20] and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. This problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The technique used in the proof expose the incredible intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's (Ph. D. Thesis, McGill University, Montreal, 1984) heuristic interpretation of cuts and duality theorems via intersection matrices.
Collisional damping rates for plasma waves
NASA Astrophysics Data System (ADS)
Tigik, S. F.; Ziebell, L. F.; Yoon, P. H.
2016-06-01
The distinction between the plasma dynamics dominated by collisional transport versus collective processes has never been rigorously addressed until recently. A recent paper [P. H. Yoon et al., Phys. Rev. E 93, 033203 (2016)] formulates for the first time, a unified kinetic theory in which collective processes and collisional dynamics are systematically incorporated from first principles. One of the outcomes of such a formalism is the rigorous derivation of collisional damping rates for Langmuir and ion-acoustic waves, which can be contrasted to the heuristic customary approach. However, the results are given only in formal mathematical expressions. The present brief communication numerically evaluates the rigorous collisional damping rates by considering the case of plasma particles with Maxwellian velocity distribution function so as to assess the consequence of the rigorous formalism in a quantitative manner. Comparison with the heuristic ("Spitzer") formula shows that the accurate damping rates are much lower in magnitude than the conventional expression, which implies that the traditional approach over-estimates the importance of attenuation of plasma waves by collisional relaxation process. Such a finding may have a wide applicability ranging from laboratory to space and astrophysical plasmas.
Accuracy and performance of 3D mask models in optical projection lithography
NASA Astrophysics Data System (ADS)
Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar
2011-04-01
Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.
MI-Sim: A MATLAB package for the numerical analysis of microbial ecological interactions.
Wade, Matthew J; Oakley, Jordan; Harbisher, Sophie; Parker, Nicholas G; Dolfing, Jan
2017-01-01
Food-webs and other classes of ecological network motifs, are a means of describing feeding relationships between consumers and producers in an ecosystem. They have application across scales where they differ only in the underlying characteristics of the organisms and substrates describing the system. Mathematical modelling, using mechanistic approaches to describe the dynamic behaviour and properties of the system through sets of ordinary differential equations, has been used extensively in ecology. Models allow simulation of the dynamics of the various motifs and their numerical analysis provides a greater understanding of the interplay between the system components and their intrinsic properties. We have developed the MI-Sim software for use with MATLAB to allow a rigorous and rapid numerical analysis of several common ecological motifs. MI-Sim contains a series of the most commonly used motifs such as cooperation, competition and predation. It does not require detailed knowledge of mathematical analytical techniques and is offered as a single graphical user interface containing all input and output options. The tools available in the current version of MI-Sim include model simulation, steady-state existence and stability analysis, and basin of attraction analysis. The software includes seven ecological interaction motifs and seven growth function models. Unlike other system analysis tools, MI-Sim is designed as a simple and user-friendly tool specific to ecological population type models, allowing for rapid assessment of their dynamical and behavioural properties.
NASA Astrophysics Data System (ADS)
Ray, Nadja; Rupp, Andreas; Knabner, Peter
2016-04-01
Soil is arguably the most prominent example of a natural porous medium that is composed of a porous matrix and a pore space. Within this framework and in terms of soil's heterogeneity, we first consider transport and fluid flow at the pore scale. From there, we develop a mechanistic model and upscale it mathematically to transfer our model from the small scale to that of the mesoscale (laboratory scale). The mathematical framework of (periodic) homogenization (in principal) rigorously facilitates such processes by exactly computing the effective coefficients/parameters by means of the pore geometry and processes. In our model, various small-scale soil processes may be taken into account: molecular diffusion, convection, drift emerging from electric forces, and homogeneous reactions of chemical species in a solvent. Additionally, our model may consider heterogeneous reactions at the porous matrix, thus altering both the porosity and the matrix. Moreover, our model may additionally address biophysical processes, such as the growth of biofilms and how this affects the shape of the pore space. Both of the latter processes result in an intrinsically variable soil structure in space and time. Upscaling such models under the assumption of a locally periodic setting must be performed meticulously to preserve information regarding the complex coupling of processes in the evolving heterogeneous medium. Generally, a micro-macro model emerges that is then comprised of several levels of couplings: Macroscopic equations that describe the transport and fluid flow at the scale of the porous medium (mesoscale) include averaged time- and space-dependent coefficient functions. These functions may be explicitly computed by means of auxiliary cell problems (microscale). Finally, the pore space in which the cell problems are defined is time- and space dependent and its geometry inherits information from the transport equation's solutions. Numerical computations using mixed finite elements and potentially random initial data, e.g. that of porosity, complement our theoretical results. Our investigations contribute to the theoretical understanding of the link between soil formation and soil functions. This general framework may be applied to various problems in soil science for a range of scales, such as the formation and turnover of microaggregates or soil remediation.
Bayly, Philip V.; Wilson, Kate S.
2014-01-01
The motion of flagella and cilia arises from the coordinated activity of dynein motor protein molecules arrayed along microtubule doublets that span the length of axoneme (the flagellar cytoskeleton). Dynein activity causes relative sliding between the doublets, which generates propulsive bending of the flagellum. The mechanism of dynein coordination remains incompletely understood, although it has been the focus of many studies, both theoretical and experimental. In one leading hypothesis, known as the geometric clutch (GC) model, local dynein activity is thought to be controlled by interdoublet separation. The GC model has been implemented as a numerical simulation in which the behavior of a discrete set of rigid links in viscous fluid, driven by active elements, was approximated using a simplified time-marching scheme. A continuum mechanical model and associated partial differential equations of the GC model have remained lacking. Such equations would provide insight into the underlying biophysics, enable mathematical analysis of the behavior, and facilitate rigorous comparison to other models. In this article, the equations of motion for the flagellum and its doublets are derived from mechanical equilibrium principles and simple constitutive models. These equations are analyzed to reveal mechanisms of wave propagation and instability in the GC model. With parameter values in the range expected for Chlamydomonas flagella, solutions to the fully nonlinear equations closely resemble observed waveforms. These results support the ability of the GC hypothesis to explain dynein coordination in flagella and provide a mathematical foundation for comparison to other leading models. PMID:25296329
Production of Entanglement Entropy by Decoherence
NASA Astrophysics Data System (ADS)
Merkli, M.; Berman, G. P.; Sayre, R. T.; Wang, X.; Nesterov, A. I.
We examine the dynamics of entanglement entropy of all parts in an open system consisting of a two-level dimer interacting with an environment of oscillators. The dimer-environment interaction is almost energy conserving. We find the precise link between decoherence and production of entanglement entropy. We show that not all environment oscillators carry significant entanglement entropy and we identify the oscillator frequency regions which contribute to the production of entanglement entropy. For energy conserving dimer-environment interactions the models are explicitly solvable and our results hold for all dimer-environment coupling strengths. We carry out a mathematically rigorous perturbation theory around the energy conserving situation in the presence of small non-energy conserving interactions.
Test Anxiety and the Curriculum: The Subject Matters.
ERIC Educational Resources Information Center
Everson, Howard T.; And Others
College students' self-reported test anxiety levels in English, mathematics, physical science, and social science were compared to develop empirical support for the claim that students, in general, are more anxious about tests in rigorous academic subjects than in the humanities and to understand the curriculum-related sources of anxiety. It was…
Useful Material Efficiency Green Metrics Problem Set Exercises for Lecture and Laboratory
ERIC Educational Resources Information Center
Andraos, John
2015-01-01
A series of pedagogical problem set exercises are posed that illustrate the principles behind material efficiency green metrics and their application in developing a deeper understanding of reaction and synthesis plan analysis and strategies to optimize them. Rigorous, yet simple, mathematical proofs are given for some of the fundamental concepts,…
The Art of Learning: A Guide to Outstanding North Carolina Arts in Education Programs.
ERIC Educational Resources Information Center
Herman, Miriam L.
The Arts in Education programs delineated in this guide complement the rigorous arts curriculum taught by arts specialists in North Carolina schools and enable students to experience the joy of the creative process while reinforcing learning in other curricula: language arts, mathematics, social studies, science, and physical education. Programs…
High Standards Help Struggling Students: New Evidence. Charts You Can Trust
ERIC Educational Resources Information Center
Clark, Constance; Cookson, Peter W., Jr.
2012-01-01
The Common Core State Standards, adopted by 46 states and the District of Columbia, promise to raise achievement in English and mathematics through rigorous standards that promote deeper learning. But while most policymakers, researchers, and educators have embraced these higher standards, some question the fairness of raising the academic bar on…
Improving Mathematical Problem Solving in Grades 4 through 8. IES Practice Guide. NCEE 2012-4055
ERIC Educational Resources Information Center
Woodward, John; Beckmann, Sybilla; Driscoll, Mark; Franke, Megan; Herzig, Patricia; Jitendra, Asha; Koedinger, Kenneth R.; Ogbuehi, Philip
2012-01-01
The Institute of Education Sciences (IES) publishes practice guides in education to bring the best available evidence and expertise to bear on current challenges in education. Authors of practice guides combine their expertise with the findings of rigorous research, when available, to develop specific recommendations for addressing these…
Shaping Social Work Science: What Should Quantitative Researchers Do?
ERIC Educational Resources Information Center
Guo, Shenyang
2015-01-01
Based on a review of economists' debates on mathematical economics, this article discusses a key issue for shaping the science of social work--research methodology. The article describes three important tasks quantitative researchers need to fulfill in order to enhance the scientific rigor of social work research. First, to test theories using…
Louis Guttman's Contributions to Classical Test Theory
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.; Zumbo, Bruno D.; Ross, Donald
2005-01-01
This article focuses on Louis Guttman's contributions to the classical theory of educational and psychological tests, one of the lesser known of his many contributions to quantitative methods in the social sciences. Guttman's work in this field provided a rigorous mathematical basis for ideas that, for many decades after Spearman's initial work,…
ERIC Educational Resources Information Center
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2010-01-01
Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate…
State College- and Career-Ready High School Graduation Requirements. Updated
ERIC Educational Resources Information Center
Achieve, Inc., 2013
2013-01-01
Research by Achieve, ACT, and others suggests that for high school graduates to be prepared for success in a wide range of postsecondary settings, they need to take four years of challenging mathematics--covering Advanced Algebra; Geometry; and data, probability, and statistics content--and four years of rigorous English aligned with college- and…
Mathematics Awareness through Technology, Teamwork, Engagement, and Rigor
ERIC Educational Resources Information Center
James, Laurie
2016-01-01
The purpose of this two-year observational study was to determine if the use of technology and intervention groups affected fourth-grade math scores. Specifically, the desire was to identify the percentage of students who met or exceeded grade-level standards on the state standardized test. This study indicated possible reasons that enhanced…
ERIC Educational Resources Information Center
McEvoy, Suzanne
2012-01-01
With the changing U.S. demographics, higher numbers of diverse, low-income, first-generation students are underprepared for the academic rigors of four-year institutions oftentimes requiring assistance, and remedial and/or developmental coursework in English and mathematics. Without intervention approaches these students are at high risk for…
ERIC Educational Resources Information Center
Ashley, Michael; Cooper, Katelyn M.; Cala, Jacqueline M.; Brownell, Sara E.
2017-01-01
Summer bridge programs are designed to help transition students into the college learning environment. Increasingly, bridge programs are being developed in science, technology, engineering, and mathematics (STEM) disciplines because of the rigorous content and lower student persistence in college STEM compared with other disciplines. However, to…
Visualizing, Rather than Deriving, Russell-Saunders Terms: A Classroom Activity with Quantum Numbers
ERIC Educational Resources Information Center
Coppo, Paolo
2016-01-01
A 1 h classroom activity is presented, aimed at consolidating the concepts of microstates and Russell-Saunders energy terms in transition metal atoms and coordination complexes. The unconventional approach, based on logic and intuition rather than rigorous mathematics, is designed to stimulate discussion and enhance familiarity with quantum…
Teaching the Concept of Breakdown Point in Simple Linear Regression.
ERIC Educational Resources Information Center
Chan, Wai-Sum
2001-01-01
Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
Marghetis, Tyler; Núñez, Rafael
2013-04-01
The canonical history of mathematics suggests that the late 19th-century "arithmetization" of calculus marked a shift away from spatial-dynamic intuitions, grounding concepts in static, rigorous definitions. Instead, we argue that mathematicians, both historically and currently, rely on dynamic conceptualizations of mathematical concepts like continuity, limits, and functions. In this article, we present two studies of the role of dynamic conceptual systems in expert proof. The first is an analysis of co-speech gesture produced by mathematics graduate students while proving a theorem, which reveals a reliance on dynamic conceptual resources. The second is a cognitive-historical case study of an incident in 19th-century mathematics that suggests a functional role for such dynamism in the reasoning of the renowned mathematician Augustin Cauchy. Taken together, these two studies indicate that essential concepts in calculus that have been defined entirely in abstract, static terms are nevertheless conceptualized dynamically, in both contemporary and historical practice. Copyright © 2013 Cognitive Science Society, Inc.
On the mathematical treatment of the Born-Oppenheimer approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr
2014-05-15
Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not fit exactly to the common usemore » of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper neither contains mathematical statements nor proofs. Instead, we try to make accessible mathematically rigourous results on the subject to researchers in Quantum Chemistry or Physics.« less
An Overview of Different Approaches for Battery Lifetime Prediction
NASA Astrophysics Data System (ADS)
Zhang, Peng; Liang, Jun; Zhang, Feng
2017-05-01
With the rapid development of renewable energy and the continuous improvement of the power supply reliability, battery energy storage technology has been wildly used in power system. Battery degradation is a nonnegligible issue when battery energy storage system participates in system design and operation strategies optimization. The health assessment and remaining cycle life estimation of battery gradually become a challenge and research hotspot in many engineering areas. In this paper, the battery capacity falling and internal resistance increase are presented on the basis of chemical reactions inside the battery. The general life prediction models are analysed from several aspects. The characteristics of them as well as their application scenarios are discussed in the survey. In addition, a novel weighted Ah ageing model with the introduction of the Ragone curve is proposed to provide a detailed understanding of the ageing processes. A rigorous proof of the mathematical theory about the proposed model is given in the paper.
Jitendra, Asha K; Petersen-Brown, Shawna; Lein, Amy E; Zaslofsky, Anne F; Kunkel, Amy K; Jung, Pyung-Gang; Egan, Andrea M
2015-01-01
This study examined the quality of the research base related to strategy instruction priming the underlying mathematical problem structure for students with learning disabilities and those at risk for mathematics difficulties. We evaluated the quality of methodological rigor of 18 group research studies using the criteria proposed by Gersten et al. and 10 single case design (SCD) research studies using criteria suggested by Horner et al. and the What Works Clearinghouse. Results indicated that 14 group design studies met the criteria for high-quality or acceptable research, whereas SCD studies did not meet the standards for an evidence-based practice. Based on these findings, strategy instruction priming the mathematics problem structure is considered an evidence-based practice using only group design methodological criteria. Implications for future research and for practice are discussed. © Hammill Institute on Disabilities 2013.
Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1993-01-01
A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
Mathematical description of complex chemical kinetics and application to CFD modeling codes
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1993-01-01
A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
Crystal Growth and Fluid Mechanics Problems in Directional Solidification
NASA Technical Reports Server (NTRS)
Tanveer, Saleh A.; Baker, Gregory R.; Foster, Michael R.
2001-01-01
Our work in directional solidification has been in the following areas: (1) Dynamics of dendrites including rigorous mathematical analysis of the resulting equations; (2) Examination of the near-structurally unstable features of the mathematically related Hele-Shaw dynamics; (3) Numerical studies of steady temperature distribution in a vertical Bridgman device; (4) Numerical study of transient effects in a vertical Bridgman device; (5) Asymptotic treatment of quasi-steady operation of a vertical Bridgman furnace for large Rayleigh numbers and small Biot number in 3D; and (6) Understanding of Mullins-Sererka transition in a Bridgman device with fluid dynamics is accounted for.
Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R
2017-10-23
The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged in the last decade between different fields of study, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results obtained and the quantification of its statistical reliability are lost. Despite its relevance, there is not a standard form for the in vitro endpoint tumor cell lines' assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. The analysis of all the specific problems associated with the diverse nature of the available TCLA used is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as the convenient approximation to test the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of all the different parametric criteria derived from the model, which enable the quantification of the dose-response drug-effects, are extensively discussed. Therefore, model and standard criteria for easily performing the comparisons between different compounds are established. The advantages include a simple application, provision of parametric estimations that characterize the response as standard criteria, economization of experimental effort and enabling rigorous comparisons among the effects of different compounds and experimental approaches. In all experimental data fitted, the calculated parameters were always statistically significant, the equations proved to be consistent and the correlation coefficient of determination was, in most of the cases, higher than 0.98.
Kinetics of biochemical sensing by single cells and populations of cells
NASA Astrophysics Data System (ADS)
Saakian, David B.
2017-10-01
We investigate the collective stationary sensing using N communicative cells, which involves surface receptors, diffusive signaling molecules, and cell-cell communication messengers. We restrict the scenarios to the signal-to-noise ratios (SNRs) for both strong communication and extrinsic noise only. We modified a previous model [Bialek and Setayeshgar, Proc. Natl. Acad. Sci. USA 102, 10040 (2005), 10.1073/pnas.0504321102] to eliminate the singularities in the fluctuation correlations by considering a uniform receptor distribution over the surface of each cell with a finite radius a . The modified model enables a simple and rigorous mathematical treatment of the collective sensing phenomenon. We then derive the scaling of the SNR for both juxtacrine and autocrine cases in all dimensions. For the optimal locations of the cells in the autocrine case, we find identical scaling for both two and three dimensions.
Adjoint equations and analysis of complex systems: Application to virus infection modelling
NASA Astrophysics Data System (ADS)
Marchuk, G. I.; Shutyaev, V.; Bocharov, G.
2005-12-01
Recent development of applied mathematics is characterized by ever increasing attempts to apply the modelling and computational approaches across various areas of the life sciences. The need for a rigorous analysis of the complex system dynamics in immunology has been recognized since more than three decades ago. The aim of the present paper is to draw attention to the method of adjoint equations. The methodology enables to obtain information about physical processes and examine the sensitivity of complex dynamical systems. This provides a basis for a better understanding of the causal relationships between the immune system's performance and its parameters and helps to improve the experimental design in the solution of applied problems. We show how the adjoint equations can be used to explain the changes in hepatitis B virus infection dynamics between individual patients.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H; O'Donnell, Cian; Sejnowski, Terrence J; O'Leary, Timothy
2016-01-01
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’ (Doyle and Kiebler, 2011). Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. These findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons. DOI: http://dx.doi.org/10.7554/eLife.20556.001 PMID:28034367
ERIC Educational Resources Information Center
Roschelle, Jeremy; Murphy, Robert; Feng, Mingyu; Bakia, Marianne
2017-01-01
In a rigorous evaluation of ASSISTments as an online homework support conducted in the state of Maine, SRI International reported that "the intervention significantly increased student scores on an end-of-the-year standardized mathematics assessment as compared with a control group that continued with existing homework practices."…
A Curricular-Sampling Approach to Progress Monitoring: Mathematics Concepts and Applications
ERIC Educational Resources Information Center
Fuchs, Lynn S.; Fuchs, Douglas; Zumeta, Rebecca O.
2008-01-01
Progress monitoring is an important component of effective instructional practice. Curriculum-based measurement (CBM) is a form of progress monitoring that has been the focus of rigorous research. Two approaches for formulating CBM systems exist. The first is to assess performance regularly on a task that serves as a global indicator of competence…
ERIC Educational Resources Information Center
HARDWICK, ARTHUR LEE
AT THIS WORKSHOP OF INDUSTRIAL REPRESENTATIVE AND TECHNICAL EDUCATORS, A TECHNICIAN WAS DEFINED AS ONE WITH BROAD-BASED MATHEMATICAL AND SCIENTIFIC TRAINING AND WITH COMPETENCE TO SUPPORT PROFESSIONAL SYSTEMS, ENGINEERING, AND OTHER SCIENTIFIC PERSONNEL. HE SHOULD RECEIVE A RIGOROUS, 2-YEAR, POST SECONDARY EDUCATION ESPECIALLY DESIGNED FOR HIS…
What Can Graph Theory Tell Us about Word Learning and Lexical Retrieval?
ERIC Educational Resources Information Center
Vitevitch, Michael S.
2008-01-01
Purpose: Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of…
Slow off the Mark: Elementary School Teachers and the Crisis in STEM Education
ERIC Educational Resources Information Center
Epstein, Diana; Miller, Raegen T.
2011-01-01
Prospective teachers can typically obtain a license to teach elementary school without taking a rigorous college-level STEM class such as calculus, statistics, or chemistry, and without demonstrating a solid grasp of mathematics knowledge, scientific knowledge, or the nature of scientific inquiry. This is not a recipe for ensuring students have…
ERIC Educational Resources Information Center
OECD Publishing, 2017
2017-01-01
What is important for citizens to know and be able to do? The OECD Programme for International Student Assessment (PISA) seeks to answer that question through the most comprehensive and rigorous international assessment of student knowledge and skills. The PISA 2015 Assessment and Analytical Framework presents the conceptual foundations of the…
High School Graduation Requirements in a Time of College and Career Readiness. CSAI Report
ERIC Educational Resources Information Center
Center on Standards and Assessments Implementation, 2016
2016-01-01
Ensuring that students graduate high school prepared for college and careers has become a national priority in the last decade. To support this goal, states have adopted rigorous college and career readiness (CCR) standards in English language arts (ELA) and mathematics. Additionally, states have begun to require students to pass assessments, in…
Using Teacher Evaluation Reform and Professional Development to Support Common Core Assessments
ERIC Educational Resources Information Center
Youngs, Peter
2013-01-01
The Common Core State Standards Initiative, in its aim to align diverse state curricula and improve educational outcomes, calls for K-12 teachers in the United States to engage all students in mathematical problem solving along with reading and writing complex text through the use of rigorous academic content. Until recently, most teacher…
From virtual clustering analysis to self-consistent clustering analysis: a mathematical study
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam
2018-03-01
In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), as well as provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on a key postulation of "once response similarly, always response similarly", clustering is performed in an offline stage by machine learning techniques (k-means and SOM), and facilitates substantial reduction of computational complexity in an online predictive stage. The clear mathematical setup allows for the first time a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously, and found to be of second order from numerical investigations. Furthermore, we propose to suitably enlarge the domain in VCA, such that the boundary terms may be neglected in the Lippmann-Schwinger equation, by virtue of the Saint-Venant's principle. In contrast, they were not obtained in the original SCA paper, and we discover these terms may well be responsible for the numerical dependency on the choice of reference material property. Since VCA enhances the accuracy by overcoming the modeling error, and reduce the numerical cost by avoiding an outer loop iteration for attaining the material property consistency in SCA, its efficiency is expected even higher than the recently proposed SCA algorithm.
NASA Astrophysics Data System (ADS)
Lee, H.
2016-12-01
Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.
Representations of the Extended Poincare Superalgebras in Four Dimensions
NASA Astrophysics Data System (ADS)
Griffis, John D.
Eugene Wigner used the Poincare group to induce representations from the fundamental internal space-time symmetries of (special) relativistic quantum particles. Wigner's students spent considerable amount of time translating passages of this paper into more detailed and accessible papers and books. In 1975, R. Haag et al. investigated the possible extensions of the symmetries of relativistic quantum particles. They showed that the only consistent (super)symmetric extensions to the standard model of physics are obtained by using super charges to generate the odd part of a Lie superalgebra whose even part is generated by the Poincare group; this theory has become known as supersymmetry. In this paper, R. Haag et al. used a notation called supermultiplets to give the dimension of a representation and its multiplicity; this notation is described mathematically in chapter 5 of this thesis. By 1980 S. Ferrara et al. began classifying the representations of these algebras for dimensions greater than four, and in 1986 Strathdee published considerable work listing some representations for the Poincare superalgebra in any finite dimension. This work has been continued to date. We found the work of S. Ferrara et al. to be essential to our understanding extended supersymmetries. However, this paper was written using imprecise language meant for physicists, so it was far from trivial to understand the mathematical interpretation of this work. In this thesis, we provide a "translation" of the previous results (along with some other literature on the Extended Poincare Superalgebras) into a rigorous mathematical setting, which makes the subject more accessible to a larger audience. Having a mathematical model allows us to give explicit results and detailed proofs. Further, this model allows us to see beyond just the physical interpretation and it allows investigation by a purely mathematically adept audience. Our work was motivated by a paper written in 2012 by M. Chaichian et al, which classified all of the unitary, irreducible representations of the extended Poincare superalgebra in three dimensions. We consider only the four dimensional case, which is of interest to physicists working on quantum supergravity models without cosmological constant, and we provide explicit branching rules for the invariant subgroups corresponding to the most physically relevant symmetries of the irreducible representations of the Extended Poincare Superalgebra in four dimensions. However, it is possible to further generalize this work into any finite dimension. Such work would classify all possible finitely extended supersymmetric models.
Stochastic Geometry and Quantum Gravity: Some Rigorous Results
NASA Astrophysics Data System (ADS)
Zessin, H.
The aim of these lectures is a short introduction into some recent developments in stochastic geometry which have one of its origins in simplicial gravity theory (see Regge Nuovo Cimento 19: 558-571, 1961). The aim is to define and construct rigorously point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest then is concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent representation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008. German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry Cambridge University Press, Cambridge, 1997) you find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807. Springer, Heidelberg, 2010). After an informal axiomatic introduction into the conceptual foundations of Regge's approach the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We terminate with the formulation of basic open problems. Proofs are given in detail only in a few cases. In general the main ideas are developed. Sufficiently complete references are given.
Improving the ideal and human observer consistency: a demonstration of principles
NASA Astrophysics Data System (ADS)
He, Xin
2017-03-01
In addition to being rigorous and realistic, the usefulness of the ideal observer computational tools may also depend on whether they serve the empirical purpose for which they are created, e.g. to identify desirable imaging systems to be used by human observers. In SPIE 10136-35, I have shown that the ideal and the human observers do not necessarily prefer the same system as the optimal or better one due to their different objectives in both hardware and software optimization. In this work, I attempt to identify a necessary but insufficient condition under which the human and the ideal observer may rank systems consistently. If corroborated, such a condition allows a numerical test on the ideal/human consistency without routine human observer studies. I reproduced data from Abbey et al. JOSA 2001 to verify the proposed condition (i.e., not a rigorous falsification study due to the lack of specificity in the proposed conjecture. A roadmap for more falsifiable conditions is proposed). Via this work, I would like to emphasize the reality of practical decision making in addition to the realism in mathematical modeling. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA.)
Bondarenko, Vladimir E; Cymbalyuk, Gennady S; Patel, Girish; Deweerth, Stephen P; Calabrese, Ronald L
2004-12-01
Oscillatory activity in the central nervous system is associated with various functions, like motor control, memory formation, binding, and attention. Quasiperiodic oscillations are rarely discussed in the neurophysiological literature yet they may play a role in the nervous system both during normal function and disease. Here we use a physical system and a model to explore scenarios for how quasiperiodic oscillations might arise in neuronal networks. An oscillatory system of two mutually inhibitory neuronal units is a ubiquitous network module found in nervous systems and is called a half-center oscillator. Previously we created a half-center oscillator of two identical oscillatory silicon (analog Very Large Scale Integration) neurons and developed a mathematical model describing its dynamics. In the mathematical model, we have shown that an in-phase limit cycle becomes unstable through a subcritical torus bifurcation. However, the existence of this torus bifurcation in experimental silicon two-neuron system was not rigorously demonstrated or investigated. Here we demonstrate the torus predicted by the model for the silicon implementation of a half-center oscillator using complex time series analysis, including bifurcation diagrams, mapping techniques, correlation functions, amplitude spectra, and correlation dimensions, and we investigate how the properties of the quasiperiodic oscillations depend on the strengths of coupling between the silicon neurons. The potential advantages and disadvantages of quasiperiodic oscillations (torus) for biological neural systems and artificial neural networks are discussed.
Designing Studies to Test Causal Questions About Early Math: The Development of Making Pre-K Count.
Mattera, Shira K; Morris, Pamela A; Jacob, Robin; Maier, Michelle; Rojas, Natalia
2017-01-01
A growing literature has demonstrated that early math skills are associated with later outcomes for children. This research has generated interest in improving children's early math competencies as a pathway to improved outcomes for children in elementary school. The Making Pre-K Count study was designed to test the effects of an early math intervention for preschoolers. Its design was unique in that, in addition to causally testing the effects of early math skills, it also allowed for the examination of a number of additional questions about scale-up, the influence of contextual factors and the counterfactual environment, the mechanism of long-term fade-out, and the role of measurement in early childhood intervention findings. This chapter outlines some of the design considerations and decisions put in place to create a rigorous test of the causal effects of early math skills that is also able to answer these questions in early childhood mathematics and intervention. The study serves as a potential model for how to advance science in the fields of preschool intervention and early mathematics. © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Selcuk, M. K.
1977-01-01
The usefulness of vee-trough concentrators in improving the efficiency and reducing the cost of collectors assembled from evacuated tube receivers was studied in the vee-trough/vacuum tube collector (VTVTC) project. The VTVTC was analyzed rigorously and various mathematical models were developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak and aluminized FEP Teflon. Tests were run at temperatures ranging from 95 to 180 C. Vee-trough collector efficiencies of 35 to 40% were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Predicted daily useful heat collection and efficiency values are presented for a year's duration of operation temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with a complete economic evaluation.
Deformation and instability of underthrusting lithospheric plates
NASA Technical Reports Server (NTRS)
Liu, H.
1972-01-01
Models of the underthrusting lithosphere are constructed for the calculation of displacement and deflection. First, a mathematical theory is developed that rigorously demonstrates the elastic instability in the decending lithosphere. The theory states that lithospheric thrust beneath island arcs becomes unstable and suffers deflection as the compression increases. Thus, in the neighborhood of the edges where the lithospheric plate plunges into the asthenosphere and mesosphere its shape will be contorted. Next, the lateral displacement is calculated, and it is shown that, before contortion, the plate will thicken and contract at different positions with the variation in thickness following a parabolic profile. Finally, the depth distribution of the intermediate and deep focus earthquakes is explained in terms of plate buckling and contortion.
Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely
2015-12-01
We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.
A stepped pressure profile model for internal transport barriers
NASA Astrophysics Data System (ADS)
Hole, Matthew; Hudson, Stuart; Dewar, Robert
2007-11-01
B ∇x et al We develop a multiple interface variational model, comprising multiple Taylor-relaxed plasma regions separated by ideal MHD barriers. The magnetic field in each region is Beltrami, = μ, and the pressure constant. Between these regions the pressure, field strength, and rotational transform may have step changes at the ideal barrier. A principle motivation is the development of a mathematically rigorous ideal MHD model to describe intrinsically 3D equilibria, with nonzero internal pressure, using robust KAM surfaces as the barriers. As each region is locally relaxed however, such a model may also yield reasons for existence of internal transport barriers (ITBs). Focusing on the latter, we build on Hole Nuc. Fus. 47, pp746-753, 2007, which recently studied the stability of a two-interface periodic-cylinder configuration. In this work, we perform a stability scan over pressure and for a two-interface configuration with no jump in , and compare the characteristics of stable equilibria to those of ITB's.
Change rates and prevalence of a dichotomous variable: simulations and applications.
Brinks, Ralph; Landwehr, Sandra
2015-01-01
A common modelling approach in public health and epidemiology divides the population under study into compartments containing persons that share the same status. Here we consider a three-state model with the compartments: A, B and Dead. States A and B may be the states of any dichotomous variable, for example, Healthy and Ill, respectively. The transitions between the states are described by change rates, which depend on calendar time and on age. So far, a rigorous mathematical calculation of the prevalence of property B has been difficult, which has limited the use of the model in epidemiology and public health. We develop a partial differential equation (PDE) that simplifies the use of the three-state model. To demonstrate the validity of the PDE, it is applied to two simulation studies, one about a hypothetical chronic disease and one about dementia in Germany. In two further applications, the PDE may provide insights into smoking behaviour of males in Germany and the knowledge about the ovulatory cycle in Egyptian women.
NASA Astrophysics Data System (ADS)
Solie, D. J.; Spencer, V.
2009-12-01
Bush Physics for the 21st Century brings physics that is culturally connected, engaging to modern youth, and mathematically rigorous, to high school and college students in the remote and often road-less villages of Alaska. The primary goal of the course is to prepare rural (predominantly Alaska Native) students for success in university science and engineering degree programs and ultimately STEM careers. The course is currently delivered via video conference and web based electronic blackboard tailored to the needs of remote students. Practical, culturally relevant kinetic examples from traditional and modern northern life are used to engage students, and a rigorous and mathematical focus is stressed to strengthen problem solving skills. Simple hands-on-lab experiments are delivered to the students with the exercises completed on-line. In addition, students are teamed and required to perform a much more involved experimental study with the results presented by teams at the conclusion of the course. Connecting abstract mathematical symbols and equations to real physical objects and problems is one of the most difficult things to master in physics. Greek symbols are traditionally used in equations, however, to strengthen the visual/conceptual connection with symbol and encourage an indigenous connection to the concepts we have introduced Inuktitut symbols to complement the traditional Greek symbols. Results and observations from the first two pilot semesters (spring 2008 and 2009) will be presented.
NASA Astrophysics Data System (ADS)
Solie, D. J.; Spencer, V. K.
2010-12-01
Bush Physics for the 21st Century brings physics that is engaging to modern youth, and mathematically rigorous, to high school and college students in the remote and often road-less villages of Alaska where the opportunity to take a physics course has been nearly nonexistent. The primary goal of the course is to prepare rural (predominantly Alaska Native) students for success in university science and engineering degree programs and ultimately STEM careers. The course is delivered via video conference and web based electronic blackboard tailored to the needs of remote students. Kinetic, practical and culturally relevant place-based examples from traditional and modern northern life are used to engage students, and a rigorous and mathematical focus is stressed to strengthen problem solving skills. Simple hands-on-lab experiment kits are shipped to the students. In addition students conduct a Collaborative Research Experiment where they coordinate times of sun angle measurements with teams in other villages to determine their latitude and longitude as well as an estimate of the circumference of the earth. Connecting abstract mathematical symbols and equations to real physical objects and problems is one of the most difficult things to master in physics. We introduce Inuktitut symbols to complement the traditional Greek symbols in equations to strengthen the visual/conceptual connection with symbol and encourage an indigenous connection to the physical concepts. Results and observations from the first three pilot semesters (spring 2008, 2009 and 2010) will be presented.
A Mathematical Account of the NEGF Formalism
NASA Astrophysics Data System (ADS)
Cornean, Horia D.; Moldoveanu, Valeriu; Pillet, Claude-Alain
2018-02-01
The main goal of this paper is to put on solid mathematical grounds the so-called Non-Equilibrium Green's Function (NEGF) transport formalism for open systems. In particular, we derive the Jauho-Meir-Wingreen formula for the time-dependent current through an interacting sample coupled to non-interacting leads. Our proof is non-perturbative and uses neither complex-time Keldysh contours, nor Langreth rules of 'analytic continuation'. We also discuss other technical identities (Langreth, Keldysh) involving various many body Green's functions. Finally, we study the Dyson equation for the advanced/retarded interacting Green's function and we rigorously construct its (irreducible) self-energy, using the theory of Volterra operators.
NASA Astrophysics Data System (ADS)
Riendeau, Diane
2012-09-01
To date, this column has presented videos to show in class, Don Mathieson from Tulsa Community College suggested that YouTube could be used in another fashion. In Don's experience, his students are not always prepared for the mathematic rigor of his course. Even at the high school level, math can be a barrier for physics students. Walid Shihabi, a colleague of Don's, decided to compile a list of YouTube videos that his students could watch to relearn basic mathematics. I thought this sounded like a fantastic idea and a great service to the students. Walid graciously agreed to share his list and I have reproduced a large portion of it below.
Chain representations of Open Quantum Systems and Lieb-Robinson like bounds for the dynamics
NASA Astrophysics Data System (ADS)
Woods, Mischa
2013-03-01
This talk is concerned with the mapping of the Hamiltonian of open quantum systems onto chain representations, which forms the basis for a rigorous theory of the interaction of a system with its environment. This mapping progresses as an interaction which gives rise to a sequence of residual spectral densities of the system. The rigorous mathematical properties of this mapping have been unknown so far. Here we develop the theory of secondary measures to derive an analytic, expression for the sequence solely in terms of the initial measure and its associated orthogonal polynomials of the first and second kind. These mappings can be thought of as taking a highly nonlocal Hamiltonian to a local Hamiltonian. In the latter, a Lieb-Robinson like bound for the dynamics of the open quantum system makes sense. We develop analytical bounds on the error to observables of the system as a function of time when the semi-infinite chain in truncated at some finite length. The fact that this is possible shows that there is a finite ``Speed of sound'' in these chain representations. This has many implications of the simulatability of open quantum systems of this type and demonstrates that a truncated chain can faithfully reproduce the dynamics at shorter times. These results make a significant and mathematically rigorous contribution to the understanding of the theory of open quantum systems; and pave the way towards the efficient simulation of these systems, which within the standard methods, is often an intractable problem. EPSRC CDT in Controlled Quantum Dynamics, EU STREP project and Alexander von Humboldt Foundation
ERIC Educational Resources Information Center
Eisenhart, Margaret; Weis, Lois; Allen, Carrie D.; Cipollone, Kristin; Stich, Amy; Dominguez, Rachel
2015-01-01
In response to numerous calls for more rigorous STEM (science, technology, engineering, and mathematics) education to improve US competitiveness and the job prospects of next-generation workers, especially those from low-income and minority groups, a growing number of schools emphasizing STEM have been established in the US over the past decade.…
ERIC Educational Resources Information Center
van der Scheer, Emmelien A.; Visscher, Adrie J.
2018-01-01
Data-based decision making (DBDM) is an important element of educational policy in many countries, as it is assumed that student achievement will improve if teachers worked in a data-based way. However, studies that evaluate rigorously the effects of DBDM on student achievement are scarce. In this study, the effects of an intensive…
ERIC Educational Resources Information Center
Randel, Bruce; Beesley, Andrea D.; Apthorp, Helen; Clark, Tedra F.; Wang, Xin; Cicchinelli, Louis F.; Williams, Jean M.
2011-01-01
This study was conducted by the Central Region Educational Laboratory (REL Central) administered by Mid-continent Research for Education and Learning to provide educators and policymakers with rigorous evidence about the potential of Classroom Assessment for Student Learning (CASL) to improve student achievement. CASL is a widely used professional…
How PARCC's False Rigor Stunts the Academic Growth of All Students. White Paper No. 135
ERIC Educational Resources Information Center
McQuillan, Mark; Phelps, Richard P.; Stotsky, Sandra
2015-01-01
In July 2010, the Massachusetts Board of Elementary and Secondary Education (BESE) voted to adopt Common Core's standards in English language arts (ELA) and mathematics in place of the state's own standards in these two subjects. The vote was based largely on recommendations by Commissioner of Education Mitchell Chester and then Secretary of…
ERIC Educational Resources Information Center
Courtade, Ginevra R.; Shipman, Stacy D.; Williams, Rachel
2017-01-01
SPLASH is a 3-year professional development program designed to work with classroom teachers of students with moderate and severe disabilities. The program targets new teachers and employs methods aimed at supporting rural classrooms. The training content focuses on evidence-based practices in English language arts, mathematics, and science, as…
ERIC Educational Resources Information Center
Stoneberg, Bert D.
2015-01-01
The National Center of Education Statistics conducted a mapping study that equated the percentage proficient or above on each state's NCLB reading and mathematics tests in grades 4 and 8 to the NAEP scale. Each "NAEP equivalent score" was labeled according to NAEP's achievement levels and used to compare state proficiency standards and…
ERIC Educational Resources Information Center
Amador-Lankster, Clara
2018-01-01
The purpose of this article is to discuss a Fulbright Evaluation Framework and to analyze findings resulting from implementation of two contextualized measures designed as LEARNING BY DOING in response to achievement expectations from the National Education Ministry in Colombia in three areas. The goal of the Fulbright funded project was to…
Mathematics Education and the Objectivist Programme in HPS
NASA Astrophysics Data System (ADS)
Glas, Eduard
2013-06-01
Using history of mathematics for studying concepts, methods, problems and other internal features of the discipline may give rise to a certain tension between descriptive adequacy and educational demands. Other than historians, educators are concerned with mathematics as a normatively defined discipline. Teaching cannot but be based on a pre-understanding of what mathematics `is' or, in other words, on a normative (methodological, philosophical) view of the identity or nature of the discipline. Educators are primarily concerned with developments at the level of objective mathematical knowledge, that is: with the relations between successive theories, problems and proposed solutions—relations which are independent of whatever has been the role of personal or collective beliefs, convictions, traditions and other historical circumstances. Though not exactly `historical' in the usual sense, I contend that this `objectivist' approach does represent one among other entirely legitimate and valuable approaches to the historical development of mathematics. Its retrospective importance to current practitioners and students is illustrated by a reconstruction of the development of Eudoxus's theory of proportionality in response to the problem of irrationality, and the way in which Dedekind some two millennia later almost literally used this ancient theory for the rigorous introduction of irrational numbers and hence of the real number continuum.
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
NASA Astrophysics Data System (ADS)
Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
n-D shape/texture optimal synthetic description and modeling by GEOGINE
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco F.
2004-12-01
GEOGINE(GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for multidimensional shape/texture optimal synthetic description and learning, is presented. Usually elementary geometric shape robust characterization, subjected to geometric transformation, on a rigorous mathematical level is a key problem in many computer applications in different interest areas. The past four decades have seen solutions almost based on the use of n-Dimensional Moment and Fourier descriptor invariants. The present paper introduces a new approach for automatic model generation based on n -Dimensional Tensor Invariants as formal dictionary. An ontological model is the kernel used for specifying ontologies so that how close an ontology can be from the real world depends on the possibilities offered by the ontological model. By this approach even chromatic information content can be easily and reliably decoupled from target geometric information and computed into robus colour shape parameter attributes. Main GEOGINEoperational advantages over previous approaches are: 1) Automated Model Generation, 2) Invariant Minimal Complete Set for computational efficiency, 3) Arbitrary Model Precision for robust object description.
Voit, Eberhard O
2009-01-01
Modern advances in molecular biology have produced enormous amounts of data characterizing physiological and disease states in cells and organisms. While bioinformatics has facilitated the organizing and mining of these data, it is the task of systems biology to merge the available information into dynamic, explanatory and predictive models. This article takes a step into this direction. It proposes a conceptual approach toward formalizing health and disease and illustrates it in the context of inflammation and preconditioning. Instead of defining health and disease states, the emphasis is on simplexes in a high-dimensional biomarker space. These simplexes are bounded by physiological constraints and permit the quantitative characterization of personalized health trajectories, health risk profiles that change with age, and the efficacy of different treatment options. The article mainly focuses on concepts but also briefly describes how the proposed concepts might be formulated rigorously within a mathematical framework.
Overarching framework for data-based modelling
NASA Astrophysics Data System (ADS)
Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco
2014-02-01
One of the main modelling paradigms for complex physical systems are networks. When estimating the network structure from measured signals, typically several assumptions such as stationarity are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understand and diagnose numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.
Corruption: Taking into account the psychological mimicry of officials
NASA Astrophysics Data System (ADS)
Kolesin, Igor; Malafeyev, Oleg; Andreeva, Mariia; Ivanukovich, Georgiy
2017-07-01
A mathematical model of corruption with regard to psychological mimicry in the administrative apparatus with three forms of corruption is constructed. It is assumed that the change of officials forms of corruption is due to situational factors, and anti-corruption laws imply the change of the dominant form. Form's changing is modeled by the system of four differential equations (including groups of corrupt officials), describing the number of groups. The speed of the transition from group to group is expressed through the frequency of meetings. The controlling influence is expressed through the force of anticorruption laws. Two cases are discussed: strictly constant and variable (depending on the scope of one or another form). The equilibrium states that allow to specify the dominant form and investigate its stability, depending on the parameters of the psychological mimicry and rigor of anti-corruption laws are found and discussed.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
A Rigorous Solution for Finite-State Inflow throughout the Flowfield
NASA Astrophysics Data System (ADS)
Fei, Zhongyang
In this research, the Hseih/Duffy model is extended to all three velocity components of inflow across the rotor disk in a mathematically rigorous way so that it can be used to calculate the inflow below the rotor disk plane. This establishes a complete dynamic inflow model for the entire flow field with finite state method. The derivation is for the case of general skewed angle. The cost of the new method is that one needs to compute the co-states of the inflow equations in the upper hemisphere along with the normal states. Numerical comparisons with exact solutions for the z-component of flow in axial and skewed angle flow demonstrate excellent correlation with closed-form solutions. The simulations also illustrate that the model is valid at both the frequency domain and the time domain. Meanwhile, in order to accelerate the convergence, an optimization of even terms is used to minimize the error in the axial component of the induced velocity in the on and on/off disk region. A novel method for calculating associate Legendre function of the second kind is also developed to solve the problem of divergence of Q¯mn (ieta) for large eta with the iterative method. An application of the new model is also conducted to compute inflow in the wake of a rotor with a finite number of blades. The velocities are plotted at different distances from the rotor disk and are compared with the Glauert prediction for axial flow and wake swirl. In the finite-state model, the angular momentum does not jump instantaneously across the disk, but it does transition rapidly across the disk to correct Glauert value.
Proteomics research to discover markers: what can we learn from Netflix?
Ransohoff, David F
2010-02-01
Research in the field of proteomics to discover markers for detection of cancer has produced disappointing results, with few markers gaining US Food and Drug Administration approval, and few claims borne out when subsequently tested in rigorous studies. What is the role of better mathematical or statistical analysis in improving the situation? This article examines whether a recent successful Netflix-sponsored competition using mathematical analysis to develop a prediction model for movie ratings of individual subscribers can serve to improve studies of markers in the field of proteomics. Netflix developed a database of movie preferences of individual subscribers using a longitudinal cohort research design. Groups of researchers then competed to develop better ways to analyze the data. Against this background, the strengths and weaknesses of research design are reviewed, contrasting the Netflix design with that of studies of biomarkers to detect cancer. Such biomarker studies generally have less-strong design, lower numbers of outcomes, and greater difficulty in even just measuring predictors and outcomes, so the fundamental data that will be used in mathematical analysis tend to be much weaker than in other kinds of research. If the fundamental data that will be analyzed are not strong, then better analytic methods have limited use in improving the situation. Recognition of this situation is an important first step toward improving the quality of clinical research about markers to detect cancer.
California and the "Common Core": Will There Be a New Debate about K-12 Standards?
ERIC Educational Resources Information Center
EdSource, 2010
2010-01-01
A growing chorus of state and federal policymakers, large foundations, and business leaders across the country are calling for states to adopt a common, rigorous body of college- and career-ready skills and knowledge in English and mathematics that all K-12 students will be expected to master by the time they graduate. This report looks at the…
ERIC Educational Resources Information Center
Kushman, Jim; Hanita, Makoto; Raphael, Jacqueline
2011-01-01
Students entering high school face many new academic challenges. One of the most important is their ability to read and understand more complex text in literature, mathematics, science, and social studies courses as they navigate through a rigorous high school curriculum. The Regional Educational Laboratory (REL) Northwest conducted a study to…
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-06-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.
Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...
2010-01-01
Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In themore » multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.« less
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-01-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290
Modelling malaria control by introduction of larvivorous fish.
Lou, Yijun; Zhao, Xiao-Qiang
2011-10-01
Malaria creates serious health and economic problems which call for integrated management strategies to disrupt interactions among mosquitoes, the parasite and humans. In order to reduce the intensity of malaria transmission, malaria vector control may be implemented to protect individuals against infective mosquito bites. As a sustainable larval control method, the use of larvivorous fish is promoted in some circumstances. To evaluate the potential impacts of this biological control measure on malaria transmission, we propose and investigate a mathematical model describing the linked dynamics between the host-vector interaction and the predator-prey interaction. The model, which consists of five ordinary differential equations, is rigorously analysed via theories and methods of dynamical systems. We derive four biologically plausible and insightful quantities (reproduction numbers) that completely determine the community composition. Our results suggest that the introduction of larvivorous fish can, in principle, have important consequences for malaria dynamics, but also indicate that this would require strong predators on larval mosquitoes. Integrated strategies of malaria control are analysed to demonstrate the biological application of our developed theory.
The sympathy of two pendulum clocks: beyond Huygens' observations.
Peña Ramirez, Jonatan; Olvera, Luis Alberto; Nijmeijer, Henk; Alvarez, Joaquin
2016-03-29
This paper introduces a modern version of the classical Huygens' experiment on synchronization of pendulum clocks. The version presented here consists of two monumental pendulum clocks--ad hoc designed and fabricated--which are coupled through a wooden structure. It is demonstrated that the coupled clocks exhibit 'sympathetic' motion, i.e. the pendula of the clocks oscillate in consonance and in the same direction. Interestingly, when the clocks are synchronized, the common oscillation frequency decreases, i.e. the clocks become slow and inaccurate. In order to rigorously explain these findings, a mathematical model for the coupled clocks is obtained by using well-established physical and mechanical laws and likewise, a theoretical analysis is conducted. Ultimately, the sympathy of two monumental pendulum clocks, interacting via a flexible coupling structure, is experimentally, numerically, and analytically demonstrated.
NASA Astrophysics Data System (ADS)
Lachieze-Rey, Marc
This book delivers a quantitative account of the science of cosmology, designed for a non-specialist audience. The basic principles are outlined using simple maths and physics, while still providing rigorous models of the Universe. It offers an ideal introduction to the key ideas in cosmology, without going into technical details. The approach used is based on the fundamental ideas of general relativity such as the spacetime interval, comoving coordinates, and spacetime curvature. It provides an up-to-date and thoughtful discussion of the big bang, and the crucial questions of structure and galaxy formation. Questions of method and philosophical approaches in cosmology are also briefly discussed. Advanced undergraduates in either physics or mathematics would benefit greatly from use either as a course text or as a supplementary guide to cosmology courses.
From Faddeev-Kulish to LSZ. Towards a non-perturbative description of colliding electrons
NASA Astrophysics Data System (ADS)
Dybalski, Wojciech
2017-12-01
In a low energy approximation of the massless Yukawa theory (Nelson model) we derive a Faddeev-Kulish type formula for the scattering matrix of N electrons and reformulate it in LSZ terms. To this end, we perform a decomposition of the infrared finite Dollard modifier into clouds of real and virtual photons, whose infrared divergencies mutually cancel. We point out that in the original work of Faddeev and Kulish the clouds of real photons are omitted, and consequently their wave-operators are ill-defined on the Fock space of free electrons. To support our observations, we compare our final LSZ expression for N = 1 with a rigorous non-perturbative construction due to Pizzo. While our discussion contains some heuristic steps, they can be formulated as clear-cut mathematical conjectures.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
Quantum Sheaf Cohomology on Grassmannians
NASA Astrophysics Data System (ADS)
Guo, Jirui; Lu, Zhentao; Sharpe, Eric
2017-05-01
In this paper we study the quantum sheaf cohomology of Grassmannians with deformations of the tangent bundle. Quantum sheaf cohomology is a (0,2) deformation of the ordinary quantum cohomology ring, realized as the OPE ring in A/2-twisted theories. Quantum sheaf cohomology has previously been computed for abelian gauged linear sigma models (GLSMs); here, we study (0,2) deformations of nonabelian GLSMs, for which previous methods have been intractable. Combined with the classical result, the quantum ring structure is derived from the one-loop effective potential. We also utilize recent advances in supersymmetric localization to compute A/2 correlation functions and check the general result in examples. In this paper we focus on physics derivations and examples; in a companion paper, we will provide a mathematically rigorous derivation of the classical sheaf cohomology ring.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops.
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated to the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators, by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of initial conditions that justify the infinite number of periodic orbits exhibited by these models, is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
Modeling and Analysis of the Reverse Water Gas Shift Process for In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Whitlow, Jonathan E.
2000-01-01
This report focuses on the development of mathematical models and simulation tools developed for the Reverse Water Gas Shift (RWGS) process. This process is a candidate technology for oxygen production on Mars under the In-Situ Propellant Production (ISPP) project. An analysis of the RWGS process was performed using a material balance for the system. The material balance is very complex due to the downstream separations and subsequent recycle inherent with the process. A numerical simulation was developed for the RWGS process to provide a tool for analysis and optimization of experimental hardware, which will be constructed later this year at Kennedy Space Center (KSC). Attempts to solve the material balance for the system, which can be defined by 27 nonlinear equations, initially failed. A convergence scheme was developed which led to successful solution of the material balance, however the simplified equations used for the gas separation membrane were found insufficient. Additional more rigorous models were successfully developed and solved for the membrane separation. Sample results from these models are included in this report, with recommendations for experimental work needed for model validation.
Local models of astrophysical discs
NASA Astrophysics Data System (ADS)
Latter, Henrik N.; Papaloizou, John
2017-12-01
Local models of gaseous accretion discs have been successfully employed for decades to describe an assortment of small-scale phenomena, from instabilities and turbulence, to dust dynamics and planet formation. For the most part, they have been derived in a physically motivated but essentially ad hoc fashion, with some of the mathematical assumptions never made explicit nor checked for consistency. This approach is susceptible to error, and it is easy to derive local models that support spurious instabilities or fail to conserve key quantities. In this paper we present rigorous derivations, based on an asympototic ordering, and formulate a hierarchy of local models (incompressible, Boussinesq and compressible), making clear which is best suited for a particular flow or phenomenon, while spelling out explicitly the assumptions and approximations of each. We also discuss the merits of the anelastic approximation, emphasizing that anelastic systems struggle to conserve energy unless strong restrictions are imposed on the flow. The problems encountered by the anelastic approximation are exacerbated by the disc's differential rotation, but also attend non-rotating systems such as stellar interiors. We conclude with a defence of local models and their continued utility in astrophysical research.
Modeling Pilot State in Next Generation Aircraft Alert Systems
NASA Technical Reports Server (NTRS)
Carlin, Alan S.; Alexander, Amy L.; Schurr, Nathan
2011-01-01
The Next Generation Air Transportation System will introduce new, advanced sensor technologies into the cockpit that must convey a large number of potentially complex alerts. Our work focuses on the challenges associated with prioritizing aircraft sensor alerts in a quick and efficient manner, essentially determining when and how to alert the pilot This "alert decision" becomes very difficult in NextGen due to the following challenges: 1) the increasing number of potential hazards, 2) the uncertainty associated with the state of potential hazards as well as pilot slate , and 3) the limited time to make safely-critical decisions. In this paper, we focus on pilot state and present a model for anticipating duration and quality of pilot behavior, for use in a larger system which issues aircraft alerts. We estimate pilot workload, which we model as being dependent on factors including mental effort, task demands. and task performance. We perform a mathematically rigorous analysis of the model and resulting alerting plans. We simulate the model in software and present simulated results with respect to manipulation of the pilot measures.
A Renormalisation Group Method. V. A Single Renormalisation Group Step
NASA Astrophysics Data System (ADS)
Brydges, David C.; Slade, Gordon
2015-05-01
This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional -component model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.
ERIC Educational Resources Information Center
Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo
2012-01-01
A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
Statistical hydrodynamics and related problems in spaces of probability measures
NASA Astrophysics Data System (ADS)
Dostoglou, Stamatios
2017-11-01
A rigorous theory of statistical solutions of the Navier-Stokes equations, suitable for exploring Kolmogorov's ideas, has been developed by M.I. Vishik and A.V. Fursikov, culminating in their monograph "Mathematical problems of Statistical Hydromechanics." We review some progress made in recent years following this approach, with emphasis on problems concerning the correlation of velocities and corresponding questions in the space of probability measures on Hilbert spaces.
ACM TOMS replicated computational results initiative
Heroux, Michael Allen
2015-06-03
In this study, the scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as The ACM Transactions on Mathematical Software (TOMS) use this process for rigorously promoting the clarity and completeness of content, and citation of prior work. At the same time, it is unusual to independently confirm computational results.
NASA Technical Reports Server (NTRS)
Selcuk, M. K.
1979-01-01
The Vee-Trough/Vacuum Tube Collector (VTVTC) aimed to improve the efficiency and reduce the cost of collectors assembled from evacuated tube receivers. The VTVTC was analyzed rigorously and a mathematical model was developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak and aluminized GEB Teflon. Tests were run at temperatures ranging from 95 to 180 C during the months of April, May, June, July and August 1977. Vee-trough collector efficiencies of 35-40 per cent were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Test data covering a complete day are presented for selected dates throughout the test season. Predicted daily useful heat collection and efficiency values are presented for a year's duration at operation temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with an economic evaluation.
NASA Astrophysics Data System (ADS)
Ipsen, Andreas; Ebbels, Timothy M. D.
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
Psychoacoustic entropy theory and its implications for performance practice
NASA Astrophysics Data System (ADS)
Strohman, Gregory J.
This dissertation attempts to motivate, derive and imply potential uses for a generalized perceptual theory of musical harmony called psychoacoustic entropy theory. This theory treats the human auditory system as a physical system which takes acoustic measurements. As a result, the human auditory system is subject to all the appropriate uncertainties and limitations of other physical measurement systems. This is the theoretic basis for defining psychoacoustic entropy. Psychoacoustic entropy is a numerical quantity which indexes the degree to which the human auditory system perceives instantaneous disorder within a sound pressure wave. Chapter one explains the importance of harmonic analysis as a tool for performance practice. It also outlines the critical limitations for many of the most influential historical approaches to modeling harmonic stability, particularly when compared to available scientific research in psychoacoustics. Rather than analyze a musical excerpt, psychoacoustic entropy is calculated directly from sound pressure waves themselves. This frames psychoacoustic entropy theory in the most general possible terms as a theory of musical harmony, enabling it to be invoked for any perceivable sound. Chapter two provides and examines many widely accepted mathematical models of the acoustics and psychoacoustics of these sound pressure waves. Chapter three introduces entropy as a precise way of measuring perceived uncertainty in sound pressure waves. Entropy is used, in combination with the acoustic and psychoacoustic models introduced in chapter two, to motivate the mathematical formulation of psychoacoustic entropy theory. Chapter four shows how to use psychoacoustic entropy theory to analyze the certain types of musical harmonies, while chapter five applies the analytical tools developed in chapter four to two short musical excerpts to influence their interpretation. Almost every form of harmonic analysis invokes some degree of mathematical reasoning. However, the limited scope of most harmonic systems used for Western common practice music greatly simplifies the necessary level of mathematical detail. Psychoacoustic entropy theory requires a greater deal of mathematical complexity due to its sheer scope as a generalized theory of musical harmony. Fortunately, under specific assumptions the theory can take on vastly simpler forms. Psychoacoustic entropy theory appears to be highly compatible with the latest scientific research in psychoacoustics. However, the theory itself should be regarded as a hypothesis and this dissertation an experiment in progress. The evaluation of psychoacoustic entropy theory as a scientific theory of human sonic perception must wait for more rigorous future research.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
Socorro, Fabiola; Rodríguez de Rivera, Pedro Jesús; Rodríguez de Rivera, Miriam
2017-01-01
The accuracy of the direct and local measurements of the heat power dissipated by the surface of the human body, using a calorimetry minisensor, is directly related to the calibration rigor of the sensor and the correct interpretation of the experimental results. For this, it is necessary to know the characteristics of the body’s local heat dissipation. When the sensor is placed on the surface of the human body, the body reacts until a steady state is reached. We propose a mathematical model that represents the rate of heat flow at a given location on the surface of a human body by the sum of a series of exponentials: W(t) = A0 + ∑Aiexp(−t/τi). In this way, transient and steady states of heat dissipation can be interpreted. This hypothesis has been tested by simulating the operation of the sensor. At the steady state, the power detected in the measurement area (4 cm2) varies depending on the sensor’s thermostat temperature, as well as the physical state of the subject. For instance, for a thermostat temperature of 24 °C, this power can vary between 100–250 mW in a healthy adult. In the transient state, two exponentials are sufficient to represent this dissipation, with 3 and 70 s being the mean values of its time constants. PMID:29182567
Socorro, Fabiola; Rodríguez de Rivera, Pedro Jesús; Rodríguez de Rivera, Miriam; Rodríguez de Rivera, Manuel
2017-11-28
The accuracy of the direct and local measurements of the heat power dissipated by the surface of the human body, using a calorimetry minisensor, is directly related to the calibration rigor of the sensor and the correct interpretation of the experimental results. For this, it is necessary to know the characteristics of the body's local heat dissipation. When the sensor is placed on the surface of the human body, the body reacts until a steady state is reached. We propose a mathematical model that represents the rate of heat flow at a given location on the surface of a human body by the sum of a series of exponentials: W ( t ) = A ₀ + ∑A i exp( -t / τ i ). In this way, transient and steady states of heat dissipation can be interpreted. This hypothesis has been tested by simulating the operation of the sensor. At the steady state, the power detected in the measurement area (4 cm²) varies depending on the sensor's thermostat temperature, as well as the physical state of the subject. For instance, for a thermostat temperature of 24 °C, this power can vary between 100-250 mW in a healthy adult. In the transient state, two exponentials are sufficient to represent this dissipation, with 3 and 70 s being the mean values of its time constants.
Interacting partially directed self-avoiding walk: a probabilistic perspective
NASA Astrophysics Data System (ADS)
Carmona, Philippe; Nguyen, Gia Bao; Pétrélis, Nicolas; Torri, Niccolò
2018-04-01
We review some recent results obtained in the framework of the 2D interacting self-avoiding walk (ISAW). After a brief presentation of the rigorous results that have been obtained so far for ISAW we focus on the interacting partially directed self-avoiding walk (IPDSAW), a model introduced in Zwanzig and Lauritzen (1968 J. Chem. Phys. 48 3351) to decrease the mathematical complexity of ISAW. In the first part of the paper, we discuss how a new probabilistic approach based on a random walk representation (see Nguyen and Pétrélis (2013 J. Stat. Phys. 151 1099–120)) allowed for a sharp determination of the asymptotics of the free energy close to criticality (see Carmona et al (2016 Ann. Probab. 44 3234–90)). Some scaling limits of IPDSAW were conjectured in the physics literature (see e.g. Brak et al (1993 Phys. Rev. E 48 2386–96)). We discuss here the fact that all limits are now proven rigorously, i.e. for the extended regime in Carmona and Pétrélis (2016 Electron. J. Probab. 21 1–52), for the collapsed regime in Carmona et al (2016 Ann. Probab. 44 3234–90) and at criticality in Carmona and Pétrélis (2017b arxiv:1709.06448). The second part of the paper starts with the description of four open questions related to physically relevant extensions of IPDSAW. Among such extensions is the interacting prudent self-avoiding walk (IPSAW) whose configurations are those of the 2D prudent walk. We discuss the main results obtained in Pétrélis and Torri (2016 Ann. Inst. Henri Poincaré D) about IPSAW and in particular the fact that its collapse transition is proven to exist rigorously.
Butler, Troy; Graham, L.; Estep, D.; ...
2015-02-03
The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in amore » shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.« less
Statistical Analysis of Protein Ensembles
NASA Astrophysics Data System (ADS)
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data is piling up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially that the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configuration using the Navier-Stokes equation. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
A Formal Methods Approach to the Analysis of Mode Confusion
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.
2004-01-01
The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness , mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper will explore how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase our confidence in the safety of the implementation. The paper is based upon interim results from a new project involving NASA Langley and Rockwell Collins in applying formal methods to a realistic business jet Flight Guidance System (FGS).
Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M
2010-12-01
The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pH(u)) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem and measurements were made for pre rigor glycogen concentration and pH(u). Spectral and measured data were combined to develop models to quantify glycogen and predict the pH(u) of pre rigor LD. NIR spectra and pre rigor predicted values obtained from quantitative models were shown to be poorly correlated against glycogen and pH(u) (r(2)=0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to their pH(u) were able to correctly categorize 42% of high pH(u) samples. Optimum qualitative and quantitative models derived from NIR spectra found low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd.. All rights reserved.
Structure, function, and behaviour of computational models in systems biology
2013-01-01
Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research. PMID:23721297
ERIC Educational Resources Information Center
Neri, Rebecca; Lozano, Maritza; Chang, Sandy; Herman, Joan
2016-01-01
New college and career ready standards (CCRS) have established more rigorous expectations of learning for all learners, including English learner (EL) students, than what was expected in previous standards. A common feature in these new content-area standards, such as the Common Core State Standards in English language arts and mathematics and the…
Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.
1986-09-01
respectively. Here h is a parameter which is usually related to the size of the grid associated with the finite element partitioning of Q. Then one... grid and of not at least performing serious mesh refinement studies. It also points out the usefulness of rigorous results concerning the stability...overconstrained the .1% approximate velocity field. However, by employing different grids for the ’z pressure and velocity fields, the linear-constant
Advanced Extremely High Frequency Satellite (AEHF)
2015-12-01
control their tactical and strategic forces at all levels of conflict up to and including general nuclear war, and it supports the attainment of...10195.1 10622.2 Confidence Level Confidence Level of cost estimate for current APB: 50% The ICE) that supports the AEHF SV 1-4, like all life-cycle cost...mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in methods used in building
2015-12-01
system level testing. The WGS-6 financial data is not reported in this SAR because funding is provided by Australia in exchange for access to a...A 3831.3 3539.7 3539.7 3801.9 Confidence Level Confidence Level of cost estimate for current APB: 50% The ICE to support WGS Milestone C decision...to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in
2007-02-28
Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies
All biology is computational biology.
Markowetz, Florian
2017-03-01
Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life, it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science.
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view, that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototype show that this model based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
The sympathy of two pendulum clocks: beyond Huygens’ observations
Peña Ramirez, Jonatan; Olvera, Luis Alberto; Nijmeijer, Henk; Alvarez, Joaquin
2016-01-01
This paper introduces a modern version of the classical Huygens’ experiment on synchronization of pendulum clocks. The version presented here consists of two monumental pendulum clocks—ad hoc designed and fabricated—which are coupled through a wooden structure. It is demonstrated that the coupled clocks exhibit ‘sympathetic’ motion, i.e. the pendula of the clocks oscillate in consonance and in the same direction. Interestingly, when the clocks are synchronized, the common oscillation frequency decreases, i.e. the clocks become slow and inaccurate. In order to rigorously explain these findings, a mathematical model for the coupled clocks is obtained by using well-established physical and mechanical laws and likewise, a theoretical analysis is conducted. Ultimately, the sympathy of two monumental pendulum clocks, interacting via a flexible coupling structure, is experimentally, numerically, and analytically demonstrated. PMID:27020903
An analysis of the coexistence of two host species with a shared pathogen.
Chen, Zhi-Min; Price, W G
2008-06-01
Population dynamics of two-host species under direct transmission of an infectious disease or a pathogen is studied based on the Holt-Pickering mathematical model, which accounts for the influence of the pathogen on the population of the two-host species. Through rigorous analysis and a numerical scheme of study, circumstances are specified under which the shared pathogen leads to the coexistence of the two-host species in either a persistent or periodic form. This study shows the importance of intrinsic growth rates or the differences between birth rates and death rates of the two host susceptible in controlling these circumstances. It is also demonstrated that the periodicity may arise when the positive intrinsic growth rates are very small, but the periodicity is very weak which may not be observed in an empirical investigation.
A global solution to the Schrödinger equation: From Henstock to Feynman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu
2015-09-15
One of the key elements of Feynman’s formulation of non-relativistic quantum mechanics is a so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition, rather than a well-defined object. All previous attempts to supply Feynman’s theory with rigorous mathematics underpinning, based on the physical requirements, have not been satisfactory. The difficulty comes from the need to define a measure on the infinite dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a newmore » approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variable: With Applications in Stochastic Calcolus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with non-absolute integrability of the Fresnel integrals, in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman’s intuitive derivations. But in his work, Muldowney gives only local in space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L{sup 2}(ℝ{sup n}). The purpose of this paper is to show that a system of one-dimensional local Muldowney’s solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L{sup 2}(ℝ{sup n})« less
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Furthermore, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
Hayot, C; Sakka, S; Lacouture, P
2013-04-01
Saunders et al. (1953) stated that the introduction of six gait determinants (pelvic rotation, pelvic obliquity, stance knee flexion, foot and ankle mechanisms, and tibiofemoral angle) to a compass gait model (two rigid legs hinged at the hips) provides an accurate simulation of the actual trajectory of the whole body center of mass (CoM). Their respective actions could also explain the shape of the vertical ground reaction force (GRF) pattern. Saunders' approach is considered as a kinematic description of some features of gait and is subject to debate. The purpose of this study is to realize a rigorous mechanical evaluation of the gait determinants theory using an appropriated mathematical model in which specific experimental data of gait trials are introduced. We first simulate a compass-like CoM trajectory using the proposed 3D mathematical model. Then, factorizing the model to introduce successively the kinematic data related to each gait determinant, we assess their respective contribution to both the CoM trajectory and the pattern of vertical GRF at different gait speeds. The results show that the stance knee flexion significatively decreases the estimated position of the CoM during midstance. Stance knee extension and pelvic obliquity contribute to the appearance of the pattern of vertical GRF during stance. The stance ankle dorsiflexion significatively contributes to CoM vertical excursion and the ankle plantarflexion contributes to the vertical GRF during terminal stance. The largest contribution towards the minimization of the CoM vertical amplitude during the complete gait step appears when considering the foot mechanisms and the pelvic obliquity in the proposed model. Copyright © 2012 Elsevier B.V. All rights reserved.
... Coordinating Committees CounterACT Rigor & Transparency Scientific Resources Animal Models Cell/Tissue/DNA Clinical and Translational Resources Gene ... Coordinating Committees CounterACT Rigor & Transparency Scientific Resources Animal Models Cell/Tissue/DNA Clinical and Translational Resources Gene ...
... Coordinating Committees CounterACT Rigor & Transparency Scientific Resources Animal Models Cell/Tissue/DNA Clinical and Translational Resources Gene ... Coordinating Committees CounterACT Rigor & Transparency Scientific Resources Animal Models Cell/Tissue/DNA Clinical and Translational Resources Gene ...
War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?
Rzhetsky, Andrey; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276
NASA Astrophysics Data System (ADS)
Holmes, Mark H.
2006-10-01
To help students grasp the intimate connections that exist between mathematics and its applications in other disciplines a library of interactive learning modules was developed. This library covers the mathematical areas normally studied by undergraduate students and is used in science courses at all levels. Moreover, the library is designed not just to provide critical connections across disciplines but to also provide longitudinal subject reinforcement as students progress in their studies. In the process of developing the modules a complete editing and publishing system was constructed that is optimized for automated maintenance and upgradeability of materials. The result is a single integrated production system for web-based educational materials. Included in this is a rigorous assessment program, involving both internal and external evaluations of each module. As will be seen, the formative evaluation obtained during the development of the library resulted in the modules successfully bridging multiple disciplines and breaking down the disciplinary barriers commonly found in their math and non-math courses.
Survey of computer programs for prediction of crash response and of its experimental validation
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1976-01-01
The author seeks to critically assess the potentialities of the mathematical and hybrid simulators which predict post-impact response of transportation vehicles. A strict rigorous numerical analysis of a complex phenomenon like crash may leave a lot to be desired with regard to the fidelity of mathematical simulation. Hybrid simulations on the other hand which exploit experimentally observed features of deformations appear to hold a lot of promise. MARC, ANSYS, NONSAP, DYCAST, ACTION, WHAM II and KRASH are among some of the simulators examined for their capabilities with regard to prediction of post impact response of vehicles. A review of these simulators reveals that much more by way of an analysis capability may be desirable than what is currently available. NASA's crashworthiness testing program in conjunction with similar programs of various other agencies, besides generating a large data base, will be equally useful in the validation of new mathematical concepts of nonlinear analysis and in the successful extension of other techniques in crashworthiness.
War of ontology worlds: mathematics, computer code, or Esperanto?
Rzhetsky, Andrey; Evans, James A
2011-09-01
The use of structured knowledge representations-ontologies and terminologies-has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.
Adams, Peter; Goos, Merrilyn
2010-01-01
Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate program. Inspired by the National Research Council's BIO2010 report, a new interdisciplinary first-year course (SCIE1000) was created, incorporating mathematics and computer programming in the context of modern science. In this study, the perceptions of biological science students enrolled in SCIE1000 in 2008 and 2009 are measured. Analysis indicates that, as a result of taking SCIE1000, biological science students gained a positive appreciation of the importance of mathematics in their discipline. However, the data revealed that SCIE1000 did not contribute positively to gains in appreciation for computing and only slightly influenced students' motivation to enroll in upper-level quantitative-based courses. Further comparisons between 2008 and 2009 demonstrated the positive effect of using genuine, real-world contexts to enhance student perceptions toward the relevance of mathematics. The results support the recommendation from BIO2010 that mathematics should be introduced to biology students in first-year courses using real-world examples, while challenging the benefits of introducing programming in first-year courses. PMID:20810961
Sala, Giovanni; Gobet, Fernand
2017-12-01
It has been proposed that playing chess enables children to improve their ability in mathematics. These claims have been recently evaluated in a meta-analysis (Sala & Gobet, 2016, Educational Research Review, 18, 46-57), which indicated a significant effect in favor of the groups playing chess. However, the meta-analysis also showed that most of the reviewed studies used a poor experimental design (in particular, they lacked an active control group). We ran two experiments that used a three-group design including both an active and a passive control group, with a focus on mathematical ability. In the first experiment (N = 233), a group of third and fourth graders was taught chess for 25 hours and tested on mathematical problem-solving tasks. Participants also filled in a questionnaire assessing their meta-cognitive ability for mathematics problems. The group playing chess was compared to an active control group (playing checkers) and a passive control group. The three groups showed no statistically significant difference in mathematical problem-solving or metacognitive abilities in the posttest. The second experiment (N = 52) broadly used the same design, but the Oriental game of Go replaced checkers in the active control group. While the chess-treated group and the passive control group slightly outperformed the active control group with mathematical problem solving, the differences were not statistically significant. No differences were found with respect to metacognitive ability. These results suggest that the effects (if any) of chess instruction, when rigorously tested, are modest and that such interventions should not replace the traditional curriculum in mathematics.
Tropical atmospheric circulations with humidity effects.
Hsia, Chun-Hsiung; Lin, Chang-Shou; Ma, Tian; Wang, Shouhong
2015-01-08
The main objective of this article is to study the effect of the moisture on the planetary scale atmospheric circulation over the tropics. The modelling we adopt is the Boussinesq equations coupled with a diffusive equation of humidity, and the humidity-dependent heat source is modelled by a linear approximation of the humidity. The rigorous mathematical analysis is carried out using the dynamic transition theory. In particular, we obtain mixed transitions, also known as random transitions, as described in Ma & Wang (2010 Discrete Contin. Dyn. Syst. 26 , 1399-1417. (doi:10.3934/dcds.2010.26.1399); 2011 Adv. Atmos. Sci. 28 , 612-622. (doi:10.1007/s00376-010-9089-0)). The analysis also indicates the need to include turbulent friction terms in the model to obtain correct convection scales for the large-scale tropical atmospheric circulations, leading in particular to the right critical temperature gradient and the length scale for the Walker circulation. In short, the analysis shows that the effect of moisture lowers the magnitude of the critical thermal Rayleigh number and does not change the essential characteristics of dynamical behaviour of the system.
Hydrodynamics of steady state phloem transport with radial leakage of solute
Cabrita, Paulo; Thorpe, Michael; Huber, Gregor
2013-01-01
Long-distance phloem transport occurs under a pressure gradient generated by the osmotic exchange of water associated with solute exchange in source and sink regions. But these exchanges also occur along the pathway, and yet their physiological role has almost been ignored in mathematical models of phloem transport. Here we present a steady state model for transport phloem which allows solute leakage, based on the Navier-Stokes and convection-diffusion equations which describe fluid motion rigorously. Sieve tube membrane permeability Ps for passive solute exchange (and correspondingly, membrane reflection coefficient) influenced model results strongly, and had to lie in the bottom range of the values reported for plant cells for the results to be realistic. This smaller permeability reflects the efficient specialization of sieve tube elements, minimizing any diffusive solute loss favored by the large concentration difference across the sieve tube membrane. We also found there can be a specific reflection coefficient for which pressure profiles and sap velocities can both be similar to those predicted by the Hagen-Poiseuille equation for a completely impermeable tube. PMID:24409189
a Geometrical Chart of Altered Temporality (and Spatiality)
NASA Astrophysics Data System (ADS)
Saniga, Metod
2005-10-01
The paper presents, to our knowledge, a first fairly comprehensive and mathematically well-underpinned classification of the psychopathology of time (and space). After reviewing the most illustrative first-person accounts of "anomalous/peculiar" experiences of time (and, to a lesser degree, space) we introduce and describe in detail their algebraic geometrical model. The model features six qualitatively different types of the internal structure of time dimension and four types of that of space. As for time, the most pronounced are the ordinary "past-present-future," "present-only" ("eternal/everlasting now") and "no-present" (time "standing still") patterns. Concerning space, the most elementary are the ordinary, i.e., "here-and-there," mode and the "here-only" one ("omnipresence"). We then show what the admissible combinations of temporal and spatial psycho-patterns are and give a rigorous algebraic geometrical classification of them. The predictive power of the model is illustrated by the phenomenon of psychological time-reversal and the experiential difference between time and space. The paper ends with a brief account of some epistemological/ontological questions stemming from the approach.
2010-10-18
August 2010 was building the right game “ – World of Warcraft has 30% women (according to womengamers.com) Conclusion: – We don’t really understand why...Report of the National Academies on Informal Learning • Infancy - late adulthood: Learn about the world & develop important skills for science...Education With Rigor and Vigor – Excitement, interest, and motivation to learn about phenomena in the natural and physical world . – Generate
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1989-01-01
The extension of the known flux-vector and flux-difference splittings to real gases via rigorous mathematical procedures is demonstrated. Formulations of both equilibrium and finite-rate chemistry for real-gas flows are described, with emphasis on derivations of finite-rate chemistry. Split-flux formulas from other authors are examined. A second-order upwind-based TVD scheme is adopted to eliminate oscillations and to obtain a sharp representation of discontinuities.
A rigorous computational approach to linear response
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Galatolo, Stefano; Nisoli, Isaia; Niu, Xiaolong
2018-03-01
We present a general setting in which the formula describing the linear response of the physical measure of a perturbed system can be obtained. In this general setting we obtain an algorithm to rigorously compute the linear response. We apply our results to expanding circle maps. In particular, we present examples where we compute, up to a pre-specified error in the L∞ -norm, the response of expanding circle maps under stochastic and deterministic perturbations. Moreover, we present an example where we compute, up to a pre-specified error in the L 1-norm, the response of the intermittent family at the boundary; i.e. when the unperturbed system is the doubling map. This work was mainly conducted during a visit of SG to Loughborough University. WB and SG would like to thank The Leverhulme Trust for supporting mutual research visits through the Network Grant IN-2014-021. SG thanks the Department of Mathematical Sciences at Loughborough University for hospitality. WB thanks Dipartimento di Matematica, Universita di Pisa. The research of SG and IN is partially supported by EU Marie-Curie IRSES ‘Brazilian-European partnership in Dynamical Systems’ (FP7-PEOPLE-2012-IRSES 318999 BREUDS). IN was partially supported by CNPq and FAPERJ. IN would like to thank the Department of Mathematics at Uppsala University and the support of the KAW grant 2013.0315.
Continuum mechanics and thermodynamics in the Hamilton and the Godunov-type formulations
NASA Astrophysics Data System (ADS)
Peshkov, Ilya; Pavelka, Michal; Romenski, Evgeniy; Grmela, Miroslav
2018-01-01
Continuum mechanics with dislocations, with the Cattaneo-type heat conduction, with mass transfer, and with electromagnetic fields is put into the Hamiltonian form and into the form of the Godunov-type system of the first-order, symmetric hyperbolic partial differential equations (SHTC equations). The compatibility with thermodynamics of the time reversible part of the governing equations is mathematically expressed in the former formulation as degeneracy of the Hamiltonian structure and in the latter formulation as the existence of a companion conservation law. In both formulations the time irreversible part represents gradient dynamics. The Godunov-type formulation brings the mathematical rigor (the local well posedness of the Cauchy initial value problem) and the possibility to discretize while keeping the physical content of the governing equations (the Godunov finite volume discretization).
Observations of fallibility in applications of modern programming methodologies
NASA Technical Reports Server (NTRS)
Gerhart, S. L.; Yelowitz, L.
1976-01-01
Errors, inconsistencies, or confusing points are noted in a variety of published algorithms, many of which are being used as examples in formulating or teaching principles of such modern programming methodologies as formal specification, systematic construction, and correctness proving. Common properties of these points of contention are abstracted. These properties are then used to pinpoint possible causes of the errors and to formulate general guidelines which might help to avoid further errors. The common characteristic of mathematical rigor and reasoning in these examples is noted, leading to some discussion about fallibility in mathematics, and its relationship to fallibility in these programming methodologies. The overriding goal is to cast a more realistic perspective on the methodologies, particularly with respect to older methodologies, such as testing, and to provide constructive recommendations for their improvement.
Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin
2015-02-01
When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
Schlägel, Ulrike E; Lewis, Mark A
2016-12-01
Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when resolution of data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with spatially-explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
Geometry of the perceptual space
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John
1999-09-01
The concept of space and geometry varies across the subjects. Following Poincare, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of theory of Riemannian foliation in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.
Surface conservation laws at microscopically diffuse interfaces.
Chu, Kevin T; Bazant, Martin Z
2007-11-01
In studies of interfaces with dynamic chemical composition, bulk and interfacial quantities are often coupled via surface conservation laws of excess surface quantities. While this approach is easily justified for microscopically sharp interfaces, its applicability in the context of microscopically diffuse interfaces is less theoretically well-established. Furthermore, surface conservation laws (and interfacial models in general) are often derived phenomenologically rather than systematically. In this article, we first provide a mathematically rigorous justification for surface conservation laws at diffuse interfaces based on an asymptotic analysis of transport processes in the boundary layer and derive general formulae for the surface and normal fluxes that appear in surface conservation laws. Next, we use nonequilibrium thermodynamics to formulate surface conservation laws in terms of chemical potentials and provide a method for systematically deriving the structure of the interfacial layer. Finally, we derive surface conservation laws for a few examples from diffusive and electrochemical transport.
NASA Astrophysics Data System (ADS)
Buffoni, Boris; Groves, Mark D.; Wahlén, Erik
2017-12-01
Fully localised solitary waves are travelling-wave solutions of the three- dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number {β} greater than {1/3} ) has recently been given. In this article we present an existence theory for the physically more realistic case {0 < β < 1/3} . A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.
Rigorous approaches to tether dynamics in deployment and retrieval
NASA Technical Reports Server (NTRS)
Antona, Ettore
1987-01-01
Dynamics of tethers in a linearized analysis can be considered as the superposition of propagating waves. This approach permits a new way for the analysis of tether behavior during deployment and retrieval, where a tether is composed by a part at rest and a part subjected to propagation phenomena, with the separating section depending on time. The dependence on time of the separating section requires the analysis of the reflection of the waves travelling toward the part at rest. Such a reflection generates a reflected wave, whose characteristics are determined. The propagation phenomena of major interest in a tether are transverse waves and longitudinal waves, all mathematically modelled by the vibrating chord equations, if the tension is considered constant along the tether. An interesting problem also considered is concerned with the dependence of the tether tension from the longitudinal position, due to microgravity, and the influence of this dependence on the propagation waves.
NASA Astrophysics Data System (ADS)
Buffoni, Boris; Groves, Mark D.; Wahlén, Erik
2018-06-01
Fully localised solitary waves are travelling-wave solutions of the three- dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number {β} greater than {1/3}) has recently been given. In this article we present an existence theory for the physically more realistic case {0 < β < 1/3}. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.
Interpretation of HCMM images: A regional study
NASA Technical Reports Server (NTRS)
1982-01-01
Potential users of HCMM data, especially those with only a cursory background in thermal remote sensing are familiarized with the kinds of information contained in the images that can be extracted with some reliability solely from inspection of such standard products as those generated at NASA/GSFC and now achieved in the National Space Science Data Center. Visual analysis of photoimagery is prone to various misimpressions and outright errors brought on by unawareness of the influence of physical factors as well as by sometimes misleading tonal patterns introduced during photoprocessing. The quantitative approach, which relies on computer processing of digital HCMM data, field measurements, and integration of rigorous mathematical models, can usually be used to identify, compensate for, or correct the contributions from at least some of the natural factors and those associated with photoprocessing. Color composite, day-IR, night-IR and visible images of California and Nevada are examined.
NASA Astrophysics Data System (ADS)
Bordag, M.; Geyer, B.; Klimchitskaya, G. L.; Mostepanenko, V. M.
2010-01-01
We show that in the presence of free charge carriers the definition of the frequency-dependent dielectric permittivity requires additional regularization. As an example, the dielectric permittivity of the Drude model is considered and its time-dependent counterpart is derived and analyzed. The respective electric displacement cannot be represented in terms of the standard Fourier integral. The regularization procedure allowing the circumvention of these difficulties is suggested. For the purpose of comparison it is shown that the frequency-dependent dielectric permittivity of insulators satisfies all rigorous mathematical criteria. This permits us to conclude that in the presence of free charge carriers the concept of dielectric permittivity is not as well defined as for insulators and we make a link to widely discussed puzzles in the theory of thermal Casimir force which might be caused by the use of this kind of permittivities.
Systematic design for trait introgression projects.
Cameron, John N; Han, Ye; Wang, Lizhi; Beavis, William D
2017-10-01
Using an Operations Research approach, we demonstrate design of optimal trait introgression projects with respect to competing objectives. We demonstrate an innovative approach for designing Trait Introgression (TI) projects based on optimization principles from Operations Research. If the designs of TI projects are based on clear and measurable objectives, they can be translated into mathematical models with decision variables and constraints that can be translated into Pareto optimality plots associated with any arbitrary selection strategy. The Pareto plots can be used to make rational decisions concerning the trade-offs between maximizing the probability of success while minimizing costs and time. The systematic rigor associated with a cost, time and probability of success (CTP) framework is well suited to designing TI projects that require dynamic decision making. The CTP framework also revealed that previously identified 'best' strategies can be improved to be at least twice as effective without increasing time or expenses.
A white noise approach to the Feynman integrand for electrons in random media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grothaus, M., E-mail: grothaus@mathematik.uni-kl.de; Riemann, F., E-mail: riemann@mathematik.uni-kl.de; Suryawan, H. P., E-mail: suryawan@mathematik.uni-kl.de
2014-01-15
Using the Feynman path integral representation of quantum mechanics it is possible to derive a model of an electron in a random system containing dense and weakly coupled scatterers [see F. Edwards and Y. B. Gulyaev, “The density of states of a highly impure semiconductor,” Proc. Phys. Soc. 83, 495–496 (1964)]. The main goal of this paper is to give a mathematically rigorous realization of the corresponding Feynman integrand in dimension one based on the theory of white noise analysis. We refine and apply a Wick formula for the product of a square-integrable function with Donsker's delta functions and usemore » a method of complex scaling. As an essential part of the proof we also establish the existence of the exponential of the self-intersection local times of a one-dimensional Brownian bridge. As a result we obtain a neat formula for the propagator with identical start and end point. Thus, we obtain a well-defined mathematical object which is used to calculate the density of states [see, e.g., F. Edwards and Y. B. Gulyaev, “The density of states of a highly impure semiconductor,” Proc. Phys. Soc. 83, 495–496 (1964)].« less
Mathematics education practice in Nigeria: Its impact in a post-colonial era
NASA Astrophysics Data System (ADS)
Enime, Noble O. J.
This qualitative research method of study examined the impacts of the Nigerian pre-independence era Mathematics Education Practice on the Post-Colonial era Mathematics Education Practice. The study was designed to gather qualitative information related to Pre-independence and Postcolonial era data related to Mathematics Education Practice in Nigeria (Western, Eastern and the Middle Belt) using interview questions. Data was collected through face to face interviews. Over ten themes emerged from these qualitative interview questions when data was analyzed. Some of the themes emerging from the sub questions were as follows. "Mentally mature to understand the mathematics" and "Not mentally mature to understand the mathematics", "mentally mature to understand the mathematics, with the help of others" and "Not Sure". Others were "Contented with Age of Enrollment" and "Not contented with Age of Enrollment". From the questions of type of school attended and liking of mathematics the following themes emerged: "Attended UPE (Universal Primary Education) and understood Mathematics", and "Attended Standard Education System and did not like Mathematics". Connections between the liking of mathematics and the respondents' eventual careers were seen through the following themes that emerged. "Biological Sciences based career and enjoyed High School Mathematics Experience", "Economics and Business Education based career and enjoyed High School Mathematics Experience" and five more themes. The themes, "Very helpful" and "Unhelpful" emerged from the question concerning parents and students' homework. Some of the themes emerging from the interviews were as follows: "Awesome because of method of Instruction of Mathematics", "Awesome because Mathematics was easy", "Awesome because I had a Good Teacher or Teachers" and four other themes, "Like and dislike of Mathematics", "Heavy work load", "Subject matter content" and "Rigor of instruction". More emerging themes are presented in this document in Chapter IV. The emerging themes suggested that the influence Nigerian Colonial era Mathematics Education Practice had on the independent Nigerian state is yet to completely diminish. The following are among the conclusions drawn n from the study. Student's enrollment age appeared to generally have an influence over the performance in mathematics at all levels of school. Also, students that had encouraging parents were likely to enjoy learning mathematics, while students that attended mission schools were likely to be successful in mathematics. The students whose parents were educated were likely to be successful in Mathematics.
Burnecki, Krzysztof; Kepten, Eldad; Janczura, Joanna; Bronshtein, Irena; Garini, Yuval; Weron, Aleksander
2012-01-01
We present a systematic statistical analysis of the recently measured individual trajectories of fluorescently labeled telomeres in the nucleus of living human cells. The experiments were performed in the U2OS cancer cell line. We propose an algorithm for identification of the telomere motion. By expanding the previously published data set, we are able to explore the dynamics in six time orders, a task not possible earlier. As a result, we establish a rigorous mathematical characterization of the stochastic process and identify the basic mathematical mechanisms behind the telomere motion. We find that the increments of the motion are stationary, Gaussian, ergodic, and even more chaotic—mixing. Moreover, the obtained memory parameter estimates, as well as the ensemble average mean square displacement reveal subdiffusive behavior at all time spans. All these findings statistically prove a fractional Brownian motion for the telomere trajectories, which is confirmed by a generalized p-variation test. Taking into account the biophysical nature of telomeres as monomers in the chromatin chain, we suggest polymer dynamics as a sufficient framework for their motion with no influence of other models. In addition, these results shed light on other studies of telomere motion and the alternative telomere lengthening mechanism. We hope that identification of these mechanisms will allow the development of a proper physical and biological model for telomere subdynamics. This array of tests can be easily implemented to other data sets to enable quick and accurate analysis of their statistical characteristics. PMID:23199912
NASA Astrophysics Data System (ADS)
Reis, T.; Phillips, T. N.
2008-12-01
In this reply to the comment by Lallemand and Luo, we defend our assertion that the alternative approach for the solution of the dispersion relation for a generalized lattice Boltzmann dispersion equation [T. Reis and T. N. Phillips, Phys. Rev. E 77, 026702 (2008)] is mathematically transparent, elegant, and easily justified. Furthermore, the rigorous perturbation analysis used by Reis and Phillips does not require the reciprocals of the relaxation parameters to be small.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
Endobiogeny: a global approach to systems biology (part 1 of 2).
Lapraz, Jean-Claude; Hedayat, Kamyar M
2013-01-01
Endobiogeny is a global systems approach to human biology that may offer an advancement in clinical medicine based in scientific principles of rigor and experimentation and the humanistic principles of individualization of care and alleviation of suffering with minimization of harm. Endobiogeny is neither a movement away from modern science nor an uncritical embracing of pre-rational methods of inquiry but a synthesis of quantitative and qualitative relationships reflected in a systems-approach to life and based on new mathematical paradigms of pattern recognition.
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
Solving the multi-frequency electromagnetic inverse source problem by the Fourier method
NASA Astrophysics Data System (ADS)
Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi
2018-07-01
This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
Understanding the Lomb–Scargle Periodogram
NASA Astrophysics Data System (ADS)
VanderPlas, Jacob T.
2018-05-01
The Lomb–Scargle periodogram is a well-known algorithm for detecting and characterizing periodic signals in unevenly sampled data. This paper presents a conceptual introduction to the Lomb–Scargle periodogram and important practical considerations for its use. Rather than a rigorous mathematical treatment, the goal of this paper is to build intuition about what assumptions are implicit in the use of the Lomb–Scargle periodogram and related estimators of periodicity, so as to motivate important practical considerations required in its proper application and interpretation.
Selection theory of free dendritic growth in a potential flow.
von Kurnatowski, Martin; Grillenbeck, Thomas; Kassner, Klaus
2013-04-01
The Kruskal-Segur approach to selection theory in diffusion-limited or Laplacian growth is extended via combination with the Zauderer decomposition scheme. This way nonlinear bulk equations become tractable. To demonstrate the method, we apply it to two-dimensional crystal growth in a potential flow. We omit the simplifying approximations used in a preliminary calculation for the same system [Fischaleck, Kassner, Europhys. Lett. 81, 54004 (2008)], thus exhibiting the capability of the method to extend mathematical rigor to more complex problems than hitherto accessible.
Optimal Down Regulation of mRNA Translation
NASA Astrophysics Data System (ADS)
Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2017-01-01
Down regulation of mRNA translation is an important problem in various bio-medical domains ranging from developing effective medicines for tumors and for viral diseases to developing attenuated virus strains that can be used for vaccination. Here, we study the problem of down regulation of mRNA translation using a mathematical model called the ribosome flow model (RFM). In the RFM, the mRNA molecule is modeled as a chain of n sites. The flow of ribosomes between consecutive sites is regulated by n + 1 transition rates. Given a set of feasible transition rates, that models the outcome of all possible mutations, we consider the problem of maximally down regulating protein production by altering the rates within this set of feasible rates. Under certain conditions on the feasible set, we show that an optimal solution can be determined efficiently. We also rigorously analyze two special cases of the down regulation optimization problem. Our results suggest that one must focus on the position along the mRNA molecule where the transition rate has the strongest effect on the protein production rate. However, this rate is not necessarily the slowest transition rate along the mRNA molecule. We discuss some of the biological implications of these results.
Statistical shear lag model - unraveling the size effect in hierarchical composites.
Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D
2015-05-01
Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogenous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Santos, Jander P.; Sá Barreto, F. C.
2016-01-01
Spin correlation identities for the Blume-Emery-Griffiths model on Kagomé lattice are derived and combined with rigorous correlation inequalities lead to upper bounds on the critical temperature. From the spin correlation identities the mean field approximation and the effective field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve over those effective-field type theories results.
Developing a Student Conception of Academic Rigor
ERIC Educational Resources Information Center
Draeger, John; del Prado Hill, Pixita; Mahler, Ronnie
2015-01-01
In this article we describe models of academic rigor from the student point of view. Drawing on a campus-wide survey, focus groups, and interviews with students, we found that students explained academic rigor in terms of workload, grading standards, level of difficulty, level of interest, and perceived relevance to future goals. These findings…
Intercellular Genomics of Subsurface Microbial Colonies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortoleva, Peter; Tuncay, Kagan; Gannon, Dennis
2007-02-14
This report summarizes progress in the second year of this project. The objective is to develop methods and software to predict the spatial configuration, properties and temporal evolution of microbial colonies in the subsurface. To accomplish this, we integrate models of intracellular processes, cell-host medium exchange and reaction-transport dynamics on the colony scale. At the conclusion of the project, we aim to have the foundations of a predictive mathematical model and software that captures the three scales of these systems – the intracellular, pore, and colony wide spatial scales. In the second year of the project, we refined our transcriptionalmore » regulatory network discovery (TRND) approach that utilizes gene expression data along with phylogenic similarity and gene ontology analyses and applied it successfully to E.coli, human B cells, and Geobacter sulfurreducens. We have developed a new Web interface, GeoGen, which is tailored to the reconstruction of microbial TRNs and solely focuses on Geobacter as one of DOE’s high priority microbes. Our developments are designed such that the frameworks for the TRND and GeoGen can readily be used for other microbes of interest to the DOE. In the context of modeling a single bacterium, we are actively pursuing both steady-state and kinetic approaches. The steady-state approach is based on a flux balance that uses maximizing biomass growth rate as its objective, subjected to various biochemical constraints, for the optimal values of reaction rates and uptake/release of metabolites. For the kinetic approach, we use Karyote, a rigorous cell model developed by us for an earlier DOE grant and the DARPA BioSPICE Project. We are also investigating the interplay between bacterial colonies and environment at both pore and macroscopic scales. The pore scale models use detailed representations for realistic porous media accounting for the distribution of grain size whereas the macroscopic models employ the Darcy-type flow equations and up-scaled advective-diffusive transport equations for chemical species. We are rigorously testing the relationship between these two scales by evaluating macroscopic parameters using the volume averaging methodology applied to pore scale model results.« less
The Dielectric Permittivity of Crystals in the Reduced Hartree-Fock Approximation
NASA Astrophysics Data System (ADS)
Cancès, Éric; Lewin, Mathieu
2010-07-01
In a recent article (Cancès et al. in Commun Math Phys 281:129-177, 2008), we have rigorously derived, by means of bulk limit arguments, a new variational model to describe the electronic ground state of insulating or semiconducting crystals in the presence of local defects. In this so-called reduced Hartree-Fock model, the ground state electronic density matrix is decomposed as {γ = γ^0_per + Q_{ν,\\varepsilon_F}}, where {γ^0_per} is the ground state density matrix of the host crystal and {Q_{ν,\\varepsilon_F}} the modification of the electronic density matrix generated by a modification ν of the nuclear charge of the host crystal, the Fermi level ɛ F being kept fixed. The purpose of the present article is twofold. First, we study in more detail the mathematical properties of the density matrix {Q_{ν,\\varepsilon_F}} (which is known to be a self-adjoint Hilbert-Schmidt operator on {L^2(mathbb{R}^3)}). We show in particular that if {int_{mathbb{R}^3} ν neq 0, Q_{ν,\\varepsilon_F}} is not trace-class. Moreover, the associated density of charge is not in {L^1(mathbb{R}^3)} if the crystal exhibits anisotropic dielectric properties. These results are obtained by analyzing, for a small defect ν, the linear and nonlinear terms of the resolvent expansion of {Q_{ν,\\varepsilon_F}}. Second, we show that, after an appropriate rescaling, the potential generated by the microscopic total charge (nuclear plus electronic contributions) of the crystal in the presence of the defect converges to a homogenized electrostatic potential solution to a Poisson equation involving the macroscopic dielectric permittivity of the crystal. This provides an alternative (and rigorous) derivation of the Adler-Wiser formula.
Ologs: a categorical framework for knowledge representation.
Spivak, David I; Kent, Robert E
2012-01-01
In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.
Ologs: A Categorical Framework for Knowledge Representation
Spivak, David I.; Kent, Robert E.
2012-01-01
In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research. PMID:22303434
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for a popular brain imaging technique HARDI based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotical statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3], to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and it delivers the same information as probabilistic tractography methods. Our method is computationally cheap and it provides well-founded mathematical and statistical framework where diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way.
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data
Carmichael, Owen; Sakhanenko, Lyudmila
2015-01-01
We develop statistical methodology for a popular brain imaging technique HARDI based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotical statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3], to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and it delivers the same information as probabilistic tractography methods. Our method is computationally cheap and it provides well-founded mathematical and statistical framework where diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way. PMID:25937674
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Neubeck, Claere; Shankaran, Harish; Geniza, Matthew
2013-08-08
The effects of low dose high linear energy transfer (LET) radiation on human health are of concern for both space and clinical exposures. As epidemiological data for such radiation exposures are scarce for making relevant predictions, we need to understand the mechanism of response especially in normal tissues. Our objective here is to understand the effects of heavy ion radiation on tissue homeostasis in a realistic model system. Towards this end, we exposed an in vitro three dimensional skin equivalent to low fluences of Neon (Ne) ions (300 MeV/u), and determined the differentiation profile as a function of time followingmore » exposure using immunohistochemistry. We found that Ne ion exposures resulted in transient increases in the tissue regions expressing the differentiation markers keratin 10, and filaggrin, and more subtle time-dependent effects on the number of basal cells in the epidermis. We analyzed the data using a mathematical model of the skin equivalent, to quantify the effect of radiation on cell proliferation and differentiation. The agent-based mathematical model for the epidermal layer treats the epidermis as a collection of heterogeneous cell types with different proliferation/differentiation properties. We obtained model parameters from the literature where available, and calibrated the unknown parameters to match the observed properties in unirradiated skin. We then used the model to rigorously examine alternate hypotheses regarding the effects of high LET radiation on the tissue. Our analysis indicates that Ne ion exposures induce rapid, but transient, changes in cell division, differentiation and proliferation. We have validated the modeling results by histology and quantitative reverse transcription polymerase chain reaction (qRT-PCR). The integrated approach presented here can be used as a general framework to understand the responses of multicellular systems, and can be adapted to other epithelial tissues.« less
Trescher, Saskia; Münchmeyer, Jannes; Leser, Ulf
2017-03-27
Gene regulation is one of the most important cellular processes, indispensable for the adaptability of organisms and closely interlinked with several classes of pathogenesis and their progression. Elucidation of regulatory mechanisms can be approached by a multitude of experimental methods, yet integration of the resulting heterogeneous, large, and noisy data sets into comprehensive and tissue or disease-specific cellular models requires rigorous computational methods. Recently, several algorithms have been proposed which model genome-wide gene regulation as sets of (linear) equations over the activity and relationships of transcription factors, genes and other factors. Subsequent optimization finds those parameters that minimize the divergence of predicted and measured expression intensities. In various settings, these methods produced promising results in terms of estimating transcription factor activity and identifying key biomarkers for specific phenotypes. However, despite their common root in mathematical optimization, they vastly differ in the types of experimental data being integrated, the background knowledge necessary for their application, the granularity of their regulatory model, the concrete paradigm used for solving the optimization problem and the data sets used for evaluation. Here, we review five recent methods of this class in detail and compare them with respect to several key properties. Furthermore, we quantitatively compare the results of four of the presented methods based on publicly available data sets. The results show that all methods seem to find biologically relevant information. However, we also observe that the mutual result overlaps are very low, which contradicts biological intuition. Our aim is to raise further awareness of the power of these methods, yet also to identify common shortcomings and necessary extensions enabling focused research on the critical points.
Constructing Rigorous and Broad Biosurveillance Networks for Detecting Emerging Zoonotic Outbreaks
Brown, Mac; Moore, Leslie; McMahon, Benjamin; Powell, Dennis; LaBute, Montiago; Hyman, James M.; Rivas, Ariel; Jankowski, Mark; Berendzen, Joel; Loeppky, Jason; Manore, Carrie; Fair, Jeanne
2015-01-01
Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of a pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios, and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk for infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then complied into a potential risk map of infection. Finally, we validated the results with a known outbreak, using spatial analysis of all the simulation runs to show the progression matched closely with the observed location of the farms infected in the 2006-2007 epidemic. PMID:25946164
An advanced model of heat and mass transfer in the protective clothing - verification
NASA Astrophysics Data System (ADS)
Łapka, P.; Furmański, P.
2016-09-01
The paper presents an advanced mathematical and numerical models of heat and mass transfer in the multi-layers protective clothing and in elements of the experimental stand subjected to either high surroundings temperature or high radiative heat flux emitted by hot objects. The model included conductive-radiative heat transfer in the hygroscopic porous fabrics and air gaps as well as conductive heat transfer in components of the stand. Additionally, water vapour diffusion in the pores and air spaces as well as phase transition of the bound water in the fabric fibres (sorption and desorption) were accounted for. The thermal radiation was treated in the rigorous way e.g.: semi-transparent absorbing, emitting and scattering fabrics were assumed a non-grey and all optical phenomena at internal or external walls were modelled. The air was assumed transparent. Complex energy and mass balance as well as optical conditions at internal or external interfaces were formulated in order to find exact values of temperatures, vapour densities and radiation intensities at these interfaces. The obtained highly non-linear coupled system of discrete equation was solve by the in-house iterative algorithm which was based on the Finite Volume Method. The model was then successfully partially verified against the results obtained from commercial software for simplified cases.
Zielinski, Michal W; McGann, Locksley E; Nychka, John A; Elliott, Janet A W
2017-11-22
The prediction of nonideal chemical potentials in aqueous solutions is important in fields such as cryobiology, where models of water and solute transport-that is, osmotic transport-are used to help develop cryopreservation protocols and where solutions contain many varied solutes and are generally highly concentrated and thus thermodynamically nonideal. In this work, we further the development of a nonideal multisolute solution theory that has found application across a broad range of aqueous systems. This theory is based on the osmotic virial equation and does not depend on multisolute data. Specifically, we derive herein a novel solute chemical potential equation that is thermodynamically consistent with the existing model, and we establish the validity of a grouped solute model for the intracellular space. With this updated solution theory, it is now possible to model cellular osmotic behavior in nonideal solutions containing multiple permeating solutes, such as those commonly encountered by cells during cryopreservation. In addition, because we show here that for the osmotic virial equation the grouped solute approach is mathematically equivalent to treating each solute separately, multisolute solutions in other applications with fixed solute mass ratios can now be treated rigorously with such a model, even when all of the solutes cannot be enumerated.
NASA Astrophysics Data System (ADS)
Cocco, Alex P.; Nakajo, Arata; Chiu, Wilson K. S.
2017-12-01
We present a fully analytical, heuristic model - the "Analytical Transport Network Model" - for steady-state, diffusive, potential flow through a 3-D network. Employing a combination of graph theory, linear algebra, and geometry, the model explicitly relates a microstructural network's topology and the morphology of its channels to an effective material transport coefficient (a general term meant to encompass, e.g., conductivity or diffusion coefficient). The model's transport coefficient predictions agree well with those from electrochemical fin (ECF) theory and finite element analysis (FEA), but are computed 0.5-1.5 and 5-6 orders of magnitude faster, respectively. In addition, the theory explicitly relates a number of morphological and topological parameters directly to the transport coefficient, whereby the distributions that characterize the structure are readily available for further analysis. Furthermore, ATN's explicit development provides insight into the nature of the tortuosity factor and offers the potential to apply theory from network science and to consider the optimization of a network's effective resistance in a mathematically rigorous manner. The ATN model's speed and relative ease-of-use offer the potential to aid in accelerating the design (with respect to transport), and thus reducing the cost, of energy materials.
Formal and physical equivalence in two cases in contemporary quantum physics
NASA Astrophysics Data System (ADS)
Fraser, Doreen
2017-08-01
The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation-a mathematical transformation that takes the time variable t to negative imaginary time-it-was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible. That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.
Finding the way with a noisy brain.
Cheung, Allen; Vickerstaff, Robert
2010-11-11
Successful navigation is fundamental to the survival of nearly every animal on earth, and achieved by nervous systems of vastly different sizes and characteristics. Yet surprisingly little is known of the detailed neural circuitry from any species which can accurately represent space for navigation. Path integration is one of the oldest and most ubiquitous navigation strategies in the animal kingdom. Despite a plethora of computational models, from equational to neural network form, there is currently no consensus, even in principle, of how this important phenomenon occurs neurally. Recently, all path integration models were examined according to a novel, unifying classification system. Here we combine this theoretical framework with recent insights from directed walk theory, and develop an intuitive yet mathematically rigorous proof that only one class of neural representation of space can tolerate noise during path integration. This result suggests many existing models of path integration are not biologically plausible due to their intolerance to noise. This surprising result imposes significant computational limitations on the neurobiological spatial representation of all successfully navigating animals, irrespective of species. Indeed, noise-tolerance may be an important functional constraint on the evolution of neuroarchitectural plans in the animal kingdom.
ADM1-based methodology for the characterisation of the influent sludge in anaerobic reactors.
Huete, E; de Gracia, M; Ayesa, E; Garcia-Heras, J L
2006-01-01
This paper presents a systematic methodology to characterise the influent sludge in terms of the ADM1 components from the experimental measurements traditionally used in wastewater engineering. For this purpose, a complete characterisation of the model components in their elemental mass fractions and charge has been used, making a rigorous mass balance for all the process transformations and enabling the future connection with other unit-process models. It also makes possible the application of mathematical algorithms for the optimal characterisation of several components poorly defined in the ADM1 report. Additionally, decay and disintegration have been necessarily uncoupled so that the decay proceeds directly to hydrolysis instead of producing intermediate composites. The proposed methodology has been applied to the particular experimental work of a pilot-scale CSTR treating real sewage sludge, a mixture of primary and secondary sludge. The results obtained have shown a good characterisation of the influent reflected in good model predictions. However, its limitations for an appropriate prediction of alkalinity and carbon percentages in biogas suggest the convenience of including the elemental characterisation of the process in terms of carbon in the analytical program.
Small-Area Estimation of Spatial Access to Care and Its Implications for Policy.
Gentili, Monica; Isett, Kim; Serban, Nicoleta; Swann, Julie
2015-10-01
Local or small-area estimates to capture emerging trends across large geographic regions are critical in identifying and addressing community-level health interventions. However, they are often unavailable due to lack of analytic capabilities in compiling and integrating extensive datasets and complementing them with the knowledge about variations in state-level health policies. This study introduces a modeling approach for small-area estimation of spatial access to pediatric primary care that is data "rich" and mathematically rigorous, integrating data and health policy in a systematic way. We illustrate the sensitivity of the model to policy decision making across large geographic regions by performing a systematic comparison of the estimates at the census tract and county levels for Georgia and California. Our results show the proposed approach is able to overcome limitations of other existing models by capturing patient and provider preferences and by incorporating possible changes in health policies. The primary finding is systematic underestimation of spatial access, and inaccurate estimates of disparities across population and across geography at the county level with respect to those at the census tract level with implications on where to focus and which type of interventions to consider.
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.
Using GIS to generate spatially balanced random survey designs for natural resource applications.
Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B
2007-07-01
Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demand survey designs that achieve statistical rigor and are efficient but remain flexible to inevitable logistical or practical constraints during field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.
Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko
2013-11-01
Rigor mortis is an important phenomenon to estimate the postmortem interval in forensic medicine. Rigor mortis is affected by temperature. We measured stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0°C). At 37, 25 and 10°C, the progression of stiffness was slower in cooler conditions. At 5 and 0°C, the muscle stiffness increased immediately after the muscles were soaked in cooled liquid paraffin and then muscles gradually became rigid without going through a relaxed state. This phenomenon suggests that it is important to be careful when estimating the postmortem interval in cold seasons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Spline-Based Smoothing of Airfoil Curvatures
NASA Technical Reports Server (NTRS)
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a splinebased method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubicspline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation- from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
Curved fronts in the Belousov-Zhabotinskii reaction-diffusion systems in R2
NASA Astrophysics Data System (ADS)
Niu, Hong-Tao; Wang, Zhi-Cheng; Bu, Zhen-Hui
2018-05-01
In this paper we consider a diffusion system with the Belousov-Zhabotinskii (BZ for short) chemical reaction. Following Brazhnik and Tyson [4] and Pérez-Muñuzuri et al. [45], who predicted V-shaped fronts theoretically and discovered V-shaped fronts by experiments respectively, we give a rigorous mathematical proof of their results. We establish the existence of V-shaped traveling fronts in R2 by constructing a proper supersolution and a subsolution. Furthermore, we establish the stability of the V-shaped front in R2.
User's manual for the Macintosh version of PASCO
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Davis, Randall C.
1991-01-01
A user's manual for Macintosh PASCO is presented. Macintosh PASCO is an Apple Macintosh version of PASCO, an existing computer code for structural analysis and optimization of longitudinally stiffened composite panels. PASCO combines a rigorous buckling analysis program with a nonlinear mathematical optimization routine to minimize panel mass. Macintosh PASCO accepts the same input as mainframe versions of PASCO. As output, Macintosh PASCO produces a text file and mode shape plots in the form of Apple Macintosh PICT files. Only the user interface for Macintosh is discussed here.
ERIC Educational Resources Information Center
Council of Chief State School Officers, 2012
2012-01-01
In the advent of the development and mass adoption of the common core state standards for English language arts and mathematics, state and local agencies have now expressed a need to the Council of Chief State School Officers (CCSSO or the Council) for assistance as they upgrade existing social studies standards to meet the practical goal of…
Oscillations in a simple climate-vegetation model
NASA Astrophysics Data System (ADS)
Rombouts, J.; Ghil, M.
2015-05-01
We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.
Oscillations in a simple climate-vegetation model
NASA Astrophysics Data System (ADS)
Rombouts, J.; Ghil, M.
2015-02-01
We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various time scales is discussed.
Burnecki, Krzysztof; Kepten, Eldad; Janczura, Joanna; Bronshtein, Irena; Garini, Yuval; Weron, Aleksander
2012-11-07
We present a systematic statistical analysis of the recently measured individual trajectories of fluorescently labeled telomeres in the nucleus of living human cells. The experiments were performed in the U2OS cancer cell line. We propose an algorithm for identification of the telomere motion. By expanding the previously published data set, we are able to explore the dynamics in six time orders, a task not possible earlier. As a result, we establish a rigorous mathematical characterization of the stochastic process and identify the basic mathematical mechanisms behind the telomere motion. We find that the increments of the motion are stationary, Gaussian, ergodic, and even more chaotic--mixing. Moreover, the obtained memory parameter estimates, as well as the ensemble average mean square displacement reveal subdiffusive behavior at all time spans. All these findings statistically prove a fractional Brownian motion for the telomere trajectories, which is confirmed by a generalized p-variation test. Taking into account the biophysical nature of telomeres as monomers in the chromatin chain, we suggest polymer dynamics as a sufficient framework for their motion with no influence of other models. In addition, these results shed light on other studies of telomere motion and the alternative telomere lengthening mechanism. We hope that identification of these mechanisms will allow the development of a proper physical and biological model for telomere subdynamics. This array of tests can be easily implemented to other data sets to enable quick and accurate analysis of their statistical characteristics. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Zule, William A; Cross, Harry E; Stover, John; Pretorius, Carel
2013-01-01
Circumstantial evidence from laboratory studies, mathematical models, ecological studies and bio behavioural surveys, suggests that injection-related HIV epidemics may be averted or reversed if people who inject drugs (PWID) switch from using high dead-space to using low dead-space syringes. In laboratory experiments that simulated the injection process and rinsing with water, low dead space syringes retained 1000 times less blood than high dead space syringes. In mathematical models, switching PWID from high dead space to low dead space syringes prevents or reverses injection-related HIV epidemics. No one knows if such an intervention is feasible or what effect it would have on HIV transmission among PWID. Feasibility studies and randomized controlled trials (RCTs) will be needed to answer these questions definitively, but these studies will be very expensive and take years to complete. Rather than waiting for them to be completed, we argue for an approach similar to that used with needle and syringe programs (NSP), which were promoted and implemented before being tested more rigorously. Before implementation, rapid assessments that involve PWID will need to be conducted to ensure buy-in from PWID and other local stakeholders. This commentary summarizes the existing evidence regarding the protective effects of low dead space syringes and estimates potential impacts on HIV transmission; it describes potential barriers to transitioning PWID from high dead space to low dead space needles and syringes; and it presents strategies for overcoming these barriers. Copyright © 2012 Elsevier B.V. All rights reserved.
Computational fluid dynamics: Transition to design applications
NASA Technical Reports Server (NTRS)
Bradley, R. G.; Bhateley, I. C.; Howell, G. A.
1987-01-01
The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilites, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.
The Torsion of Members Having Sections Common in Aircraft Construction
NASA Technical Reports Server (NTRS)
Trayer, George W; March, H W
1930-01-01
Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine by experiment and theoretical investigation how accurate the more common of these formulas are and on what assumptions they are founded and, if none of the proposed methods proved to be reasonable accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods with formulas both rigorous and approximate.
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridging between rigorous mathematical results on the existence of generators to its computational implementation is presented. Our detection algorithm shows to be effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
Organism-level models: When mechanisms and statistics fail us
NASA Astrophysics Data System (ADS)
Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.
2014-03-01
Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that have the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer were provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to come up with a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.
Peer Assessment with Online Tools to Improve Student Modeling
ERIC Educational Resources Information Center
Atkins, Leslie J.
2012-01-01
Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to…
Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho
Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin
2010-01-01
Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...
Agent-Centric Approach for Cybersecurity Decision-Support with Partial Observability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Chatterjee, Samrat; Paulson, Patrick R.
Generating automated cyber resilience policies for real-world settings is a challenging research problem that must account for uncertainties in system state over time and dynamics between attackers and defenders. In addition to understanding attacker and defender motives and tools, and identifying “relevant” system and attack data, it is also critical to develop rigorous mathematical formulations representing the defender’s decision-support problem under uncertainty. Game-theoretic approaches involving cyber resource allocation optimization with Markov decision processes (MDP) have been previously proposed in the literature. Moreover, advancements in reinforcement learning approaches have motivated the development of partially observable stochastic games (POSGs) in various multi-agentmore » problem domains with partial information. Recent advances in cyber-system state space modeling have also generated interest in potential applicability of POSGs for cybersecurity. However, as is the case in strategic card games such as poker, research challenges using game-theoretic approaches for practical cyber defense applications include: 1) solving for equilibrium and designing efficient algorithms for large-scale, general problems; 2) establishing mathematical guarantees that equilibrium exists; 3) handling possible existence of multiple equilibria; and 4) exploitation of opponent weaknesses. Inspired by advances in solving strategic card games while acknowledging practical challenges associated with the use of game-theoretic approaches in cyber settings, this paper proposes an agent-centric approach for cybersecurity decision-support with partial system state observability.« less
Flipping the Electromagnetic Theory classroom
NASA Astrophysics Data System (ADS)
Berger, Andrew J.
2017-08-01
Electromagnetic Theory is a required junior-year course for Optics majors at the University of Rochester. This foundational course gives students their first rigorous exposure to electromagnetic vector fields, dipole radiation patterns, Fresnel reflection/transmission coefficients, waveguided modes, Jones vectors, waveplates, birefringence, and the Lorentz model of refractive index. To increase the percentage of class time devoted to student-centered conceptual reasoning and instructor feedback, this course was recently "flipped". Nearly all of the mathematically-intensive derivations were converted to narrated screencasts ("Khan Academy" style) and made available to students through the course's learning management system. On average, the students were assigned two 10-15 minute videos to watch in advance of each lecture. An electronic survey after each tutorial encouraged reflection and counted towards the student's participation grade. Over the past three years, students have consistently rated the videos as being highly valuable. This presentation will discuss the technical aspects of creating tutorial videos and the educational tradeoffs of flipping a mathematically-intensive upper-level course. The most important advantage is the instructor's increased ability to identify and respond to student confusion, via activities that would consume too much time in a lecture-centered course. Several examples of such activities will be given. Two pitfalls to avoid are the temptation for the instructor not to update the videos from year to year and the tendency of students not to take lecture notes while watching the videos.
Mathematical analysis of the multiband BCS gap equations in superconductivity
NASA Astrophysics Data System (ADS)
Yang, Yisong
2005-01-01
In this paper, we present a mathematical analysis for the phonon-dominated multiband isotropic and anisotropic BCS gap equations at any finite temperature T. We establish the existence of a critical temperature T so that, when T
A Mathematical Framework for the Analysis of Cyber-Resilient Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M; Ferragut, Erik M; Laska, Jason A
2013-01-01
The increasingly recognized vulnerability of industrial control systems to cyber-attacks has inspired a considerable amount of research into techniques for cyber-resilient control systems. The majority of this effort involves the application of well known information security (IT) techniques to control system networks. While these efforts are important to protect the control systems that operate critical infrastructure, they are never perfectly effective. Little research has focused on the design of closed-loop dynamics that are resilient to cyber-attack. The majority of control system protection measures are concerned with how to prevent unauthorized access and protect data integrity. We believe that the abilitymore » to analyze how an attacker can effect the closed loop dynamics of a control system configuration once they have access is just as important to the overall security of a control system. To begin to analyze this problem, consistent mathematical definitions of concepts within resilient control need to be established so that a mathematical analysis of the vulnerabilities and resiliencies of a particular control system design methodology and configuration can be made. In this paper, we propose rigorous definitions for state awareness, operational normalcy, and resiliency as they relate to control systems. We will also discuss some mathematical consequences that arise from the proposed definitions. The goal is to begin to develop a mathematical framework and testable conditions for resiliency that can be used to build a sound theoretical foundation for resilient control research.« less
Nonperturbative Time Dependent Solution of a Simple Ionization Model
NASA Astrophysics Data System (ADS)
Costin, Ovidiu; Costin, Rodica D.; Lebowitz, Joel L.
2018-02-01
We present a non-perturbative solution of the Schrödinger equation {iψ_t(t,x)=-ψ_{xx}(t,x)-2(1 +α sinω t) δ(x)ψ(t,x)} , written in units in which \\hbar=2m=1, describing the ionization of a model atom by a parametric oscillating potential. This model has been studied extensively by many authors, including us. It has surprisingly many features in common with those observed in the ionization of real atoms and emission by solids, subjected to microwave or laser radiation. Here we use new mathematical methods to go beyond previous investigations and to provide a complete and rigorous analysis of this system. We obtain the Borel-resummed transseries (multi-instanton expansion) valid for all values of α, ω, t for the wave function, ionization probability, and energy distribution of the emitted electrons, the latter not studied previously for this model. We show that for large t and small α the energy distribution has sharp peaks at energies which are multiples of ω, corresponding to photon capture. We obtain small α expansions that converge for all t, unlike those of standard perturbation theory. We expect that our analysis will serve as a basis for treating more realistic systems revealing a form of universality in different emission processes.
Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael
2003-01-01
We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The quick advance in remote sensing technologies established the potential to gather accurate and reliable information about the Earth surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between the image coordinates and ground coordinates. These equations require the knowledge of the exterior and interior orientation parameters of the image that might not be available. On the other hand, the parallel projection transformation could be used to represent the mathematical relationship between the image-space and objectspace coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented and the corresponding line-based forms are developed. Results showed that the RMS computed using the point- or line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results showed high correlation between the differences in the ground elevation and the RMS.
Modification of Grange-Kiefer Approach for Determination of Hardenability in Eutectoid Steel
NASA Astrophysics Data System (ADS)
Sushanthi, Neethi; Maity, Joydeep
2014-12-01
In this research work, an independent mathematical modeling approach has been adopted for determination of the hardenability of steels. In this model, at first, cooling curves were generated by solving transient heat transfer equation through discretization with pure explicit finite difference scheme coupled with MATLAB-based programming considering variable thermo-physical properties of 1080 steel. Thereafter, a new fundamental approach is proposed for obtaining CCT noses as a function of volume fraction transformed through modification of Grange-Kiefer approach. The cooling curves were solved against 50 pct transformation nose of CCT diagram in order to predict hardening behavior of 1080 steel in terms of hardenability parameters (Grossmann critical diameter, D C; and ideal critical diameter, D I) and the variation of the unhardened core diameter ( D u) to diameter of steel bar ( D) ratio with diameter of the steel bar ( D). The experiments were also performed to ascertain actual D C value of 1080 steel for still water quenching. The D C value obtained by the developed model was found to match the experimental D C value with only 3 pct deviation. Therefore, the model developed in the present work can be used for direct determination of D I, D C and D u without resorting to any rigorous experimentation.
Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.
Rogers, Emily; Murrugarra, David; Heitsch, Christine
2017-07-25
Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.
Double Dutch: A Tool for Designing Combinatorial Libraries of Biological Systems.
Roehner, Nicholas; Young, Eric M; Voigt, Christopher A; Gordon, D Benjamin; Densmore, Douglas
2016-06-17
Recently, semirational approaches that rely on combinatorial assembly of characterized DNA components have been used to engineer biosynthetic pathways. In practice, however, it is not practical to assemble and test millions of pathway variants in order to elucidate how different DNA components affect the behavior of a pathway. To address this challenge, we apply a rigorous mathematical approach known as design of experiments (DOE) that can be used to construct empirical models of system behavior without testing all variants. To support this approach, we have developed a tool named Double Dutch, which uses a formal grammar and heuristic algorithms to automate the process of DOE library design. Compared to designing by hand, Double Dutch enables users to more efficiently and scalably design libraries of pathway variants that can be used in a DOE framework and uniquely provides a means to flexibly balance design considerations of statistical analysis, construction cost, and risk of homologous recombination, thereby demonstrating the utility of automating decision making when faced with complex design trade-offs.
Quasistatic elastoplasticity via Peridynamics: existence and localization
NASA Astrophysics Data System (ADS)
Kružík, Martin; Mora-Corral, Carlos; Stefanelli, Ulisse
2018-04-01
Peridynamics is a nonlocal continuum mechanical theory based on minimal regularity on the deformations. Its key trait is that of replacing local constitutive relations featuring spacial differential operators with integrals over differences of displacement fields over a suitable positive interaction range. The advantage of such perspective is that of directly including nonregular situations, in which discontinuities in the displacement field may occur. In the linearized elastic setting, the mechanical foundation of the theory and its mathematical amenability have been thoroughly analyzed in the last years. We present here the extension of Peridynamics to linearized elastoplasticity. This calls for considering the time evolution of elastic and plastic variables, as the effect of a combination of elastic energy storage and plastic energy dissipation mechanisms. The quasistatic evolution problem is variationally reformulated and solved by time discretization. In addition, by a rigorous evolutive Γ -convergence argument we prove that the nonlocal peridynamic model converges to classic local elastoplasticity as the interaction range goes to zero.
Simulation of Plasma Jet Merger and Liner Formation within the PLX- α Project
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Chen, Hsin-Chiang; Shih, Wen; Hsu, Scott
2015-11-01
Detailed numerical studies of the propagation and merger of high Mach number argon plasma jets and the formation of plasma liners have been performed using the newly developed method of Lagrangian particles (LP). The LP method significantly improves accuracy and mathematical rigor of common particle-based numerical methods such as smooth particle hydrodynamics while preserving their main advantages compared to grid-based methods. A brief overview of the LP method will be presented. The Lagrangian particle code implements main relevant physics models such as an equation of state for argon undergoing atomic physics transformation, radiation losses in thin optical limit, and heat conduction. Simulations of the merger of two plasma jets are compared with experimental data from past PLX experiments. Simulations quantify the effect of oblique shock waves, ionization, and radiation processes on the jet merger process. Results of preliminary simulations of future PLX- alpha experiments involving the ~ π / 2 -solid-angle plasma-liner configuration with 9 guns will also be presented. Partially supported by ARPA-E's ALPHA program.
Finite machines, mental procedures, and modern physics.
Lupacchini, Rossella
2007-01-01
A Turing machine provides a mathematical definition of the natural process of calculating. It rests on trust that a procedure of reason can be reproduced mechanically. Turing's analysis of the concept of mechanical procedure in terms of a finite machine convinced Gödel of the validity of the Church thesis. And yet, Gödel's later concern was that, insofar as Turing's work shows that "mental procedure cannot go beyond mechanical procedures", it would imply the same kind of limitation on human mind. He therefore deems Turing's argument to be inconclusive. The question then arises as to which extent a computing machine operating by finite means could provide an adequate model of human intelligence. It is argued that a rigorous answer to this question can be given by developing Turing's considerations on the nature of mental processes. For Turing such processes are the consequence of physical processes and he seems to be led to the conclusion that quantum mechanics could help to find a more comprehensive explanation of them.
Large Deviations for Nonlocal Stochastic Neural Fields
2014-01-01
We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
Capacity planning of a wide-sense nonblocking generalized survivable network
NASA Astrophysics Data System (ADS)
Ho, Kwok Shing; Cheung, Kwok Wai
2006-06-01
Generalized survivable networks (GSNs) have two interesting properties that are essential attributes for future backbone networks--full survivability against link failures and support for dynamic traffic demands. GSNs incorporate the nonblocking network concept into the survivable network models. Given a set of nodes and a topology that is at least two-edge connected, a certain minimum capacity is required for each edge to form a GSN. The edge capacity is bounded because each node has an input-output capacity limit that serves as a constraint for any allowable traffic demand matrix. The GSN capacity planning problem is nondeterministic polynomial time (NP) hard. We first give a rigorous mathematical framework; then we offer two different solution approaches. The two-phase approach is fast, but the joint optimization approach yields a better bound. We carried out numerical computations for eight networks with different topologies and found that the cost of a GSN is only a fraction (from 52% to 89%) more than that of a static survivable network.
OLED emission zone measurement with high accuracy
NASA Astrophysics Data System (ADS)
Danz, N.; MacCiarnain, R.; Michaelis, D.; Wehlus, T.; Rausch, A. F.; Wächter, C. A.; Reusch, T. C. G.
2013-09-01
Highly efficient state of the art organic light-emitting diodes (OLED) comprise thin emitting layers with thicknesses in the order of 10 nm. The spatial distribution of the photon generation rate, i.e. the profile of the emission zone, inside these layers is of interest for both device efficiency analysis and characterization of charge recombination processes. It can be accessed experimentally by reverse simulation of far-field emission pattern measurements. Such a far-field pattern is the sum of individual emission patterns associated with the corresponding positions inside the active layer. Based on rigorous electromagnetic theory the relation between far-field pattern and emission zone is modeled as a linear problem. This enables a mathematical analysis to be applied to the cases of single and double emitting layers in the OLED stack as well as to pattern measurements in air or inside the substrate. From the results, guidelines for optimum emitter - cathode separation and for selecting the best experimental approach are obtained. Limits for the maximum spatial resolution can be derived.
On the breakdown of the curvature perturbation ζ during reheating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Merve Tarman; Kaya, Ali; Kutluk, Emine Seyma, E-mail: merve.tarman@boun.edu.tr, E-mail: ali.kaya@boun.edu.tr, E-mail: seymakutluk@gmail.com
2015-04-01
It is known that in single scalar field inflationary models the standard curvature perturbation ζ, which is supposedly conserved at superhorizon scales, diverges during reheating at times 0φ-dot =, i.e. when the time derivative of the background inflaton field vanishes. This happens because the comoving gauge 0φ=, where φ denotes the inflaton perturbation, breaks down when 0φ-dot =. The issue is usually bypassed by averaging out the inflaton oscillations but strictly speaking the evolution of ζ is ill posed mathematically. We solve this problem in the free theory by introducing a family of smooth gauges that still eliminates the inflatonmore » fluctuation φ in the Hamiltonian formalism and gives a well behaved curvature perturbation ζ, which is now rigorously conserved at superhorizon scales. At the linearized level, this conserved variable can be used to unambiguously propagate the inflationary perturbations from the end of inflation to subsequent epochs. We discuss the implications of our results for the inflationary predictions.« less
The Madelung Picture as a Foundation of Geometric Quantum Theory
NASA Astrophysics Data System (ADS)
Reddiger, Maik
2017-10-01
Despite its age, quantum theory still suffers from serious conceptual difficulties. To create clarity, mathematical physicists have been attempting to formulate quantum theory geometrically and to find a rigorous method of quantization, but this has not resolved the problem. In this article we argue that a quantum theory recursing to quantization algorithms is necessarily incomplete. To provide an alternative approach, we show that the Schrödinger equation is a consequence of three partial differential equations governing the time evolution of a given probability density. These equations, discovered by Madelung, naturally ground the Schrödinger theory in Newtonian mechanics and Kolmogorovian probability theory. A variety of far-reaching consequences for the projection postulate, the correspondence principle, the measurement problem, the uncertainty principle, and the modeling of particle creation and annihilation are immediate. We also give a speculative interpretation of the equations following Bohm, Vigier and Tsekov, by claiming that quantum mechanical behavior is possibly caused by gravitational background noise.
On the breakdown of the curvature perturbation ζ during reheating
NASA Astrophysics Data System (ADS)
Tarman Algan, Merve; Kaya, Ali; Seyma Kutluk, Emine
2015-04-01
It is known that in single scalar field inflationary models the standard curvature perturbation ζ, which is supposedly conserved at superhorizon scales, diverges during reheating at times 0dot phi=, i.e. when the time derivative of the background inflaton field vanishes. This happens because the comoving gauge 0varphi=, where varphi denotes the inflaton perturbation, breaks down when 0dot phi=. The issue is usually bypassed by averaging out the inflaton oscillations but strictly speaking the evolution of ζ is ill posed mathematically. We solve this problem in the free theory by introducing a family of smooth gauges that still eliminates the inflaton fluctuation varphi in the Hamiltonian formalism and gives a well behaved curvature perturbation ζ, which is now rigorously conserved at superhorizon scales. At the linearized level, this conserved variable can be used to unambiguously propagate the inflationary perturbations from the end of inflation to subsequent epochs. We discuss the implications of our results for the inflationary predictions.
Pattern formation in mass conserving reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Brauns, Fridtjof; Halatek, Jacob; Frey, Erwin
We present a rigorous theoretical framework able to generalize and unify pattern formation for quantitative mass conserving reaction-diffusion models. Mass redistribution controls chemical equilibria locally. Separation of diffusive mass redistribution on the level of conserved species provides a general mathematical procedure to decompose complex reaction-diffusion systems into effectively independent functional units, and to reveal the general underlying bifurcation scenarios. We apply this framework to Min protein pattern formation and identify the mechanistic roles of both involved protein species. MinD generates polarity through phase separation, whereas MinE takes the role of a control variable regulating the existence of MinD phases. Hence, polarization and not oscillations is the generic core dynamics of Min proteins in vivo. This establishes an intrinsic mechanistic link between the Min system and a broad class of intracellular pattern forming systems based on bistability and phase separation (wave-pinning). Oscillations are facilitated by MinE redistribution and can be understood mechanistically as relaxation oscillations of the polarization direction.
Reduction of uncertainty in global black carbon direct radiative forcing constrained by observations
NASA Astrophysics Data System (ADS)
Wang, R.; Balkanski, Y.; Boucher, O.; Ciais, P.; Schuster, G. L.; Chevallier, F.; Samset, B. H.; Valari, M.; Liu, J.; Tao, S.
2017-12-01
Black carbon (BC) absorbs sunlight and contributes to global warming. However, the size of this effect, namely the direct radiative forcing (DRF), ranges from +0.1 to +1.0 W m-2, largely due to discrepancies between modeled and observed BC radiation absorption. Studies that adjusted emissions to correct biases of models resulted in a revised upward estimate of the BC DRF. However, the observation-based BC RF was not optimized against observations in a rigorous mathematical manner, because uncertainties in emissions and the representativeness errors due to use of coarse-resolution models were not fully assessed. Here we simulated the absorption of solar radiation by BC from all sources at the 10-km resolution by combining a nested aerosol model with a downscaling method. The normalized mean bias in BC radiation absorption was reduced from -51% to -24% in Asia and from -57% to -50% elsewhere. We applied a Bayesian method that account for model, representativeness and observational uncertainties to estimate the BC RF and its uncertainty. Using the high-resolution model reduces uncertainty in BC DRF from -101%/+152% to -70%/+71% over Asia and from -83%/+108% to -64%/+68% over other continental regions. We derived an observation-based BC DRF of 0.61 Wm-2 (0.16 to 1.40 as 90% confidence) as our best estimate.
Methods in Symbolic Computation and p-Adic Valuations of Polynomials
NASA Astrophysics Data System (ADS)
Guan, Xiao
Symbolic computation has widely appear in many mathematical fields such as combinatorics, number theory and stochastic processes. The techniques created in the area of experimental mathematics provide us efficient ways of symbolic computing and verification of complicated relations. Part I consists of three problems. The first one focuses on a unimodal sequence derived from a quartic integral. Many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocal of Catalan number. It springs from the closed form given by Mathematica. Furthermore, three methods in special functions are used to justify this result. The third issue addresses the closed form solutions for the moments of products of generalized elliptic integrals , which combines the experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f( n) indexed in positive integers, the package developed in Mathematica will create certain tree structure following a couple of rules. The evolution of such trees are studied both rigorously and experimentally from the view of field extension, nonparametric statistics and random matrix.
ERIC Educational Resources Information Center
Yilmaz, Suha; Tekin-Dede, Ayse
2016-01-01
Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…
Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches
ERIC Educational Resources Information Center
Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem
2014-01-01
Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…
ERIC Educational Resources Information Center
Schwerdtfeger, Sara
2017-01-01
This study examined the differences in knowledge of mathematical modeling between a group of elementary preservice teachers and a group of elementary inservice teachers. Mathematical modeling has recently come to the forefront of elementary mathematics classrooms because of the call to add mathematical modeling tasks in mathematics classes through…
A Case Study of Teachers' Development of Well-Structured Mathematical Modelling Activities
ERIC Educational Resources Information Center
Stohlmann, Micah; Maiorca, Cathrine; Allen, Charlie
2017-01-01
This case study investigated how three teachers developed mathematical modelling activities integrated with content standards through participation in a course on mathematical modelling. The class activities involved experiencing a mathematical modelling activity, reading and rating example mathematical modelling activities, reading articles about…
Connections between the Sznajd model with general confidence rules and graph theory
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2012-10-01
The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
Mathematical Modelling Approach in Mathematics Education
ERIC Educational Resources Information Center
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" topic is especially important for examinations such as PISA which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
Treatment of charge singularities in implicit solvent models.
Geng, Weihua; Yu, Sining; Wei, Guowei
2007-09-21
This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing as the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 A for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.
Inferring the source of evaporated waters using stable H and O isotopes
NASA Astrophysics Data System (ADS)
Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.
2017-12-01
Stable isotope ratios of H and O are widely used identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including potential for bias from a commonly used regression-based approach for EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that data for most lakes is consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
Approximate direct georeferencing in national coordinates
NASA Astrophysics Data System (ADS)
Legat, Klaus
Direct georeferencing has gained an increasing importance in photogrammetry and remote sensing. Thereby, the parameters of exterior orientation (EO) of an image sensor are determined by GPS/INS, yielding results in a global geocentric reference frame. Photogrammetric products like digital terrain models or orthoimages, however, are often required in national geodetic datums and mapped by national map projections, i.e., in "national coordinates". As the fundamental mathematics of photogrammetry is based on Cartesian coordinates, the scene restitution is often performed in a Cartesian frame located at some central position of the image block. The subsequent transformation to national coordinates is a standard problem in geodesy and can be done in a rigorous manner-at least if the formulas of the map projection are rigorous. Drawbacks of this procedure include practical deficiencies related to the photogrammetric processing as well as the computational cost of transforming the whole scene. To avoid these problems, the paper pursues an alternative processing strategy where the EO parameters are transformed prior to the restitution. If only this transition was done, however, the scene would be systematically distorted. The reason is that the national coordinates are not Cartesian due to the earth curvature and the unavoidable length distortion of map projections. To settle these distortions, several corrections need to be applied. These are treated in detail for both passive and active imaging. Since all these corrections are approximations only, the resulting technique is termed "approximate direct georeferencing". Still, the residual distortions are usually very low as is demonstrated by simulations, rendering the technique an attractive approach to direct georeferencing.
Treatment of charge singularities in implicit solvent models
NASA Astrophysics Data System (ADS)
Geng, Weihua; Yu, Sining; Wei, Guowei
2007-09-01
This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing as the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.
ERIC Educational Resources Information Center
Lowe, James; Carter, Merilyn; Cooper, Tom
2018-01-01
Mathematical models are conceptual processes that use mathematics to describe, explain, and/or predict the behaviour of complex systems. This article is written for teachers of mathematics in the junior secondary years (including out-of-field teachers of mathematics) who may be unfamiliar with mathematical modelling, to explain the steps involved…
NASA Astrophysics Data System (ADS)
Shahbari, Juhaina Awawdeh
2018-07-01
The current study examines whether the engagement of mathematics teachers in modelling activities and subsequent changes in their conceptions about these activities affect their beliefs about mathematics. The sample comprised 52 mathematics teachers working in small groups in four modelling activities. The data were collected from teachers' Reports about features of each activity, interviews and questionnaires on teachers' beliefs about mathematics. The findings indicated changes in teachers' conceptions about the modelling activities. Most teachers referred to the first activity as a mathematical problem but emphasized only the mathematical notions or the mathematical operations in the modelling process; changes in their conceptions were gradual. Most of the teachers referred to the fourth activity as a mathematical problem and emphasized features of the whole modelling process. The results of the interviews indicated that changes in the teachers' conceptions can be attributed to structure of the activities, group discussions, solution paths and elicited models. These changes about modelling activities were reflected in teachers' beliefs about mathematics. The quantitative findings indicated that the teachers developed more constructive beliefs about mathematics after engagement in the modelling activities and that the difference was significant, however there was no significant difference regarding changes in their traditional beliefs.
Structural efficiency studies of corrugated compression panels with curved caps and beaded webs
NASA Technical Reports Server (NTRS)
Davis, R. C.; Mills, C. T.; Prabhakaran, R.; Jackson, L. R.
1984-01-01
Curved cross-sectional elements are employed in structural concepts for minimum-mass compression panels. Corrugated panel concepts with curved caps and beaded webs are optimized by using a nonlinear mathematical programming procedure and a rigorous buckling analysis. These panel geometries are shown to have superior structural efficiencies compared with known concepts published in the literature. Fabrication of these efficient corrugation concepts became possible by advances made in the art of superplastically forming of metals. Results of the mass optimization studies of the concepts are presented as structural efficiency charts for axial compression.
2016-01-01
Information is a precise concept that can be defined mathematically, but its relationship to what we call ‘knowledge’ is not always made clear. Furthermore, the concepts ‘entropy’ and ‘information’, while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology. PMID:26857663
Quantum probability and quantum decision-making.
Yukalov, V I; Sornette, D
2016-01-13
A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and also for non-commuting observables in addition to commutative observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).
Gravitation. [Book on general relativity
NASA Technical Reports Server (NTRS)
Misner, C. W.; Thorne, K. S.; Wheeler, J. A.
1973-01-01
This textbook on gravitation physics (Einstein's general relativity or geometrodynamics) is designed for a rigorous full-year course at the graduate level. The material is presented in two parallel tracks in an attempt to divide key physical ideas from more complex enrichment material to be selected at the discretion of the reader or teacher. The full book is intended to provide competence relative to the laws of physics in flat space-time, Einstein's geometric framework for physics, applications with pulsars and neutron stars, cosmology, the Schwarzschild geometry and gravitational collapse, gravitational waves, experimental tests of Einstein's theory, and mathematical concepts of differential geometry.
Lectures on General Relativity, Cosmology and Quantum Black Holes
NASA Astrophysics Data System (ADS)
Ydri, Badis
2017-07-01
This book is a rigorous text for students in physics and mathematics requiring an introduction to the implications and interpretation of general relativity in areas of cosmology. Readers of this text will be well prepared to follow the theoretical developments in the field and undertake research projects as part of an MSc or PhD programme. This ebook contains interactive Q&A technology, allowing the reader to interact with the text and reveal answers to selected exercises posed by the author within the book. This feature may not function in all formats and on reading devices.
Experimental Demonstration of Observability and Operability of Robustness of Coherence
NASA Astrophysics Data System (ADS)
Zheng, Wenqiang; Ma, Zhihao; Wang, Hengyan; Fei, Shao-Ming; Peng, Xinhua
2018-06-01
Quantum coherence is an invaluable physical resource for various quantum technologies. As a bona fide measure in quantifying coherence, the robustness of coherence (ROC) is not only mathematically rigorous, but also physically meaningful. We experimentally demonstrate the witness-observable and operational feature of the ROC in a multiqubit nuclear magnetic resonance system. We realize witness measurements by detecting the populations of quantum systems in one trial. The approach may also apply to physical systems compatible with ensemble or nondemolition measurements. Moreover, we experimentally show that the ROC quantifies the advantage enabled by a quantum state in a phase discrimination task.
Domingo-Félez, Carlos; Pellicer-Nàcher, Carles; Petersen, Morten S; Jensen, Marlene M; Plósz, Benedek G; Smets, Barth F
2017-01-01
Nitrous oxide (N 2 O), a by-product of biological nitrogen removal during wastewater treatment, is produced by ammonia-oxidizing bacteria (AOB) and heterotrophic denitrifying bacteria (HB). Mathematical models are used to predict N 2 O emissions, often including AOB as the main N 2 O producer. Several model structures have been proposed without consensus calibration procedures. Here, we present a new experimental design that was used to calibrate AOB-driven N 2 O dynamics of a mixed culture. Even though AOB activity was favoured with respect to HB, oxygen uptake rates indicated HB activity. Hence, rigorous experimental design for calibration of autotrophic N 2 O production from mixed cultures is essential. The proposed N 2 O production pathways were examined using five alternative process models confronted with experimental data inferred. Individually, the autotrophic and heterotrophic denitrification pathway could describe the observed data. In the best-fit model, which combined two denitrification pathways, the heterotrophic was stronger than the autotrophic contribution to N 2 O production. Importantly, the individual contribution of autotrophic and heterotrophic to the total N 2 O pool could not be unambiguously elucidated solely based on bulk N 2 O measurements. Data on NO would increase the practical identifiability of N 2 O production pathways. Biotechnol. Bioeng. 2017;114: 132-140. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael S. Zhdanov
2005-03-09
The research during the first year of the project was focused on developing the foundations of a new geophysical technique for mineral exploration and mineral discrimination, based on electromagnetic (EM) methods. The proposed new technique is based on examining the spectral induced polarization effects in electromagnetic data using modern distributed acquisition systems and advanced methods of 3-D inversion. The analysis of IP phenomena is usually based on models with frequency dependent complex conductivity distribution. One of the most popular is the Cole-Cole relaxation model. In this progress report we have constructed and analyzed a different physical and mathematical model ofmore » the IP effect based on the effective-medium theory. We have developed a rigorous mathematical model of multi-phase conductive media, which can provide a quantitative tool for evaluation of the type of mineralization, using the conductivity relaxation model parameters. The parameters of the new conductivity relaxation model can be used for discrimination of the different types of rock formations, which is an important goal in mineral exploration. The solution of this problem requires development of an effective numerical method for EM forward modeling in 3-D inhomogeneous media. During the first year of the project we have developed a prototype 3-D IP modeling algorithm using the integral equation (IP) method. Our IE forward modeling code INTEM3DIP is based on the contraction IE method, which improves the convergence rate of the iterative solvers. This code can handle various types of sources and receivers to compute the effect of a complex resistivity model. We have tested the working version of the INTEM3DIP code for computer simulation of the IP data for several models including a southwest US porphyry model and a Kambalda-style nickel sulfide deposit. The numerical modeling study clearly demonstrates how the various complex resistivity models manifest differently in the observed EM data. These modeling studies lay a background for future development of the IP inversion method, directed at determining the electrical conductivity and the intrinsic chargeability distributions, as well as the other parameters of the relaxation model simultaneously. The new technology envisioned in this proposal, will be used for the discrimination of different rocks, and in this way will provide an ability to distinguish between uneconomic mineral deposits and the location of zones of economic mineralization and geothermal resources.« less
The 24-Hour Mathematical Modeling Challenge
ERIC Educational Resources Information Center
Galluzzo, Benjamin J.; Wendt, Theodore J.
2015-01-01
Across the mathematics curriculum there is a renewed emphasis on applications of mathematics and on mathematical modeling. Providing students with modeling experiences beyond the ordinary classroom setting remains a challenge, however. In this article, we describe the 24-hour Mathematical Modeling Challenge, an extracurricular event that exposes…
Modelling dynamic changes in blood flow and volume in the cerebral vasculature.
Payne, S J; El-Bouri, W K
2018-08-01
The cerebral microvasculature plays a key role in the transport of blood and the delivery of nutrients to the cells that perform brain function. Although recent advances in experimental imaging techniques mean that its structure and function can be interrogated to very small length scales, allowing individual vessels to be mapped to a fraction of 1 μm, these techniques currently remain confined to animal models. In-vivo human data can only be obtained at a much coarser length scale, of order 1 mm, meaning that mathematical models of the microvasculature play a key role in interpreting flow and metabolism data. However, there are close to 10,000 vessels even within a single voxel of size 1 mm 3 . Given the number of vessels present within a typical voxel and the complexity of the governing equations for flow and volume changes, it is computationally challenging to solve these in full, particularly when considering dynamic changes, such as those found in response to neural activation. We thus consider here the governing equations and some of the simplifications that have been proposed in order more rigorously to justify in what generations of blood vessels these approximations are valid. We show that two approximations (neglecting the advection term and assuming a quasi-steady state solution for blood volume) can be applied throughout the cerebral vasculature and that two further approximations (a simple first order differential relationship between inlet and outlet flows and inlet and outlet pressures, and matching of static pressure at nodes) can be applied in vessels smaller than approximately 1 mm in diameter. We then show how these results can be applied in solving flow fields within cerebral vascular networks providing a simplified yet rigorous approach to solving dynamic flow fields and compare the results to those obtained with alternative approaches. We thus provide a framework to model cerebral blood flow and volume within the cerebral vasculature that can be used, particularly at sub human imaging length scales, to provide greater insight into the behaviour of blood flow and volume in the cerebral vasculature. Copyright © 2018 Elsevier Inc. All rights reserved.
On the convergence of the coupled-wave approach for lamellar diffraction gratings
NASA Technical Reports Server (NTRS)
Li, Lifeng; Haggans, Charles W.
1992-01-01
Among the many existing rigorous methods for analyzing diffraction of electromagnetic waves by diffraction gratings, the coupled-wave approach stands out because of its versatility and simplicity. It can be applied to volume gratings and surface relief gratings, and its numerical implementation is much simpler than others. In addition, its predictions were experimentally validated in several cases. These facts explain the popularity of the coupled-wave approach among many optical engineers in the field of diffractive optics. However, a comprehensive analysis of the convergence of the model predictions has never been presented, although several authors have recently reported convergence difficulties with the model when it is used for metallic gratings in TM polarization. Herein, three points are made: (1) in the TM case, the coupled-wave approach converges much slower than the modal approach of Botten et al; (2) the slow convergence is caused by the use of Fourier expansions for the permittivity and the fields in the grating region; and (3) is manifested by the slow convergence of the eigenvalues and the associated modal fields. The reader is assumed to be familiar with the mathematical formulations of the coupled-wave approach and the modal approach.
A Theoretical Approach to Understanding Population Dynamics with Seasonal Developmental Durations
NASA Astrophysics Data System (ADS)
Lou, Yijun; Zhao, Xiao-Qiang
2017-04-01
There is a growing body of biological investigations to understand impacts of seasonally changing environmental conditions on population dynamics in various research fields such as single population growth and disease transmission. On the other side, understanding the population dynamics subject to seasonally changing weather conditions plays a fundamental role in predicting the trends of population patterns and disease transmission risks under the scenarios of climate change. With the host-macroparasite interaction as a motivating example, we propose a synthesized approach for investigating the population dynamics subject to seasonal environmental variations from theoretical point of view, where the model development, basic reproduction ratio formulation and computation, and rigorous mathematical analysis are involved. The resultant model with periodic delay presents a novel term related to the rate of change of the developmental duration, bringing new challenges to dynamics analysis. By investigating a periodic semiflow on a suitably chosen phase space, the global dynamics of a threshold type is established: all solutions either go to zero when basic reproduction ratio is less than one, or stabilize at a positive periodic state when the reproduction ratio is greater than one. The synthesized approach developed here is applicable to broader contexts of investigating biological systems with seasonal developmental durations.
Rotation and anisotropy of galaxies revisited
NASA Astrophysics Data System (ADS)
Binney, James
2005-11-01
The use of the tensor virial theorem (TVT) as a diagnostic of anisotropic velocity distributions in galaxies is revisited. The TVT provides a rigorous global link between velocity anisotropy, rotation and shape, but the quantities appearing in it are not easily estimated observationally. Traditionally, use has been made of a centrally averaged velocity dispersion and the peak rotation velocity. Although this procedure cannot be rigorously justified, tests on model galaxies show that it works surprisingly well. With the advent of integral-field spectroscopy it is now possible to establish a rigorous connection between the TVT and observations. The TVT is reformulated in terms of sky-averages, and the new formulation is tested on model galaxies.
Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A
2015-11-01
The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; ...
2013-01-01
This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of constitutive behavior of engineering materials, and it has improved data mining and forecasting capabilities of neural networks. This software has been used to train and simulate the finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reduplicates FEM results several orders ofmore » magnitude faster than the slow original FEM. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that output node size for materials parameter and input normalization method for strain data are critical train conditions in inverse network. The successful use of ANN modeling and simulator GUI has been validated through engineering neutron diffraction experimental data by determining constitutive laws of the real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.« less
Synthesis meets theory: Past, present and future of rational chemistry
NASA Astrophysics Data System (ADS)
Fianchini, Mauro
2017-11-01
Chemical synthesis has its roots in the empirical approach of alchemy. Nonetheless, the birth of the scientific method, the technical and technological advances (exploiting revolutionary discoveries in physics) and the improved management and sharing of growing databases greatly contributed to the evolution of chemistry from an esoteric ground into a mature scientific discipline during these last 400 years. Furthermore, thanks to the evolution of computational resources, platforms and media in the last 40 years, theoretical chemistry has added to the puzzle the final missing tile in the process of "rationalizing" chemistry. The use of mathematical models of chemical properties, behaviors and reactivities is nowadays ubiquitous in literature. Theoretical chemistry has been successful in the difficult task of complementing and explaining synthetic results and providing rigorous insights when these are otherwise unattainable by experiment. The first part of this review walks the reader through a concise historical overview on the evolution of the "model" in chemistry. Salient milestones have been highlighted and briefly discussed. The second part focuses more on the general description of recent state-of-the-art computational techniques currently used worldwide by chemists to produce synergistic models between theory and experiment. Each section is complemented by key-examples taken from the literature that illustrate the application of the technique discussed therein.
Rigor of cell fate decision by variable p53 pulses and roles of cooperative gene expression by p53
Murakami, Yohei; Takada, Shoji
2012-01-01
Upon DNA damage, the cell fate decision between survival and apoptosis is largely regulated by p53-related networks. Recent experiments found a series of discrete p53 pulses in individual cells, which led to the hypothesis that the cell fate decision upon DNA damage is controlled by counting the number of p53 pulses. Under this hypothesis, Sun et al. (2009) modeled the Bax activation switch in the apoptosis signal transduction pathway that can rigorously “count” the number of uniform p53 pulses. Based on experimental evidence, here we use variable p53 pulses with Sun et al.’s model to investigate how the variability in p53 pulses affects the rigor of the cell fate decision by the pulse number. Our calculations showed that the experimentally anticipated variability in the pulse sizes reduces the rigor of the cell fate decision. In addition, we tested the roles of the cooperativity in PUMA expression by p53, finding that lower cooperativity is plausible for more rigorous cell fate decision. This is because the variability in the p53 pulse height is more amplified in PUMA expressions with more cooperative cases. PMID:27857606
ERIC Educational Resources Information Center
Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.
2016-01-01
Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…
Annual Perspectives in Mathematics Education 2016: Mathematical Modeling and Modeling Mathematics
ERIC Educational Resources Information Center
Hirsch, Christian R., Ed.; McDuffie, Amy Roth, Ed.
2016-01-01
Mathematical modeling plays an increasingly important role both in real-life applications--in engineering, business, the social sciences, climate study, advanced design, and more--and within mathematics education itself. This 2016 volume of "Annual Perspectives in Mathematics Education" ("APME") focuses on this key topic from a…
Mathematical Modeling: A Bridge to STEM Education
ERIC Educational Resources Information Center
Kertil, Mahmut; Gurel, Cem
2016-01-01
The purpose of this study is making a theoretical discussion on the relationship between mathematical modeling and integrated STEM education. First of all, STEM education perspective and the construct of mathematical modeling in mathematics education is introduced. A review of literature is provided on how mathematical modeling literature may…
NASA Astrophysics Data System (ADS)
Khusna, H.; Heryaningsih, N. Y.
2018-01-01
The aim of this research was to examine mathematical modeling ability who learn mathematics by using SAVI approach. This research was a quasi-experimental research with non-equivalent control group designed by using purposive sampling technique. The population of this research was the state junior high school students in Lembang while the sample consisted of two class at 8th grade. The instrument used in this research was mathematical modeling ability. Data analysis of this research was conducted by using SPSS 20 by Windows. The result showed that students’ ability of mathematical modeling who learn mathematics by using SAVI approach was better than students’ ability of mathematical modeling who learn mathematics using conventional learning.
Towards a Credibility Assessment of Models and Simulations
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Green, Lawrence L.; Luckring, James M.; Morrison, Joseph H.; Tripathi, Ram K.; Zang, Thomas A.
2008-01-01
A scale is presented to evaluate the rigor of modeling and simulation (M&S) practices for the purpose of supporting a credibility assessment of the M&S results. The scale distinguishes required and achieved levels of rigor for a set of M&S elements that contribute to credibility including both technical and process measures. The work has its origins in an interest within NASA to include a Credibility Assessment Scale in development of a NASA standard for models and simulations.
ERIC Educational Resources Information Center
Zbiek, Rose Mary; Conner, Annamarie
2006-01-01
Views of mathematical modeling in empirical, expository, and curricular references typically capture a relationship between real-world phenomena and mathematical ideas from the perspective that competence in mathematical modeling is a clear goal of the mathematics curriculum. However, we work within a curricular context in which mathematical…
An Investigation of Mathematical Modeling with Pre-Service Secondary Mathematics Teachers
ERIC Educational Resources Information Center
Thrasher, Emily Plunkett
2016-01-01
The goal of this thesis was to investigate and enhance our understanding of what occurs while pre-service mathematics teachers engage in a mathematical modeling unit that is broadly based upon mathematical modeling as defined by the Common Core State Standards for Mathematics (National Governors Association Center for Best Practices & Council…
Gradient Models in Molecular Biophysics: Progress, Challenges, Opportunities
Bardhan, Jaydeep P.
2014-01-01
In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g. molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding features such as nonlocal dielectric response, and nonlinearities resulting from dielectric saturation. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost forty years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The paper concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics. PMID:25505358
Gradient Models in Molecular Biophysics: Progress, Challenges, Opportunities.
Bardhan, Jaydeep P
2013-12-01
In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g. molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding features such as nonlocal dielectric response, and nonlinearities resulting from dielectric saturation. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost forty years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The paper concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics.
Gradient models in molecular biophysics: progress, challenges, opportunities
NASA Astrophysics Data System (ADS)
Bardhan, Jaydeep P.
2013-12-01
In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g., molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding nonlocal dielectric response. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain, and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost 40 years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The review concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics.
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Di Poto, Cristina; Pannell, Lewis K.; Mechref, Yehia; Wang, Yue; Ressom, Habtom W.
2013-01-01
Motivation: Liquid chromatography-mass spectrometry (LC-MS) has been widely used for profiling expression levels of biomolecules in various ‘-omic’ studies including proteomics, metabolomics and glycomics. Appropriate LC-MS data preprocessing steps are needed to detect true differences between biological groups. Retention time (RT) alignment, which is required to ensure that ion intensity measurements among multiple LC-MS runs are comparable, is one of the most important yet challenging preprocessing steps. Current alignment approaches estimate RT variability using either single chromatograms or detected peaks, but do not simultaneously take into account the complementary information embedded in the entire LC-MS data. Results: We propose a Bayesian alignment model for LC-MS data analysis. The alignment model provides estimates of the RT variability along with uncertainty measures. The model enables integration of multiple sources of information including internal standards and clustered chromatograms in a mathematically rigorous framework. We apply the model to LC-MS metabolomic, proteomic and glycomic data. The performance of the model is evaluated based on ground-truth data, by measuring correlation of variation, RT difference across runs and peak-matching performance. We demonstrate that Bayesian alignment model improves significantly the RT alignment performance through appropriate integration of relevant information. Availability and implementation: MATLAB code, raw and preprocessed LC-MS data are available at http://omics.georgetown.edu/alignLCMS.html Contact: hwr@georgetown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24013927
Reflective Modeling in Teacher Education.
ERIC Educational Resources Information Center
Shealy, Barry E.
This paper describes mathematical modeling activities from a secondary mathematics teacher education course taken by fourth-year university students. Experiences with mathematical modeling are viewed as important in helping teachers develop a more intuitive understanding of mathematics, generate and evaluate mathematical interpretations, and…
Peer Review of EPA's Draft BMDS Document: Exponential ...
BMDS is one of the Agency's premier tools for estimating risk assessments, therefore the validity and reliability of its statistical models are of paramount importance. This page provides links to peer review of the BMDS applications and its models as they were developed and eventually released documenting the rigorous review process taken to provide the best science tools available for statistical modeling. This page provides links to peer review of the BMDS applications and its models as they were developed and eventually released documenting the rigorous review process taken to provide the best science tools available for statistical modeling.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsoulakis, Markos
2014-08-09
Our two key accomplishments in the first three years were towards the development of, (1) a mathematically rigorous and at the same time computationally flexible framework for parallelization of Kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of our work is the development of numerical methods which are at the same time both computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our keymore » accomplishment in the last year of the grant is that we started developing (3) pathwise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics and in particular of nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.« less
BOOK REVIEW: Vortex Methods: Theory and Practice
NASA Astrophysics Data System (ADS)
Cottet, G.-H.; Koumoutsakos, P. D.
2001-03-01
The book Vortex Methods: Theory and Practice presents a comprehensive account of the numerical technique for solving fluid flow problems. It provides a very nice balance between the theoretical development and analysis of the various techniques and their practical implementation. In fact, the presentation of the rigorous mathematical analysis of these methods instills confidence in their implementation. The book goes into some detail on the more recent developments that attempt to account for viscous effects, in particular the presence of viscous boundary layers in some flows of interest. The presentation is very readable, with most points illustrated with well-chosen examples, some quite sophisticated. It is a very worthy reference book that should appeal to a large body of readers, from those interested in the mathematical analysis of the methods to practitioners of computational fluid dynamics. The use of the book as a text is compromised by its lack of exercises for students, but it could form the basis of a graduate special topics course. Juan Lopez
Crisis in science: in search for new theoretical foundations.
Schroeder, Marcin J
2013-09-01
Recognition of the need for theoretical biology more than half century ago did not bring substantial progress in this direction. Recently, the need for new methods in science, including physics became clear. The breakthrough should be sought in answering the question "What is life?", which can help to explain the mechanisms of consciousness and consequently give insight into the way we comprehend reality. This could help in the search for new methods in the study of both physical and biological phenomena. However, to achieve this, new theoretical discipline will have to be developed with a very general conceptual framework and rigor of mathematical reasoning, allowing it to assume the leading role in science. Since its foundations are in the recognition of the role of life and consciousness in the epistemic process, it could be called biomathics. The prime candidates proposed here for being the fundamental concepts for biomathics are 'information' and 'information integration', with an appropriately general mathematical formalism. Copyright © 2013 Elsevier Ltd. All rights reserved.
Unique geologic insights from "non-unique" gravity and magnetic interpretation
Saltus, R.W.; Blakely, R.J.
2011-01-01
Interpretation of gravity and magnetic anomalies is mathematically non-unique because multiple theoretical solutions are always possible. The rigorous mathematical label of "nonuniqueness" can lead to the erroneous impression that no single interpretation is better in a geologic sense than any other. The purpose of this article is to present a practical perspective on the theoretical non-uniqueness of potential-field interpretation in geology. There are multiple ways to approach and constrain potential-field studies to produce significant, robust, and definitive results. The "non-uniqueness" of potential-field studies is closely related to the more general topic of scientific uncertainty in the Earth sciences and beyond. Nearly all results in the Earth sciences are subject to significant uncertainty because problems are generally addressed with incomplete and imprecise data. The increasing need to combine results from multiple disciplines into integrated solutions in order to address complex global issues requires special attention to the appreciation and communication of uncertainty in geologic interpretation.
A new mathematical formulation of the line-by-line method in case of weak line overlapping
NASA Technical Reports Server (NTRS)
Ishov, Alexander G.; Krymova, Natalie V.
1994-01-01
A rigorous mathematical proof is presented for multiline representation on the equivalent width of a molecular band which consists in the general case of n overlapping spectral lines. The multiline representation includes a principal term and terms of minor significance. The principal term is the equivalent width of the molecular band consisting of the same n nonoverlapping spectral lines. The terms of minor significance take into consideration the overlapping of two, three and more spectral lines. They are small in case of the weak overlapping of spectral lines in the molecular band. The multiline representation can be easily generalized for optically inhomogeneous gas media and holds true for combinations of molecular bands. If the band lines overlap weakly the standard formulation of line-by-line method becomes too labor-consuming. In this case the multiline representation permits line-by-line calculations to be performed more effectively. Other useful properties of the multiline representation are pointed out.
Primary School Pre-Service Mathematics Teachers' Views on Mathematical Modeling
ERIC Educational Resources Information Center
Karali, Diren; Durmus, Soner
2015-01-01
The current study aimed to identify the views of pre-service teachers, who attended a primary school mathematics teaching department but did not take mathematical modeling courses. The mathematical modeling activity used by the pre-service teachers was developed with regards to the modeling activities utilized by Lesh and Doerr (2003) in their…
A Regional Seismic Travel Time Model for North America
2010-09-01
velocity at the Moho, the mantle velocity gradient, and the average crustal velocity. After tomography across Eurasia, rigorous tests find that Pn...velocity gradient, and the average crustal velocity. After tomography across Eurasia rigorous tests find that Pn travel time residuals are reduced...and S-wave velocity in the crustal layers and in the upper mantle. A good prior model is essential because the RSTT tomography inversion is invariably
Topological Isomorphisms of Human Brain and Financial Market Networks
Vértes, Petra E.; Nicol, Ruth M.; Chapman, Sandra C.; Watkins, Nicholas W.; Robertson, Duncan A.; Bullmore, Edward T.
2011-01-01
Although metaphorical and conceptual connections between the human brain and the financial markets have often been drawn, rigorous physical or mathematical underpinnings of this analogy remain largely unexplored. Here, we apply a statistical and graph theoretic approach to the study of two datasets – the time series of 90 stocks from the New York stock exchange over a 3-year period, and the fMRI-derived time series acquired from 90 brain regions over the course of a 10-min-long functional MRI scan of resting brain function in healthy volunteers. Despite the many obvious substantive differences between these two datasets, graphical analysis demonstrated striking commonalities in terms of global network topological properties. Both the human brain and the market networks were non-random, small-world, modular, hierarchical systems with fat-tailed degree distributions indicating the presence of highly connected hubs. These properties could not be trivially explained by the univariate time series statistics of stock price returns. This degree of topological isomorphism suggests that brains and markets can be regarded broadly as members of the same family of networks. The two systems, however, were not topologically identical. The financial market was more efficient and more modular – more highly optimized for information processing – than the brain networks; but also less robust to systemic disintegration as a result of hub deletion. We conclude that the conceptual connections between brains and markets are not merely metaphorical; rather these two information processing systems can be rigorously compared in the same mathematical language and turn out often to share important topological properties in common to some degree. There will be interesting scientific arbitrage opportunities in further work at the graph-theoretically mediated interface between systems neuroscience and the statistical physics of financial markets. PMID:22007161
Point- and line-based transformation models for high resolution satellite image rectification
NASA Astrophysics Data System (ADS)
Abd Elrahman, Ahmed Mohamed Shaker
Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli
2017-05-01
This research aims to achieve some purposes such as: to know whether mathematical problem solving ability of students who have learned mathematics using Multiple Intelligences based teaching model is higher than the student who have learned mathematics using cooperative learning; to know the improvement of the mathematical problem solving ability of the student who have learned mathematics using Multiple Intelligences based teaching model., to know the improvement of the mathematical problem solving ability of the student who have learned mathematics using cooperative learning; to know the attitude of the students to Multiple Intelligences based teaching model. The method employed here is quasi-experiment which is controlled by pre-test and post-test. The population of this research is all of VII grade in SMP Negeri 14 Bandung even-term 2013/2014, later on two classes of it were taken for the samples of this research. A class was taught using Multiple Intelligences based teaching model and the other one was taught using cooperative learning. The data of this research were gotten from the test in mathematical problem solving, scale questionnaire of the student attitudes, and observation. The results show the mathematical problem solving of the students who have learned mathematics using Multiple Intelligences based teaching model learning is higher than the student who have learned mathematics using cooperative learning, the mathematical problem solving ability of the student who have learned mathematics using cooperative learning and Multiple Intelligences based teaching model are in intermediate level, and the students showed the positive attitude in learning mathematics using Multiple Intelligences based teaching model. As for the recommendation for next author, Multiple Intelligences based teaching model can be tested on other subject and other ability.
ERIC Educational Resources Information Center
Mumcu, Hayal Yavuz
2016-01-01
The purpose of this theoretical study is to explore the relationships between the concepts of using mathematics in the daily life, mathematical applications, mathematical modelling, and mathematical literacy. As these concepts are generally taken as independent concepts in the related literature, they are confused with each other and it becomes…
ERIC Educational Resources Information Center
Cetinkaya, Bulent; Kertil, Mahmut; Erbas, Ayhan Kursat; Korkmaz, Himmet; Alacaci, Cengiz; Cakiroglu, Erdinc
2016-01-01
Adopting a multitiered design-based research perspective, this study examines pre-service secondary mathematics teachers' developing conceptions about (a) the nature of mathematical modeling in simulations of "real life" problem solving, and (b) pedagogical principles and strategies needed to teach mathematics through modeling. Unlike…
Evolution of Mathematics Teachers' Pedagogical Knowledge When They Are Teaching through Modeling
ERIC Educational Resources Information Center
Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat; Alacaci, Cengiz; Cakiroglu, Erdinc; Cetinkaya, Bulent
2017-01-01
Use of mathematical modeling in mathematics education has been receiving significant attention as a way to develop students' mathematical knowledge and skills. As effective use of modeling in classes depends on the competencies of teachers we need to know more about the nature of teachers' knowledge to use modeling in mathematics education and how…
ERIC Educational Resources Information Center
Horton, Robert M.; Leonard, William H.
2005-01-01
In science, inquiry is used as students explore important and interesting questions concerning the world around them. In mathematics, one contemporary inquiry approach is to create models that describe real phenomena. Creating mathematical models using spreadsheets can help students learn at deep levels in both science and mathematics, and give…
Fetisova, Z G
2004-01-01
In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues purposeful search in natural photosynthetic units (PSU) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling for the functioning of natural PSUs, we have shown that the aggregation of pigments of model light-harvesting antenna, being one of universal optimizing factors, furthermore allows controlling the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of light-harvesting antenna is biologically expedient.
Howard Brenner's Legacy for Biological Transport Processes
NASA Astrophysics Data System (ADS)
Nitsche, Johannes
2014-11-01
This talk discusses the manner in which Howard Brenner's theoretical contributions have had, and long will have, strong and direct impact on the understanding of transport processes occurring in biological systems. His early work on low Reynolds number resistance/mobility coefficients of arbitrarily shaped particles, and particles near walls and in pores, is an essential component of models of hindered diffusion through many types of membranes and tissues, and convective transport in microfluidic diagnostic systems. His seminal contributions to macrotransport (coarse-graining, homogenization) theory presaged the growing discipline of multiscale modeling. For biological systems they represent the key to infusing diffusion models of a wide variety of tissues with a sound basis in their microscopic structure and properties, often over a hierarchy of scales. Both scientific currents are illustrated within the concrete context of diffusion models of drug/chemical diffusion through the skin. This area of theory, which is key to transdermal drug development and risk assessment of chemical exposure, has benefitted very directly from Brenner's contributions. In this as in other areas, Brenner's physicochemical insight, mathematical virtuosity, drive for fully justified analysis free of ad hoc assumptions, quest for generality, and impeccable exposition, have consistently elevated the level of theoretical understanding and presentation. We close with anecdotes showing how his personal qualities and warmth helped to impart high standards of rigor to generations of grateful research students. Authors are Johannes M. Nitsche, Ludwig C. Nitsche and Gerald B. Kasting.
Mandibular kinematics represented by a non-orthogonal floating axis joint coordinate system.
Leader, Joseph K; Boston, J Robert; Debski, Richard E; Rudy, Thomas E
2003-02-01
There are many methods used to represent joint kinematics (e.g., roll, pitch, and yaw angles; instantaneous center of rotation; kinematic center; helical axis). Often in biomechanics internal landmarks are inferred from external landmarks. This study represents mandibular kinematics using a non-orthogonal floating axis joint coordinate system based on 3-D geometric models with parameters that are "clinician friendly" and mathematically rigorous. Kinematics data for two controls were acquired from passive fiducial markers attached to a custom dental clutch. The geometric models were constructed from MRI data. The superior point along the arc of the long axis of the condyle was used to define the coordinate axes. The kinematic data and geometric models were registered through fiducial markers visible during both protocols. The mean absolute maxima across the subjects for sagittal rotation, coronal rotation, axial rotation, medial-lateral translation, anterior-posterior translation, and inferior-superior translation were 34.10 degrees, 1.82 degrees, 1.14 degrees, 2.31, 21.07, and 6.95 mm, respectively. All the parameters, except for one subject's axial rotation, were reproducible across two motion recording sessions. There was a linear correlation between sagittal rotation and translation, the dominant motion plane, with approximately 1.5 degrees of rotation per millimeter of translation. The novel approach of combining the floating axis system with geometric models succinctly described mandibular kinematics with reproducible and clinician friendly parameters.
Rigorous simulations of a helical core fiber by the use of transformation optics formalism.
Napiorkowski, Maciej; Urbanczyk, Waclaw
2014-09-22
We report for the first time on rigorous numerical simulations of a helical-core fiber by using a full vectorial method based on the transformation optics formalism. We modeled the dependence of circular birefringence of the fundamental mode on the helix pitch and analyzed the effect of a birefringence increase caused by the mode displacement induced by a core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first order modes, including polarization and intensity distribution. Finally, we show that the use of the rigorous vectorial method allows to better predict the confinement loss of the guided modes compared to approximate methods based on equivalent in-plane bending models.
Mathematical Modeling and Pure Mathematics
ERIC Educational Resources Information Center
Usiskin, Zalman
2015-01-01
Common situations, like planning air travel, can become grist for mathematical modeling and can promote the mathematical ideas of variables, formulas, algebraic expressions, functions, and statistics. The purpose of this article is to illustrate how the mathematical modeling that is present in everyday situations can be naturally embedded in…
ERIC Educational Resources Information Center
Zeytun, Aysel Sen; Cetinkaya, Bulent; Erbas, Ayhan Kursat
2017-01-01
This paper investigates how prospective teachers develop mathematical models while they engage in modeling tasks. The study was conducted in an undergraduate elective course aiming to improve prospective teachers' mathematical modeling abilities, while enhancing their pedagogical knowledge for the integrating of modeling tasks into their future…
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence and we do point out the potential fields of application of the results. A binary one step reaction is invoked to derive a metric to asses the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first order Favre mean variables and the Reynolds averaged density variance, ⟨ρ2⟩ . We show that the normalized density variance, ⟨ρ2⟩ , reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog, the normalized mass fraction variance ⟨c2⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, c″ 2˜ , as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second order moments of the mass fraction, specific volume, and density fields. The central role of the density specific volume covariance ⟨ρ v ⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c2⟩ its Favre analog c″ 2˜ , and various second moments including ⟨ρ v ⟩ . For moment closure models that evolve ⟨ρ v ⟩ and not ⟨ρ2⟩ , we provide a novel expression for ⟨ρ2⟩ in terms of a rational function of ⟨ρ v ⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived analytic results relating several other second and third order moments and see coupling between odd and even order moments demonstrating a natural and inherent skewness in the mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in progress variable formulation).