ERIC Educational Resources Information Center
Petrilli, Salvatore John, Jr.
2009-01-01
Historians of mathematics considered the nineteenth century to be the Golden Age of mathematics. During this time period many areas of mathematics, such as algebra and geometry, were being placed on rigorous foundations. Another area of mathematics which experienced fundamental change was analysis. The drive for rigor in calculus began in 1797…
Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
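A minimal textbook-style illustration of structural non-identifiability (my example, not the paper's): consider a one-state model whose output depends on two rate constants only through their sum.

```latex
\[
\dot{x}(t) = -(k_1 + k_2)\,x(t), \qquad y(t) = x(t)
\;\;\Longrightarrow\;\;
y(t) = x(0)\,e^{-(k_1+k_2)t}.
\]
```

Any pair (k_1, k_2) with the same sum yields identical output, so the individual parameters are structurally unidentifiable while their sum is identifiable; distinctions of this kind are what the paper's framework makes precise.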
Rigorous Science: a How-To Guide.
Casadevall, Arturo; Fang, Ferric C
2016-11-08
Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word "rigor" is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. Copyright © 2016 Casadevall and Fang.
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-03-01
Reaching higher-order thinking skills (HOTS) requires mastery of conceptual understanding and strategic competence, the two basic components of HOTS. Rigorous Mathematical Thinking (RMT) is a unique realization of the cognitive conceptual construction approach, grounded in Feuerstein's theory of Mediated Learning Experience (MLE) and Vygotsky's sociocultural theory. This quasi-experimental study compared an experimental class taught with RMT as the learning method against a control class taught with Direct Learning (DL), the conventional learning activity, and examined whether the two learning models had different effects on the conceptual understanding and strategic competence of junior high school students. The data were analyzed using Multivariate Analysis of Variance (MANOVA), which showed a significant difference between the experimental and control classes on mathematics conceptual understanding and strategic competence considered jointly (Wilks' Λ = 0.84). Independent t-tests further showed significant differences between the two classes on both mathematical conceptual understanding and strategic competence. These results indicate that Rigorous Mathematical Thinking (RMT) had a positive impact on mathematics conceptual understanding and strategic competence.
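The abstract reports a two-group MANOVA summarized by Wilks' Λ but gives no analysis details. Below is a minimal sketch of how such a two-group, two-outcome MANOVA can be run in Python with statsmodels; the data frame and the column names (`conceptual`, `strategic`, `group`) are hypothetical, not the study's data.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical scores: two outcome measures for an RMT class and a DL class.
df = pd.DataFrame({
    "group":      ["RMT"] * 4 + ["DL"] * 4,
    "conceptual": [78, 85, 90, 74, 65, 70, 62, 68],
    "strategic":  [72, 80, 88, 70, 60, 66, 58, 64],
})

# Both dependent variables are modeled jointly against the group factor.
mv = MANOVA.from_formula("conceptual + strategic ~ group", data=df)
print(mv.mv_test())  # reports Wilks' lambda, Pillai's trace, etc.
```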
David Crighton, 1942-2000: a commentary on his career and his influence on aeroacoustic theory
NASA Astrophysics Data System (ADS)
Ffowcs Williams, John E.
David Crighton, a greatly admired figure in fluid mechanics, Head of the Department of Applied Mathematics and Theoretical Physics at Cambridge, and Master of Jesus College, Cambridge, died at the peak of his career. He had made important contributions to the theory of waves generated by unsteady flow. Crighton's work was always characterized by the application of rigorous mathematical approximations to fluid mechanical idealizations of practically relevant problems. At the time of his death, he was certainly the most influential British applied mathematical figure, and his former collaborators and students form a strong school that continues his special style of mathematical application. Rigorous analysis of well-posed aeroacoustical problems was transformed by David Crighton.
NASA Astrophysics Data System (ADS)
Parumasur, N.; Willie, R.
2008-09-01
We consider a simple finite-dimensional mathematical model of the interactions among blood cells, the HIV/AIDS virus, and the immune system, and examine the consistency of the equations with the real biomedical situation they model. A better understanding of a cure solution for the illness modeled by the finite-dimensional equations is given. This is accomplished through rigorous mathematical analysis and is reinforced by numerical analysis of models developed for real-life cases.
ERIC Educational Resources Information Center
Utah State Office of Education, 2011
2011-01-01
Utah has adopted more rigorous mathematics standards known as the Utah Mathematics Core Standards. They are the foundation of the mathematics curriculum for the State of Utah. The standards include the skills and understanding students need to succeed in college and careers. They include rigorous content and application of knowledge and reflect…
Science and Mathematics Advanced Placement Exams: Growth and Achievement over Time
ERIC Educational Resources Information Center
Judson, Eugene
2017-01-01
Rapid growth of Advanced Placement (AP) exams in the last 2 decades has been paralleled by national enthusiasm to promote availability and rigor of science, technology, engineering, and mathematics (STEM). Trends were examined in STEM AP to evaluate and compare growth and achievement. Analysis included individual STEM subjects and disaggregation…
NASA Astrophysics Data System (ADS)
Hamid, H.
2018-01-01
The purpose of this study is to analyze the improvement of students' mathematical critical thinking (CT) ability in a Real Analysis course taught with the Rigorous Teaching and Learning (RTL) model with informal argument, and to understand students' CT in relation to their initial mathematical ability (IMA). The study was conducted at a private university in the 2015/2016 academic year, using a quasi-experimental method with a pretest-posttest control group design. The participants were 83 students: 43 in the experimental group and 40 in the control group. The findings show that students in the experimental group outperformed students in the control group on mathematical CT ability at every IMA level (high, medium, low) in learning Real Analysis. In particular, among students of medium IMA, the improvement in mathematical CT ability of those exposed to the RTL model with informal argument was greater than that of students exposed to conventional instruction (CI). There was no interaction effect between learning model (RTL vs. CI) and IMA level (high, medium, low) on the improvement of mathematical CT ability. Finally, at every IMA level there was a significantly greater improvement on all indicators of mathematical CT ability among students exposed to the RTL model with informal argument than among students exposed to CI.
ERIC Educational Resources Information Center
Easey, Michael
2013-01-01
This paper explores the decline in boys' participation in post-compulsory rigorous mathematics using the perspectives of eight experienced teachers at an independent, boys' College located in Brisbane, Queensland. This study coincides with concerns regarding the decline in suitably qualified tertiary graduates with requisite mathematical skills…
Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS
NASA Astrophysics Data System (ADS)
Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.
2017-04-01
To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) that eliminates undesired disturbances was installed and tested in the third-generation synchrotron light source of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB depends greatly on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. A rigorous mathematical model is very useful for shortening the design time and improving the design performance of an FCPS. A rigorous mathematical model of a full-bridge FCPS in the FOFB of TPS, derived by the state-space averaging method, is therefore proposed in this paper. MATLAB/SIMULINK software is used to construct the proposed model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated through simulation. An FCPS prototype is realized to demonstrate the effectiveness of the proposed model. Simulation and experimental results show that the proposed mathematical model is helpful for selecting appropriate components to meet the accuracy requirements of an FCPS.
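The abstract names the state-space averaging method but gives no equations. Here is a minimal sketch of simulating a state-space averaged converter model in Python; the matrices describe a generic averaged full-bridge output filter stage with assumed L, C, and load values, not the actual TPS FCPS parameters.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Assumed averaged model of a full-bridge output filter stage:
# states x = [inductor current, capacitor voltage], input u = duty-weighted bus voltage.
L, C, R = 1e-3, 100e-6, 10.0   # hypothetical inductance, capacitance, load
A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])
B = np.array([[1.0 / L], [0.0]])
Cmat = np.array([[0.0, 1.0]])   # observe the output voltage
D = np.array([[0.0]])

sys = StateSpace(A, B, Cmat, D)
t = np.linspace(0, 0.05, 5000)
u = 24.0 * np.ones_like(t)      # step in the averaged input voltage
_, y, _ = lsim(sys, u, t)
print(f"settled output: {y[-1]:.2f} V")  # approaches 24 V at DC (unity DC gain)
```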
Student’s rigorous mathematical thinking based on cognitive style
NASA Astrophysics Data System (ADS)
Fitriyani, H.; Khasanah, U.
2017-12-01
The purpose of this research was to determine the rigorous mathematical thinking (RMT) of mathematics education students in solving math problems in terms of reflective and impulsive cognitive styles. The research used a descriptive qualitative approach. The subjects were four mathematics education students, one male and one female for each of the reflective and impulsive cognitive styles. Data were collected through a problem-solving test and interviews, and were analyzed using the Miles and Huberman model: data reduction, data presentation, and conclusion drawing. The results showed that the impulsive male subject used all three levels of cognitive function required for RMT, namely qualitative thinking, quantitative thinking with precision, and relational thinking, while the other three subjects were only able to use cognitive function at the qualitative thinking level of RMT. The impulsive male subject therefore had better RMT ability than the other three research subjects.
Mathematical Rigor vs. Conceptual Change: Some Early Results
NASA Astrophysics Data System (ADS)
Alexander, W. R.
2003-05-01
Results from two different pedagogical approaches to teaching introductory astronomy at the college level will be presented. The first is a descriptive, conceptually based approach that emphasizes conceptual change; this descriptive class is typically an elective for non-science majors. The other is a mathematically rigorous treatment that emphasizes problem solving and is designed to prepare students for further study in astronomy. The mathematically rigorous class is typically taken by science majors, for whom it also fulfills an elective science requirement. The Astronomy Diagnostic Test version 2 (ADT 2.0) was used as the assessment instrument, since its validity and reliability have been investigated by previous researchers. The ADT 2.0 was administered as both a pre-test and a post-test to both groups. Initial results show no significant difference between the two groups on the post-test. However, the descriptive class showed slightly greater improvement between pre- and post-testing than the mathematically rigorous course. Great care was taken to account for variables, including selection of text, class format, and instructor differences. Results indicate that the mathematically rigorous model does not improve conceptual understanding any better than the conceptual change model. Additional results indicate a gender bias in favor of males, similar to that measured by previous investigators. This research was funded by the College of Science and Mathematics at James Madison University.
Matter Gravitates, but Does Gravity Matter?
ERIC Educational Resources Information Center
Groetsch, C. W.
2011-01-01
The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…
Mathematics interventions for children and adolescents with Down syndrome: a research synthesis.
Lemons, C J; Powell, S R; King, S A; Davidson, K A
2015-08-01
Many children and adolescents with Down syndrome fail to achieve proficiency in mathematics. Researchers have suggested that tailoring interventions based on the behavioural phenotype may enhance efficacy. The research questions that guided this review were (1) what types of mathematics interventions have been empirically evaluated with children and adolescents with Down syndrome?; (2) do the studies demonstrate sufficient methodological rigor?; (3) is there evidence of efficacy for the evaluated mathematics interventions?; and (4) to what extent have researchers considered aspects of the behavioural phenotype in selecting, designing and/or implementing mathematics interventions for children and adolescents with Down syndrome? Nine studies published between 1989 and 2012 were identified for inclusion. Interventions predominantly focused on early mathematics skills and reported positive outcomes. However, no study met criteria for methodological rigor. Further, no authors explicitly considered the behavioural phenotype. Additional research using rigorous experimental designs is needed to evaluate the efficacy of mathematics interventions for children and adolescents with Down syndrome. Suggestions for considering the behavioural phenotype in future research are provided. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Historical mathematics in the French eighteenth century.
Richards, Joan L
2006-12-01
At least since the seventeenth century, the strange combination of epistemological certainty and ontological power that characterizes mathematics has made it a major focus of philosophical, social, and cultural negotiation. In the eighteenth century, all of these factors were at play as mathematical thinkers struggled to assimilate and extend the analysis they had inherited from the seventeenth century. A combination of educational convictions and historical assumptions supported a humanistic mathematics essentially defined by its flexibility and breadth. This mathematics was an expression of l'esprit humain, which was unfolding in a progressive historical narrative. The French Revolution dramatically altered the historical and educational landscapes that had supported this eighteenth-century approach, and within thirty years Augustin Louis Cauchy had radically reconceptualized and restructured mathematics to be rigorous rather than narrative.
ERIC Educational Resources Information Center
Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo
2012-01-01
A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
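For readers unfamiliar with the method, here is a minimal sketch of the classic Horn parallel-analysis procedure that the abstract treats as a heuristic: retain factors whose observed eigenvalues exceed the mean eigenvalues of same-sized random data. The data below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 8
# Simulated data with one strong common factor.
factor = rng.normal(size=(n, 1))
X = factor @ rng.normal(size=(1, p)) + rng.normal(size=(n, p))

obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Reference eigenvalues: average over many random data sets of the same shape.
ref = np.mean(
    [np.sort(np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)),
                                            rowvar=False)))[::-1]
     for _ in range(200)], axis=0)

n_factors = int(np.sum(obs_eig > ref))  # Horn's retention rule
print(f"factors retained: {n_factors}")
```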
Group Practices: A New Way of Viewing CSCL
ERIC Educational Resources Information Center
Stahl, Gerry
2017-01-01
The analysis of "group practices" can make visible the work of novices learning how to inquire in science or mathematics. These ubiquitous practices are invisibly taken for granted by adults, but can be observed and rigorously studied in adequate traces of online collaborative learning. Such an approach contrasts with traditional…
A Constructive Response to "Where Mathematics Comes From."
ERIC Educational Resources Information Center
Schiralli, Martin; Sinclair, Nathalie
2003-01-01
Reviews Lakoff and Nunez's book, "Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being" (2000), which provided many mathematics education researchers with a novel and startling perspective on mathematical thinking. Suggests that several of the book's flaws can be addressed through a more rigorous establishment of…
The role of a posteriori mathematics in physics
NASA Astrophysics Data System (ADS)
MacKinnon, Edward
2018-05-01
The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists traditionally ignore this rigorous mathematics. Physicists often rely on a posteriori math, a practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of such practice stems from a consideration of the role of phenomenological theories in classical physics and effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.
Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors
ERIC Educational Resources Information Center
Brogt, Erik; Draeger, John D.
2015-01-01
We discuss a model of academic rigor and apply this to a general education introductory astronomy course. We argue that even without a central tenet of professional astronomy, the use of mathematics, the course can still be considered academically rigorous when expectations, goals, assessments, and curriculum are properly aligned.
Useful Material Efficiency Green Metrics Problem Set Exercises for Lecture and Laboratory
ERIC Educational Resources Information Center
Andraos, John
2015-01-01
A series of pedagogical problem-set exercises is posed that illustrates the principles behind material efficiency green metrics and their application in developing a deeper understanding of reaction and synthesis plan analysis and strategies to optimize them. Rigorous, yet simple, mathematical proofs are given for some of the fundamental concepts,…
Teaching the Concept of Breakdown Point in Simple Linear Regression.
ERIC Educational Resources Information Center
Chan, Wai-Sum
2001-01-01
Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
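A minimal numeric sketch of the phenomenon the abstract describes: ordinary least squares has breakdown point zero, so a single corrupted point can move the fit arbitrarily, while a robust estimator such as Theil-Sen resists it. The data are synthetic.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(1)
x = np.arange(20.0)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
y[-1] = 500.0  # one extreme outlier

ols_slope = np.polyfit(x, y, 1)[0]   # least squares: dragged far off by one point
ts_slope = theilslopes(y, x)[0]      # median of pairwise slopes: barely affected
print(f"OLS slope: {ols_slope:.2f}, Theil-Sen slope: {ts_slope:.2f}")
```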
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
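A minimal sketch of the first-order propagation step described above: individual measurement uncertainties are pushed through a defining functional expression via its partial derivatives. The function and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def propagate(f, x, u):
    """First-order propagation for independent inputs:
    u_y^2 = sum_i (df/dx_i)^2 * u_i^2."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    grad = np.empty_like(x)
    for i in range(x.size):          # central finite differences
        h = 1e-6 * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)
    return np.sqrt(np.sum((grad * u) ** 2))

# Illustrative: dynamic pressure q = 0.5 * rho * V^2 from measured rho and V.
q_unc = propagate(lambda z: 0.5 * z[0] * z[1] ** 2, x=[1.2, 50.0], u=[0.01, 0.5])
print(f"u(q) = {q_unc:.1f} Pa")  # about 32.5 Pa for these inputs
```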
What We Do: A Multiple Case Study from Mathematics Coaches' Perspectives
ERIC Educational Resources Information Center
Kane, Barbara Ann
2013-01-01
Teachers face new challenges when they teach a more rigorous mathematics curriculum than one to which they are accustomed. The rationale for this particular study originated from watching teachers struggle with understanding mathematical content and pedagogical practices. Mathematics coaches can address teachers' concerns through sustained,…
NASA Astrophysics Data System (ADS)
Hidayat, D.; Nurlaelah, E.; Dahlan, J. A.
2017-09-01
Mathematical creative thinking and critical thinking are two abilities that need to be developed in the learning of mathematics. Efforts therefore need to be made to design learning that is capable of developing both capabilities. The purpose of this research is to examine the mathematical creative and critical thinking abilities of students taught with the rigorous mathematical thinking (RMT) approach and of students taught with the expository approach. This research was a quasi-experiment with a pretest-posttest control group design. The population comprised all grade-11 students in one senior high school in Bandung. The results showed that the achievement in mathematical creative and critical thinking abilities of students who received RMT was better than that of students who received the expository approach. The use of psychological tools and mediation, with the criteria of intentionality, reciprocity, and mediation of meaning, in RMT helps students develop the conditions for critical and creative processes. This achievement contributes to the development of integrated learning designs for students' critical and creative thinking processes.
ERIC Educational Resources Information Center
Jackson, Christa; Jong, Cindy
2017-01-01
Teaching mathematics for equity is critical because it provides opportunities for all students, especially those who have been traditionally marginalised, to learn mathematics that is rigorous and relevant to their lives. This article reports on our work, as mathematics teacher educators, on exposing and engaging 60 elementary preservice teachers…
Comparison of two gas chromatograph models and analysis of binary data
NASA Technical Reports Server (NTRS)
Keba, P. S.; Woodrow, P. T.
1972-01-01
The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary system data. The predictions of the two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model, exhibit the same weakness: an inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single-component behaviors is a first-order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.
Statistical Analysis of Protein Ensembles
NASA Astrophysics Data System (ADS)
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially since the vast majority of currently available methods rely heavily on heuristics. We propose an analysis framework that stems from topology, the field of mathematics that studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the sets of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
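A minimal sketch of the barcode pipeline the authors describe, using the third-party `ripser` and `persim` packages (an assumption; the paper does not name its software): compute persistence diagrams for two toy point clouds and compare them with a bottleneck distance.

```python
import numpy as np
from ripser import ripser          # persistent homology (assumed dependency)
from persim import bottleneck      # diagram comparison (assumed dependency)

rng = np.random.default_rng(2)
# Two toy "conformation ensembles": points on a circle vs. a noisy blob.
theta = rng.uniform(0, 2 * np.pi, 100)
ring = np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.05, (100, 2))
blob = rng.normal(0, 0.5, (100, 2))

# H1 diagrams: the ring carries one persistent loop, the blob only noise features.
dgm_ring = ripser(ring)["dgms"][1]
dgm_blob = ripser(blob)["dgms"][1]
print(f"bottleneck distance (H1): {bottleneck(dgm_ring, dgm_blob):.3f}")
```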
A Mathematical Evaluation of the Core Conductor Model
Clark, John; Plonsey, Robert
1966-01-01
This paper is a mathematical evaluation of the core conductor model where its three dimensionality is taken into account. The problem considered is that of a single, active, unmyelinated nerve fiber situated in an extensive, homogeneous, conducting medium. Expressions for the various core conductor parameters have been derived in a mathematically rigorous manner according to the principles of electromagnetic theory. The purpose of employing mathematical rigor in this study is to bring to light the inherent assumptions of the one dimensional core conductor model, providing a method of evaluating the accuracy of this linear model. Based on the use of synthetic squid axon data, the conclusion of this study is that the linear core conductor model is a good approximation for internal but not external parameters. PMID:5903155
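For context, the one-dimensional core conductor (cable) equation that the paper evaluates against the full three-dimensional treatment has the standard form below; this is the textbook statement, not necessarily the paper's notation.

```latex
\[
\frac{1}{r_i + r_e}\,\frac{\partial^2 V_m}{\partial x^2}
  \;=\; c_m \frac{\partial V_m}{\partial t} + i_{\mathrm{ion}},
\]
```

where V_m is the transmembrane potential, r_i and r_e are the intracellular and extracellular resistances per unit length, c_m is the membrane capacitance per unit length, and i_ion is the ionic membrane current per unit length.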
The impact of rigorous mathematical thinking as learning method toward geometry understanding
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-05-01
Reaching higher-order thinking skills requires mastery of conceptual understanding. Rigorous Mathematical Thinking (RMT) is a unique realization of the cognitive conceptual construction approach grounded in Feuerstein's theory of Mediated Learning Experience (MLE) and Vygotsky's sociocultural theory. This quasi-experimental research compared an experimental class taught with RMT as the learning method against a control class taught with Direct Learning (DL), the conventional learning activity, and examined whether the two learning methods had different effects on the conceptual understanding of junior high school students. The data were analyzed using an independent t-test, which showed a significant difference in mean geometry conceptual understanding between the experimental and control classes. Semi-structured interviews further revealed that students taught with RMT had a deeper conceptual understanding than students taught conventionally. These results indicate that RMT as a learning method has a positive impact on geometry conceptual understanding.
Secondary School Advanced Mathematics, Chapter 3, Formal Geometry. Student's Text.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This text is the second of five in the Secondary School Advanced Mathematics (SSAM) series which was designed to meet the needs of students who have completed the Secondary School Mathematics (SSM) program, and wish to continue their study of mathematics. This volume is devoted to a rigorous development of theorems in plane geometry from 22…
ERIC Educational Resources Information Center
Chard, David J.; Baker, Scott K.; Clarke, Ben; Jungjohann, Kathleen; Davis, Karen; Smolkowski, Keith
2008-01-01
Concern about poor mathematics achievement in U.S. schools has increased in recent years. In part, poor achievement may be attributed to a lack of attention to early instruction and missed opportunities to build on young children's early understanding of mathematics. This study examined the development and feasibility testing of a kindergarten…
ERIC Educational Resources Information Center
Gersten, Russell
2016-01-01
In this commentary, the author reflects on four studies that have greatly expanded the knowledge base on effective interventions in mathematics, and he provides four rigorous experimental studies of approaches for students likely to experience difficulties learning mathematics over a large grade-level span (pre-K to 4th grade). All of the…
ERIC Educational Resources Information Center
Seeley, Cathy
2004-01-01
This article addresses some important issues in mathematics instruction at the middle and secondary levels, including the structuring of a district's mathematics program; the choice of textbooks and use of calculators in the classroom; the need for more rigorous lesson planning practices; and the dangers of teaching to standardized tests rather…
Advanced Mathematical Thinking
ERIC Educational Resources Information Center
Dubinsky, Ed; McDonald, Michael A.; Edwards, Barbara S.
2005-01-01
In this article we propose the following definition for advanced mathematical thinking: Thinking that requires deductive and rigorous reasoning about mathematical notions that are not entirely accessible to us through our five senses. We argue that this definition is not necessarily tied to a particular kind of educational experience; nor is it…
Crystal Growth and Fluid Mechanics Problems in Directional Solidification
NASA Technical Reports Server (NTRS)
Tanveer, Saleh A.; Baker, Gregory R.; Foster, Michael R.
2001-01-01
Our work in directional solidification has been in the following areas: (1) dynamics of dendrites, including rigorous mathematical analysis of the resulting equations; (2) examination of the near-structurally unstable features of the mathematically related Hele-Shaw dynamics; (3) numerical studies of steady temperature distribution in a vertical Bridgman device; (4) numerical study of transient effects in a vertical Bridgman device; (5) asymptotic treatment of quasi-steady operation of a vertical Bridgman furnace for large Rayleigh numbers and small Biot number in 3D; and (6) understanding of the Mullins-Sekerka transition in a Bridgman device when fluid dynamics is accounted for.
Teaching Mathematics to Civil Engineers
ERIC Educational Resources Information Center
Sharp, J. J.; Moore, E.
1977-01-01
This paper outlines a technique for teaching a rigorous course in calculus and differential equations which stresses applicability of the mathematics to problems in civil engineering. The method involves integration of subject matter and team teaching. (SD)
Survey of computer programs for prediction of crash response and of its experimental validation
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1976-01-01
The author seeks to critically assess the potential of the mathematical and hybrid simulators which predict the post-impact response of transportation vehicles. A strictly rigorous numerical analysis of a complex phenomenon like crash may leave a lot to be desired with regard to the fidelity of the mathematical simulation. Hybrid simulations, on the other hand, which exploit experimentally observed features of deformations, appear to hold a lot of promise. MARC, ANSYS, NONSAP, DYCAST, ACTION, WHAM II and KRASH are among the simulators examined for their capabilities with regard to prediction of the post-impact response of vehicles. A review of these simulators reveals that much more by way of analysis capability may be desirable than what is currently available. NASA's crashworthiness testing program, in conjunction with similar programs of various other agencies, besides generating a large data base, will be equally useful in the validation of new mathematical concepts of nonlinear analysis and in the successful extension of other techniques in crashworthiness.
ERIC Educational Resources Information Center
Cobbs, Joyce Bernice
2014-01-01
The literature on minority student achievement indicates that Black students are underrepresented in advanced mathematics courses. Advanced mathematics courses offer students the opportunity to engage with challenging curricula, experience rigorous instruction, and interact with quality teachers. The middle school years are particularly…
Community College Pathways: A Descriptive Report of Summative Assessments and Student Learning
ERIC Educational Resources Information Center
Strother, Scott; Sowers, Nicole
2014-01-01
Carnegie's Community College Pathways (CCP) offers two pathways, Statway® and Quantway®, that reduce the amount of time required to complete developmental mathematics and earn college-level mathematics credit. The Pathways aim to improve student success in mathematics while maintaining rigorous content, pedagogy, and learning outcomes. It is…
Teacher Efficacy of High School Mathematics Co-Teachers
ERIC Educational Resources Information Center
Rimpola, Raquel C.
2011-01-01
High school mathematics inclusion classes help provide all students the access to rigorous curriculum. This study provides information about the teacher efficacy of high school mathematics co-teachers. It considers the influence of the amount of collaborative planning time on the efficacy of co-teachers. A quantitative research design was used,…
Mathematical Rigor in the Common Core
ERIC Educational Resources Information Center
Hull, Ted H.; Balka, Don S.; Miles, Ruth Harbin
2013-01-01
A whirlwind of activity surrounds the topic of teaching and learning mathematics. The driving forces are a combination of changes in assessment and advances in technology that are being spurred on by the introduction of content in the Common Core State Standards for Mathematical Practice. Although the issues are certainly complex, the same forces…
Reducible or irreducible? Mathematical reasoning and the ontological method.
Fisher, William P
2010-01-01
Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging quantitative value. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science is explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.
STEM Pathways: Examining Persistence in Rigorous Math and Science Course Taking
NASA Astrophysics Data System (ADS)
Ashford, Shetay N.; Lanehart, Rheta E.; Kersaint, Gladis K.; Lee, Reginald S.; Kromrey, Jeffrey D.
2016-12-01
From 2006 to 2012, Florida Statute §1003.4156 required middle school students to complete electronic personal education planners (ePEPs) before promotion to ninth grade. The ePEP helped them identify programs of study and required high school coursework to accomplish their postsecondary education and career goals. During the same period Florida required completion of the ePEP, Florida's Career and Professional Education Act stimulated a rapid increase in the number of statewide high school career academies. Students with interests in STEM careers created STEM-focused ePEPs and may have enrolled in STEM career academies, which offered a unique opportunity to improve their preparedness for the STEM workforce through the integration of rigorous academic and career and technical education courses. This study examined persistence of STEM-interested (i.e., those with expressed interest in STEM careers) and STEM-capable (i.e., those who completed at least Algebra 1 in eighth grade) students ( n = 11,248), including those enrolled in STEM career academies, in rigorous mathematics and science course taking in Florida public high schools in comparison with the national cohort of STEM-interested students to measure the influence of K-12 STEM education efforts in Florida. With the exception of multi-race students, we found that Florida's STEM-capable students had lower persistence in rigorous mathematics and science course taking than students in the national cohort from ninth to eleventh grade. We also found that participation in STEM career academies did not support persistence in rigorous mathematics and science courses, a prerequisite for success in postsecondary STEM education and careers.
Marghetis, Tyler; Núñez, Rafael
2013-04-01
The canonical history of mathematics suggests that the late 19th-century "arithmetization" of calculus marked a shift away from spatial-dynamic intuitions, grounding concepts in static, rigorous definitions. Instead, we argue that mathematicians, both historically and currently, rely on dynamic conceptualizations of mathematical concepts like continuity, limits, and functions. In this article, we present two studies of the role of dynamic conceptual systems in expert proof. The first is an analysis of co-speech gesture produced by mathematics graduate students while proving a theorem, which reveals a reliance on dynamic conceptual resources. The second is a cognitive-historical case study of an incident in 19th-century mathematics that suggests a functional role for such dynamism in the reasoning of the renowned mathematician Augustin Cauchy. Taken together, these two studies indicate that essential concepts in calculus that have been defined entirely in abstract, static terms are nevertheless conceptualized dynamically, in both contemporary and historical practice. Copyright © 2013 Cognitive Science Society, Inc.
A Rigorous Treatment of Energy Extraction from a Rotating Black Hole
NASA Astrophysics Data System (ADS)
Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.
2009-05-01
The Cauchy problem is considered for the scalar wave equation in the Kerr geometry. We prove that by choosing a suitable wave packet as initial data, one can extract energy from the black hole, thereby putting superradiance, the wave analogue of the Penrose process, into a rigorous mathematical framework. We quantify the maximal energy gain. We also compute the infinitesimal change of mass and angular momentum of the black hole, in agreement with Christodoulou's result for the Penrose process. The main mathematical tool is our previously derived integral representation of the wave propagator.
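Two standard Kerr-geometry relations behind this result, in G = c = 1 units (textbook statements, not quotations from the paper): the superradiance condition for a mode of frequency ω and azimuthal number k, and Christodoulou's bound on the black hole's mass and angular momentum changes.

```latex
\[
0 < \omega < k\,\Omega_H, \qquad \delta M \;\ge\; \Omega_H\,\delta J,
\]
```

Here Ω_H is the angular velocity of the horizon. Equality in the second relation corresponds to a reversible process, one that leaves the irreducible mass M_irr, defined by M_irr² = ½(M² + √(M⁴ − J²)), unchanged.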
ERIC Educational Resources Information Center
Jitendra, Asha K.; Petersen-Brown, Shawna; Lein, Amy E.; Zaslofsky, Anne F.; Kunkel, Amy K.; Jung, Pyung-Gang; Egan, Andrea M.
2015-01-01
This study examined the quality of the research base related to strategy instruction priming the underlying mathematical problem structure for students with learning disabilities and those at risk for mathematics difficulties. We evaluated the quality of methodological rigor of 18 group research studies using the criteria proposed by Gersten et…
ERIC Educational Resources Information Center
Jehopio, Peter J.; Wesonga, Ronald
2017-01-01
Background: The main objective of the study was to examine the relevance of engineering mathematics to the emerging industries. The level of abstraction, the standard of rigor, and the depth of theoretical treatment are necessary skills expected of a graduate engineering technician to be derived from mathematical knowledge. The question of whether…
Linking Literacy and Mathematics: The Support for Common Core Standards for Mathematical Practice
ERIC Educational Resources Information Center
Swanson, Mary; Parrott, Martha
2013-01-01
In a new era of Common Core State Standards (CCSS), teachers are expected to provide more rigorous, coherent, and focused curriculum at every grade level. To respond to the call for higher expectations across the curriculum and certainly within reading, writing, and mathematics, educators should work closely together to create mathematically…
An Informal History of Formal Proofs: From Vigor to Rigor?
ERIC Educational Resources Information Center
Galda, Klaus
1981-01-01
The history of formal mathematical proofs is sketched out, starting with the Greeks. Included in this document is a chronological guide to mathematics and the world, highlighting major events in the world and important mathematicians in corresponding times. (MP)
Dividing by Zero: Exploring Null Results in a Mathematics Professional Development Program
ERIC Educational Resources Information Center
Hill, Heather C.; Corey, Douglas Lyman; Jacob, Robin T.
2018-01-01
Background/Context: Since 2002, U.S. federal funding for educational research has favored the development and rigorous testing of interventions designed to improve student outcomes. However, recent reviews suggest that a large fraction of the programs developed and rigorously tested in the past decade have shown null results on student outcomes…
Underprepared Students' Performance on Algebra in a Double-Period High School Mathematics Program
ERIC Educational Resources Information Center
Martinez, Mara V.; Bragelman, John; Stoelinga, Timothy
2016-01-01
The primary goal of the Intensified Algebra I (IA) program is to enable mathematically underprepared students to successfully complete Algebra I in 9th grade and stay on track to meet increasingly rigorous high school mathematics graduation requirements. The program was designed to bring a range of both cognitive and non-cognitive supports to bear…
Investigation of possible observable effects in a proposed theory of physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedan, Daniel
2015-03-31
The work supported by this grant produced rigorous mathematical results on what is possible in quantum field theory. Quantum field theory is the well-established mathematical language for fundamental particle physics, for critical phenomena in condensed matter physics, and for Physical Mathematics (the numerous branches of Mathematics that have benefitted from ideas, constructions, and conjectures imported from Theoretical Physics). Proving rigorous constraints on what is possible in quantum field theories thus guides the field, puts actual constraints on what is physically possible in physical or mathematical systems described by quantum field theories, and saves the community the effort of trying to do what is proved impossible. Results were obtained in two dimensional qft (describing, e.g., quantum circuits) and in higher dimensional qft. Rigorous bounds were derived on basic quantities in 2d conformal field theories, i.e., in 2d critical phenomena. Conformal field theories are the basic objects in quantum field theory, the scale invariant theories describing renormalization group fixed points from which all qfts flow. The first known lower bounds on the 2d boundary entropy were found. This is the entropy (the information content) in junctions in critical quantum circuits. For dimensions d > 2, a no-go theorem was proved on the possibilities of Cauchy fields, which are the analogs of the holomorphic fields in d = 2 dimensions, which have had enormously useful applications in Physics and Mathematics over the last four decades. This closed off the possibility of finding analogously rich theories in dimensions above 2. The work of two postdoctoral research fellows was partially supported by this grant. Both have gone on to tenure track positions.
NASA Technical Reports Server (NTRS)
Tanveer, S.; Foster, M. R.
2002-01-01
We report progress in three areas of investigation related to dendritic crystal growth: (1) selection of tip features in dendritic crystal growth; (2) investigation of nonlinear evolution for the two-sided model; and (3) rigorous mathematical justification.
NASA Astrophysics Data System (ADS)
Blanchard, Philippe; Hellmich, Mario; Ługiewicz, Piotr; Olkiewicz, Robert
Quantum mechanics is the greatest revision of our conception of the character of the physical world since Newton. Consequently, David Hilbert was very interested in quantum mechanics. He and John von Neumann discussed it frequently during von Neumann's residence in Göttingen. In 1932 von Neumann published his book Mathematical Foundations of Quantum Mechanics. In Hilbert's opinion it was the first exposition of quantum mechanics in a mathematically rigorous way. The pioneers of quantum mechanics, Heisenberg and Dirac, neither had use for rigorous mathematics nor much interest in it. Conceptually, quantum theory as developed by Bohr and Heisenberg is based on the positivism of Mach, as it describes only observable quantities. It first emerged as a result of experimental data in the form of statistical observations of quantum noise, the basic concept of quantum probability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
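For orientation, the nonlocal equation of motion that defines (bond-based) peridynamics is standardly written as:

```latex
\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
= \int_{\mathcal{H}_{\mathbf{x}}}
\mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
\mathbf{x}'-\mathbf{x}\big)\,dV_{\mathbf{x}'}
+ \mathbf{b}(\mathbf{x},t),
\]
```

where H_x is the horizon neighborhood of x, f the pairwise force function, and b the body force. No spatial derivatives of u appear, which is why the formulation remains valid across cracks and other discontinuities, and why it can be matched to molecular dynamics at small scales and approximated by classical elasticity in the smooth-deformation limit.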
User's manual for the Macintosh version of PASCO
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Davis, Randall C.
1991-01-01
A user's manual for Macintosh PASCO is presented. Macintosh PASCO is an Apple Macintosh version of PASCO, an existing computer code for structural analysis and optimization of longitudinally stiffened composite panels. PASCO combines a rigorous buckling analysis program with a nonlinear mathematical optimization routine to minimize panel mass. Macintosh PASCO accepts the same input as mainframe versions of PASCO. As output, Macintosh PASCO produces a text file and mode shape plots in the form of Apple Macintosh PICT files. Only the user interface for Macintosh is discussed here.
The Menu for Every Young Mathematician's Appetite
ERIC Educational Resources Information Center
Legnard, Danielle S.; Austin, Susan L.
2012-01-01
Math Workshop offers differentiated instruction to foster a deep understanding of rich, rigorous mathematics that is attainable by all learners. The inquiry-based model provides a menu of multilevel math tasks, within the daily math block, that focus on similar mathematical content. Math Workshop promotes a culture of engagement and…
Math Interventions for Students with Autism Spectrum Disorder: A Best-Evidence Synthesis
ERIC Educational Resources Information Center
King, Seth A.; Lemons, Christopher J.; Davidson, Kimberly A.
2016-01-01
Educators need evidence-based practices to assist students with disabilities in meeting increasingly rigorous standards in mathematics. Students with autism spectrum disorder (ASD) are increasingly expected to demonstrate learning of basic and advanced mathematical concepts. This review identifies math intervention studies involving children and…
Control Engineering, System Theory and Mathematics: The Teacher's Challenge
ERIC Educational Resources Information Center
Zenger, K.
2007-01-01
The principles, difficulties and challenges in control education are discussed and compared to the similar problems in the teaching of mathematics and systems science in general. The difficulties of today's students to appreciate the classical teaching of engineering disciplines, which are based on rigorous and scientifically sound grounds, are…
A Qualitative Approach to Enzyme Inhibition
ERIC Educational Resources Information Center
Waldrop, Grover L.
2009-01-01
Most general biochemistry textbooks present enzyme inhibition by showing how the basic Michaelis-Menten parameters K[subscript m] and V[subscript max] are affected mathematically by a particular type of inhibitor. This approach, while mathematically rigorous, does not lend itself to understanding how inhibition patterns are used to determine the…
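The standard mathematical treatment the abstract refers to modifies the Michaelis-Menten rate by inhibitor-dependent factors; the usual mixed-inhibition form (a textbook statement, not this article's notation) is:

```latex
\[
v = \frac{V_{\max}[S]}{\alpha K_m + \alpha'[S]},
\qquad
\alpha = 1 + \frac{[I]}{K_i},
\qquad
\alpha' = 1 + \frac{[I]}{K_i'},
\]
```

with competitive inhibition recovered for α′ = 1 (apparent K_m increases, V_max unchanged) and uncompetitive inhibition for α = 1 (both apparent K_m and V_max decrease).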
ERIC Educational Resources Information Center
Dempsey, Michael
2009-01-01
If students are in an advanced mathematics class, then at some point they enjoyed mathematics and looked forward to learning and practicing it. There is no reason that this passion and enjoyment should ever be lost because the subject becomes more difficult or rigorous. This author, who teaches advanced precalculus to high school juniors,…
Nonlinear analysis of a model of vascular tumour growth and treatment
NASA Astrophysics Data System (ADS)
Tao, Youshan; Yoshida, Norio; Guo, Qian
2004-05-01
We consider a mathematical model describing the evolution of a vascular tumour in response to traditional chemotherapy. The model is a free boundary problem for a system of partial differential equations governing intratumoural drug concentration, cancer cell density and blood vessel density. Tumour cells consist of two types of competitive cells that have different proliferation rates and different sensitivities to drugs. The balance between cell proliferation and death generates a velocity field that drives tumour cell movement. The tumour surface is a moving boundary. The purpose of this paper is to establish a rigorous mathematical analysis of the model for studying the dynamics of intratumoural blood vessels and to explore drug dosage for the successful treatment of a tumour. We also study numerically the competitive effects of the two cell types on tumour growth.
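A schematic of the structure described (generic notation for orientation only; the paper's actual equations are not reproduced here): net proliferation sources a velocity field, and the free boundary moves with it.

```latex
\[
\nabla\cdot\mathbf{v}
= \lambda_{\mathrm{prolif}}(c) - \lambda_{\mathrm{death}}(c),
\qquad
V_n = \mathbf{v}\cdot\mathbf{n}
\quad \text{on } \partial\Omega(t),
\]
```

where c is the intratumoural drug concentration, v the cell velocity, and V_n the normal velocity of the moving tumour surface ∂Ω(t).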
Handbook of applied mathematics for engineers and scientists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, M.
1991-12-31
This book is intended to be a reference for applications of mathematics in a wide range of topics of interest to engineers and scientists. An unusual feature of this book is that it covers a large number of topics, from elementary algebra, trigonometry, and calculus to computer graphics and cybernetics. The level of mathematics covers high school through about the junior level of an engineering curriculum at a major university. Throughout, the emphasis is on applications of mathematics rather than on rigorous proofs.
Modeling of composite beams and plates for static and dynamic analysis
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Atilgan, Ali R.; Lee, Bok Woo
1990-01-01
A rigorous theory and corresponding computational algorithms were developed for a variety of problems regarding the analysis of composite beams and plates. The modeling approach is intended to be applicable to both static and dynamic analysis of generally anisotropic, nonhomogeneous beams and plates. Development of a theory for analysis of the local deformation of plates was the major focus. Some work was performed on global deformation of beams. Because of the strong parallel between beams and plates, the two were treated together as thin bodies, especially in cases where it will clarify the meaning of certain terminology and the motivation behind certain mathematical operations.
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1989-01-01
The extension of the known flux-vector and flux-difference splittings to real gases via rigorous mathematical procedures is demonstrated. Formulations of both equilibrium and finite-rate chemistry for real-gas flows are described, with emphasis on derivations of finite-rate chemistry. Split-flux formulas from other authors are examined. A second-order upwind-based TVD scheme is adopted to eliminate oscillations and to obtain a sharp representation of discontinuities.
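For orientation, the ideal-gas flux-vector splitting being extended has the generic form below (a standard Steger-Warming-type statement, not the paper's real-gas derivation):

```latex
\[
F = F^{+} + F^{-}, \qquad F^{\pm} = A^{\pm}U, \qquad
A^{\pm} = R\,\Lambda^{\pm}R^{-1}, \qquad
\Lambda^{\pm} = \tfrac{1}{2}\big(\Lambda \pm |\Lambda|\big),
\]
```

where A = ∂F/∂U = RΛR⁻¹ is the flux Jacobian. Writing F± = A±U relies on the homogeneity property F(U) = A(U)U, which holds for a perfect gas but fails for a general equation of state; recovering usable split fluxes without it is precisely why a rigorous real-gas derivation is needed.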
¡Enséname! Teaching Each Other to Reason through Math in the Second Grade
ERIC Educational Resources Information Center
Schmitz, Lindsey
2016-01-01
This action research sought to evaluate the effect of peer teaching structures across subgroups of students differentiated by language and mathematical skill ability. These structures were implemented in an effort to maintain mathematical rigor while building my students' academic language capacity. More specifically, the study investigated peer…
ERIC Educational Resources Information Center
Camacho, Erika T.; Holmes, Raquell M.; Wirkus, Stephen A.
2015-01-01
This chapter describes how sustained mentoring together with rigorous collaborative learning and community building contributed to successful mathematical research and individual growth in the Applied Mathematical Sciences Summer Institute (AMSSI), a program that focused on women, underrepresented minorities, and individuals from small teaching…
Water Bottle Designs and Measures
ERIC Educational Resources Information Center
Carmody, Heather Gramberg
2010-01-01
The increase in the diversity of students and the complexity of their needs can be a rich addition to a mathematics classroom. The challenge for teachers is to find a way to include students' interests and creativity in a way that allows for rigorous mathematics. One method of incorporating the diversity is the development of "open-ended…
Time-ordered exponential on the complex plane and Gell-Mann–Low formula as a mathematical theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futakuchi, Shinichiro; Usui, Kouta
2016-04-15
The time-ordered exponential representation of a complex time evolution operator in the interaction picture is studied. Using the complex time evolution, we prove the Gell-Mann-Low formula under certain abstract conditions, in a mathematically rigorous manner. We apply the abstract results to quantum electrodynamics with cutoffs.
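For reference, the formula proved there takes, in its standard textbook form,

\[
|\Psi\rangle \;=\; \lim_{\varepsilon\to 0^{+}} \frac{T\exp\!\left(-i\int_{-\infty}^{0} e^{\varepsilon t} H_{I}(t)\,dt\right)|\Phi_{0}\rangle}{\langle\Phi_{0}|\,T\exp\!\left(-i\int_{-\infty}^{0} e^{\varepsilon t} H_{I}(t)\,dt\right)|\Phi_{0}\rangle},
\]

where T denotes time ordering, H_I(t) is the interaction-picture Hamiltonian, and |Φ_0⟩ is the unperturbed ground state; when the limit exists, it yields an eigenstate of the full Hamiltonian. The paper's point is to identify abstract conditions under which this formal expression becomes mathematically meaningful.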
Sala, Giovanni; Gobet, Fernand
2017-12-01
It has been proposed that playing chess enables children to improve their ability in mathematics. These claims were recently evaluated in a meta-analysis (Sala & Gobet, 2016, Educational Research Review, 18, 46-57), which indicated a significant effect in favor of the groups playing chess. However, the meta-analysis also showed that most of the reviewed studies used a poor experimental design (in particular, they lacked an active control group). We ran two experiments that used a three-group design including both an active and a passive control group, with a focus on mathematical ability. In the first experiment (N = 233), a group of third and fourth graders was taught chess for 25 hours and tested on mathematical problem-solving tasks. Participants also filled in a questionnaire assessing their metacognitive ability for mathematics problems. The group playing chess was compared to an active control group (playing checkers) and a passive control group. The three groups showed no statistically significant difference in mathematical problem-solving or metacognitive abilities in the posttest. The second experiment (N = 52) broadly used the same design, but the Oriental game of Go replaced checkers in the active control group. While the chess-treated group and the passive control group slightly outperformed the active control group on mathematical problem solving, the differences were not statistically significant. No differences were found with respect to metacognitive ability. These results suggest that the effects (if any) of chess instruction, when rigorously tested, are modest and that such interventions should not replace the traditional curriculum in mathematics.
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with a focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
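As a rough illustration of the underlying idea (not the authors' method, which uses rigorous Taylor-model integration to control overestimation), naive interval arithmetic can already propagate an uncertain growth rate through a logistic model and return a guaranteed, if conservative, enclosure; all parameter values below are hypothetical:

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

# dx/dt = r*x*(1 - x/K), interval-valued growth rate r, Euler steps
r, K, dt = (0.18, 0.22), 10.0, 0.01
x = (0.95, 1.05)                       # uncertain initial population
for _ in range(1000):
    one_minus = (1.0 - x[1]/K, 1.0 - x[0]/K)
    growth = imul(imul(r, x), one_minus)
    x = iadd(x, (dt*growth[0], dt*growth[1]))
print(x)   # enclosure of x(t=10); naive intervals overestimate, p-boxes sharpen this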
NASA Astrophysics Data System (ADS)
Reis, T.; Phillips, T. N.
2008-12-01
In this reply to the comment by Lallemand and Luo, we defend our assertion that the alternative approach for the solution of the dispersion relation for a generalized lattice Boltzmann dispersion equation [T. Reis and T. N. Phillips, Phys. Rev. E 77, 026702 (2008)] is mathematically transparent, elegant, and easily justified. Furthermore, the rigorous perturbation analysis used by Reis and Phillips does not require the reciprocals of the relaxation parameters to be small.
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
Survey of Intermediate Microeconomic Textbooks.
ERIC Educational Resources Information Center
Goulet, Janet C.
1986-01-01
Surveys nine undergraduate microeconomic theory textbooks comprising a representative sample of those available. Criteria used were quantity and quality of examples, mathematical rigor, and level of abstraction. (JDH)
A Tool for Rethinking Teachers' Questioning
ERIC Educational Resources Information Center
Simpson, Amber; Mokalled, Stefani; Ellenburg, Lou Ann; Che, S. Megan
2014-01-01
In this article, the authors present a tool, the Cognitive Rigor Matrix (CRM; Hess et al. 2009), as a means to analyze and reflect on the type of questions posed by mathematics teachers. This tool is intended to promote and develop higher-order thinking and inquiry through the use of purposeful questions and mathematical tasks. The authors…
Oakland and San Francisco Create Course Pathways through Common Core Mathematics. White Paper
ERIC Educational Resources Information Center
Daro, Phil
2014-01-01
The Common Core State Standards for Mathematics (CCSS-M) set rigorous standards for each of grades 6, 7 and 8. Strategic Education Research Partnership (SERP) has been working with two school districts, Oakland Unified School District and San Francisco Unified School District, to evaluate extant policies and practices and formulate new policies…
BOOK REVIEW: Vortex Methods: Theory and Practice
NASA Astrophysics Data System (ADS)
Cottet, G.-H.; Koumoutsakos, P. D.
2001-03-01
The book Vortex Methods: Theory and Practice presents a comprehensive account of the numerical technique for solving fluid flow problems. It provides a very nice balance between the theoretical development and analysis of the various techniques and their practical implementation. In fact, the presentation of the rigorous mathematical analysis of these methods instills confidence in their implementation. The book goes into some detail on the more recent developments that attempt to account for viscous effects, in particular the presence of viscous boundary layers in some flows of interest. The presentation is very readable, with most points illustrated with well-chosen examples, some quite sophisticated. It is a very worthy reference book that should appeal to a large body of readers, from those interested in the mathematical analysis of the methods to practitioners of computational fluid dynamics. The use of the book as a text is compromised by its lack of exercises for students, but it could form the basis of a graduate special topics course. Juan Lopez
Safety Verification of the Small Aircraft Transportation System Concept of Operations
NASA Technical Reports Server (NTRS)
Carreno, Victor; Munoz, Cesar
2005-01-01
A critical factor in the adoption of any new aeronautical technology or concept of operation is safety. Traditionally, safety is assured through a rigorous process that involves human factors, low- and high-fidelity simulations, and flight experiments. As this process is usually performed on final products or functional prototypes, concept modifications resulting from it are very expensive to implement. This paper describes an approach to system safety that can take place at early stages of a concept design. It is based on a set of mathematical techniques and tools known as formal methods. In contrast to testing and simulation, formal methods provide the capability of exhaustive state-exploration analysis. We present the safety analysis and verification performed for the Small Aircraft Transportation System (SATS) Concept of Operations (ConOps). The concept of operations is modeled using discrete and hybrid mathematical models, which are then analyzed using formal methods. The objective of the analysis is to show, in a mathematical framework, that the concept of operations complies with a set of safety requirements. It is also shown that the ConOps has desirable characteristics such as liveness and absence of deadlock. The analysis and verification are performed in the Prototype Verification System (PVS), a computer-based specification language and theorem-proving assistant.
From virtual clustering analysis to self-consistent clustering analysis: a mathematical study
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam
2018-03-01
In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), and provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on the key postulate that material points which once respond similarly will always respond similarly, clustering is performed in an offline stage by machine learning techniques (k-means and SOM) and facilitates a substantial reduction of computational complexity in the online predictive stage. The clear mathematical setup allows, for the first time, a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously and is found, from numerical investigations, to be of second order. Furthermore, we propose to suitably enlarge the domain in VCA so that the boundary terms in the Lippmann-Schwinger equation may be neglected, by virtue of Saint-Venant's principle. These terms were not obtained in the original SCA paper, and we find that they may well be responsible for the numerical dependency on the choice of reference material property. Since VCA enhances accuracy by overcoming this modeling error, and reduces numerical cost by avoiding the outer-loop iteration needed in SCA to attain material-property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.
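A minimal sketch of the offline clustering step, with synthetic data standing in for the strain-concentration features that VCA/SCA actually cluster (plain Lloyd k-means; the paper also mentions SOM):

import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each material point to its nearest cluster center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :])**2).sum(-1), axis=1)
        # move each center to the mean of its cluster (keep empty clusters fixed)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

X = np.random.default_rng(1).normal(size=(500, 6))   # synthetic local response features
labels, _ = kmeans(X, k=8)
# online stage: points sharing a label are treated as one cluster unknown,
# shrinking the discrete Lippmann-Schwinger system from 500 unknowns to 8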
NASA Astrophysics Data System (ADS)
Pereyra, Nicolas A.
2018-06-01
This book gives a rigorous yet 'physics-focused' introduction to mathematical logic that is geared towards natural science majors. We present the science major with a robust introduction to logic, focusing on the specific knowledge and skills that will unavoidably be needed in calculus and in natural science topics in general, rather than taking the philosophically and foundationally oriented approach commonly found in mathematical logic textbooks.
13th Annual Systems Engineering Conference: Tues- Wed
2010-10-28
• Greater understanding/documentation of lessons learned – promotes SE within the organization
• Justification for continued funding of SE infrastructure...
• Educational process – addresses the development of innovative learning tools, strategies, and teacher training
• Research and development – promotes ...technology, and mathematics
• More commitment to engaging young students in science, engineering, technology and mathematics
• More rigor in defining
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
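For instance, a Gaussian (RBF) kernel applied to genotype vectors produces a similarity matrix that is positive semidefinite by construction, which is easy to check numerically; the data below are synthetic:

import numpy as np

def rbf_kernel(X, gamma=0.01):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * sq)                            # larger value = more similar

# 50 subjects, 200 SNP genotypes coded 0/1/2
G = np.random.default_rng(0).integers(0, 3, size=(50, 200)).astype(float)
K = rbf_kernel(G)
print(K.shape, np.linalg.eigvalsh(K).min() >= -1e-9)      # (50, 50) True: PSD as required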
ERIC Educational Resources Information Center
Sworder, Steven C.
2007-01-01
An experimental two-track intermediate algebra course was offered at Saddleback College, Mission Viejo, CA, between the Fall, 2002 and Fall, 2005 semesters. One track was modeled after the existing traditional California community college intermediate algebra course and the other track was a less rigorous intermediate algebra course in which the…
Scaling up to address data science challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne R.
2017-04-27
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. This article explores various challenges in Data Science and highlights statistical approaches that can facilitate the analysis of large-scale data, including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
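One classical data-reduction primitive of the kind surveyed here is reservoir sampling, which maintains a uniform random sample of fixed size from a stream too large to hold in memory; a minimal sketch:

import random

def reservoir_sample(stream, k, seed=0):
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # keep the new item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(10**6), k=5))   # 5 items, each chosen uniformly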
ERIC Educational Resources Information Center
Mattson, Beverly
2011-01-01
One of the competitive priorities of the U.S. Department of Education's Race to the Top applications addressed science, technology, engineering, and mathematics (STEM). States that applied were required to submit plans that addressed rigorous courses of study, cooperative partnerships to prepare and assist teachers in STEM content, and prepare…
Discrete structures in continuum descriptions of defective crystals.
Parry, G P
2016-04-28
I discuss various mathematical constructions that combine to provide a natural setting for discrete and continuum geometric models of defective crystals. In particular, I provide a quite general list of 'plastic strain variables', which quantifies inelastic behaviour, and exhibit rigorous connections between discrete and continuous mathematical structures associated with crystalline materials that have a correspondingly general constitutive specification. © 2016 The Author(s).
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
A Mathematical Framework for the Analysis of Cyber-Resilient Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M; Ferragut, Erik M; Laska, Jason A
2013-01-01
The increasingly recognized vulnerability of industrial control systems to cyber-attacks has inspired a considerable amount of research into techniques for cyber-resilient control systems. The majority of this effort involves the application of well-known information security (IT) techniques to control system networks. While these efforts are important for protecting the control systems that operate critical infrastructure, they are never perfectly effective. Little research has focused on the design of closed-loop dynamics that are resilient to cyber-attack. The majority of control system protection measures are concerned with how to prevent unauthorized access and protect data integrity. We believe that the ability to analyze how an attacker can affect the closed-loop dynamics of a control system configuration once they have access is just as important to the overall security of a control system. To begin to analyze this problem, consistent mathematical definitions of concepts within resilient control need to be established so that a mathematical analysis of the vulnerabilities and resiliencies of a particular control system design methodology and configuration can be made. In this paper, we propose rigorous definitions for state awareness, operational normalcy, and resiliency as they relate to control systems. We also discuss some mathematical consequences that arise from the proposed definitions. The goal is to begin to develop a mathematical framework and testable conditions for resiliency that can be used to build a sound theoretical foundation for resilient control research.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems whose performance relies on underlying multiscale mathematics, and at developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of DOE's biofuels initiatives and entails modeling of catalytic systems to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
NASA Astrophysics Data System (ADS)
Calì, M.; Santarelli, M. G. L.; Leone, P.
Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kWe SOFC Field Unit built by Siemens-Westinghouse Power Corporation (SWPC), which at present (May 2005) is starting its operation and will supply electric and thermal power to the GTT factory. In order to take best advantage of the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and ran a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2^k factorial experiments), of the CHP 100 performance. First, the mathematical model was calibrated against the results acquired during the first CHP 100 demonstration at EDB/ELSAM in Westerwoort. Then, the simulated tests were performed in the form of a computer experimental session, with measurement uncertainties simulated by perturbations imposed on the model's independent variables. The statistical methodology used for the computer experimental analysis is factorial design (Yates' technique): using ANOVA, the effects of the main independent variables (air utilization factor U_ox, fuel utilization factor U_F, internal fuel and air preheating, and anodic recycling flow rate) are investigated in a rigorous manner. The analysis accounts for the effects of the parameters on stack electric power, recovered thermal power, single-cell voltage, cell operating temperature, consumed fuel flow, and steam-to-carbon ratio. Each main effect and interaction effect of the parameters is shown, with particular attention to the generated electric power and the stack heat recovered.
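The factorial machinery itself is compact: with factors coded to ±1, each main effect is simply twice the mean of the factor-response product. A toy 2^3 design with a synthetic response (standing in for, say, stack electric power; the study's actual factors were U_ox, U_F, preheating, and anodic recycling):

import itertools
import numpy as np

design = np.array(list(itertools.product([-1, 1], repeat=3)))       # all 2^3 runs
rng = np.random.default_rng(0)
y = 100 + 8*design[:, 0] - 3*design[:, 1] + rng.normal(0, 0.5, 8)   # synthetic response

for j, name in enumerate(["factor A", "factor B", "factor C"]):
    effect = 2.0 * np.mean(design[:, j] * y)   # mean at +1 level minus mean at -1 level
    print(name, round(effect, 2))              # recovers about 16, -6, 0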
Benson, Neil; van der Graaf, Piet H; Peletier, Lambertus A
2017-11-15
A key element of the drug discovery process is target selection. Although the topic is subject to much discussion and experimental effort, there are no defined quantitative rules around optimal selection. Often 'rules of thumb' that have not been subjected to rigorous exploration are used. In this paper we explore the 'rule of thumb' notion that the molecule that initiates a pathway signal is the optimal target. Given the multi-factorial and complex nature of this question, we have simplified an example pathway to its logical minimum of two steps and used a mathematical model of this to explore the different options in the context of typical small- and large-molecule drugs. In this paper, we report the conclusions of our analysis and describe the analysis tool and methods used. These provide a platform to enable a more extensive enquiry into this important topic. Copyright © 2017 Elsevier B.V. All rights reserved.
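A minimal sketch of the kind of two-step model in question (all rate constants hypothetical): a drug scales down either the initiating step or the downstream step, and simulation compares the effect on the final signal.

def simulate(inhibited_step, f=0.2, T=100.0, dt=0.01):
    # two-step cascade: source -> A -> B, with first-order losses
    k0, k1, d1, d2 = 1.0, 1.0, 0.1, 0.1
    a = b = 0.0
    for _ in range(int(T / dt)):
        ka = k0 * (f if inhibited_step == 1 else 1.0)   # drug on step 1
        kb = k1 * (f if inhibited_step == 2 else 1.0)   # drug on step 2
        a += dt * (ka - d1 * a)
        b += dt * (kb * a - d2 * b)
    return b

print(simulate(1), simulate(2))
# In this linear chain the two choices suppress B equally at steady state;
# the paper's analysis shows how drug properties and pathway nonlinearity break the tie.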
On Modeling and Analysis of MIMO Wireless Mesh Networks with Triangular Overlay Topology
Cao, Zhanmao; Wu, Chase Q.; Zhang, Yuanping; ...
2015-01-01
Multiple input multiple output (MIMO) wireless mesh networks (WMNs) aim to provide the last-mile broadband wireless access to the Internet. Along with the algorithmic development for WMNs, some fundamental mathematical problems also emerge in various aspects such as routing, scheduling, and channel assignment, all of which require an effective mathematical model and rigorous analysis of network properties. In this paper, we propose to employ the Cartesian product of graphs (CPG) as a multichannel modeling approach and explore a set of unique properties of triangular WMNs. In each layer of the CPG with a single channel, we design a node coordinate scheme that retains the symmetric property of triangular meshes and develop a function for the assignment of node identity numbers based on their coordinates. We also derive a necessary and sufficient condition for interference-free links, and combinatorial formulas to determine the number of shortest paths for channel realization in triangular WMNs.
Adams, Peter; Goos, Merrilyn
2010-01-01
Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate program. Inspired by the National Research Council's BIO2010 report, a new interdisciplinary first-year course (SCIE1000) was created, incorporating mathematics and computer programming in the context of modern science. In this study, the perceptions of biological science students enrolled in SCIE1000 in 2008 and 2009 are measured. Analysis indicates that, as a result of taking SCIE1000, biological science students gained a positive appreciation of the importance of mathematics in their discipline. However, the data revealed that SCIE1000 did not contribute positively to gains in appreciation for computing and only slightly influenced students' motivation to enroll in upper-level quantitative-based courses. Further comparisons between 2008 and 2009 demonstrated the positive effect of using genuine, real-world contexts to enhance student perceptions toward the relevance of mathematics. The results support the recommendation from BIO2010 that mathematics should be introduced to biology students in first-year courses using real-world examples, while challenging the benefits of introducing programming in first-year courses. PMID:20810961
34 CFR 691.16 - Rigorous secondary school program of study.
Code of Federal Regulations, 2010 CFR
2010-07-01
... MATHEMATICS ACCESS TO RETAIN TALENT GRANT (NATIONAL SMART GRANT) PROGRAMS, Application Procedures, § 691.16..., 2009. (Approved by the Office of Management and Budget under control number 1845-0078.) (Authority: 20 U...
ERIC Educational Resources Information Center
Achieve, Inc., 2007
2007-01-01
At the request of the Hawaii Department of Education, Achieve conducted a study of Hawaii's 2005 grade 10 State Assessment in reading and mathematics. The study compared the content, rigor and passing (meets proficiency) scores on Hawaii's assessment with those of the six states that participated in Achieve's earlier study, "Do Graduation…
Which Kind of Mathematics for Quantum Mechanics? The Relevance of H. Weyl's Program of Research
NASA Astrophysics Data System (ADS)
Drago, Antonino
In 1918 Weyl's book Das Kontinuum proposed to found mathematics anew upon more conservative bases than both rigorous mathematics and set theory. It gave birth to so-called Weyl elementary mathematics, i.e. a mathematics intermediate between a mathematics rejecting actual infinity altogether and the classical one including it almost freely. The present paper scrutinises Weyl's subsequent book Gruppentheorie und Quantenmechanik (1928) as a program for founding theoretical physics anew, through quantum theory, while at the same time developing his mathematics through an improvement of group theory, which, according to Weyl, is a mathematical theory effacing the old distinction between discrete and continuous mathematics. Evidence from Weyl's writings is collected in support of this interpretation. Weyl's program is then evaluated as unsuccessful, owing to crucial difficulties of both physical and mathematical nature. The present clear-cut knowledge of Weyl's elementary mathematics allows us to re-evaluate his program in order to look for more adequate formulations of quantum mechanics in a kind of mathematics weaker than the classical one.
NASA Astrophysics Data System (ADS)
Šprlák, M.; Han, S.-C.; Featherstone, W. E.
2017-12-01
Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness against the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling to determine the spherical gravitational potential spectra implied by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. We also analyse these four crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling, using currently available computational resources, up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
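The object being computed is the familiar exterior spherical harmonic expansion of the gravitational potential, which in fully normalized form reads

\[
V(r,\varphi,\lambda)\;=\;\frac{GM}{r}\sum_{n=0}^{N}\left(\frac{R}{r}\right)^{n}\sum_{m=-n}^{n}\bar{V}_{nm}\,\bar{Y}_{nm}(\varphi,\lambda),
\]

and the forward problem addressed here is to compute the coefficients \bar{V}_{nm} rigorously from the body's volumetric density and geometry.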
Complex dynamics of an SEIR epidemic model with saturated incidence rate and treatment
NASA Astrophysics Data System (ADS)
Khan, Muhammad Altaf; Khan, Yasir; Islam, Saeed
2018-03-01
In this paper, we describe the dynamics of an SEIR epidemic model with saturated incidence, a treatment function, and optimal control. Rigorous mathematical results are established for the model. The stability analysis shows that the model is locally asymptotically stable at the disease-free equilibrium when R0 < 1, and locally as well as globally asymptotically stable at the endemic equilibrium when R0 > 1. The model may also possess a backward bifurcation. The optimal control problem is formulated and the necessary optimality conditions are obtained. Numerical results are presented in justification of the theoretical results.
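A minimal simulation sketch of such a model, with saturated incidence beta*S*I/(1+alpha*I) and, for simplicity, a linear treatment term rI in place of the paper's treatment function (all parameter values hypothetical):

def seir(beta=0.6, alpha=0.1, sigma=0.2, gamma=0.1, r=0.05, mu=0.01,
         T=400.0, dt=0.1):
    S, E, I, R = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(T / dt)):
        inc = beta * S * I / (1.0 + alpha * I)     # saturated incidence
        S += dt * (mu - inc - mu * S)
        E += dt * (inc - (sigma + mu) * E)
        I += dt * (sigma * E - (gamma + mu + r) * I)
        R += dt * ((gamma + r) * I - mu * R)
    return S, E, I, R

R0 = 0.6 * 0.2 / ((0.2 + 0.01) * (0.1 + 0.01 + 0.05))   # about 3.6 > 1
print(round(R0, 2), [round(v, 3) for v in seir()])       # settles at an endemic equilibrium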
Structural efficiency studies of corrugated compression panels with curved caps and beaded webs
NASA Technical Reports Server (NTRS)
Davis, R. C.; Mills, C. T.; Prabhakaran, R.; Jackson, L. R.
1984-01-01
Curved cross-sectional elements are employed in structural concepts for minimum-mass compression panels. Corrugated panel concepts with curved caps and beaded webs are optimized by using a nonlinear mathematical programming procedure and a rigorous buckling analysis. These panel geometries are shown to have superior structural efficiencies compared with known concepts published in the literature. Fabrication of these efficient corrugation concepts became possible through advances in the art of superplastic forming of metals. Results of the mass optimization studies are presented as structural efficiency charts for axial compression.
Mathematical Analysis of a Coarsening Model with Local Interactions
NASA Astrophysics Data System (ADS)
Helmers, Michael; Niethammer, Barbara; Velázquez, Juan J. L.
2016-10-01
We consider particles on a one-dimensional lattice whose evolution is governed by nearest-neighbor interactions where particles that have reached size zero are removed from the system. Concentrating on configurations with infinitely many particles, we prove existence of solutions under a reasonable density assumption on the initial data and show that the vanishing of particles and the localized interactions can lead to non-uniqueness. Moreover, we provide a rigorous upper coarsening estimate and discuss generic statistical properties as well as some non-generic behavior of the evolution by means of heuristic arguments and numerical observations.
NASA Astrophysics Data System (ADS)
Popa, Alexandru
1998-08-01
Recently, in a mathematical paper, we demonstrated the following property: the energy which results from the Schrödinger equation can be rigorously calculated by line integrals of analytic functions if the Hamilton-Jacobi equation, written for the same system, is satisfied in the space of coordinates by a periodic trajectory. We now present an accurate analysis model of conservative discrete systems based on this property. The theory is checked for a number of atomic systems. The experimental data, which are ionization energies, are taken from well-known books.
Proteomics research to discover markers: what can we learn from Netflix?
Ransohoff, David F
2010-02-01
Research in the field of proteomics to discover markers for detection of cancer has produced disappointing results, with few markers gaining US Food and Drug Administration approval, and few claims borne out when subsequently tested in rigorous studies. What is the role of better mathematical or statistical analysis in improving the situation? This article examines whether a recent successful Netflix-sponsored competition using mathematical analysis to develop a prediction model for movie ratings of individual subscribers can serve to improve studies of markers in the field of proteomics. Netflix developed a database of movie preferences of individual subscribers using a longitudinal cohort research design. Groups of researchers then competed to develop better ways to analyze the data. Against this background, the strengths and weaknesses of research design are reviewed, contrasting the Netflix design with that of studies of biomarkers to detect cancer. Such biomarker studies generally have less-strong design, lower numbers of outcomes, and greater difficulty in even just measuring predictors and outcomes, so the fundamental data that will be used in mathematical analysis tend to be much weaker than in other kinds of research. If the fundamental data that will be analyzed are not strong, then better analytic methods have limited use in improving the situation. Recognition of this situation is an important first step toward improving the quality of clinical research about markers to detect cancer.
A Rigorous Geometric Derivation of the Chiral Anomaly in Curved Backgrounds
NASA Astrophysics Data System (ADS)
Bär, Christian; Strohmaier, Alexander
2016-11-01
We discuss the chiral anomaly for a Weyl field in a curved background and show that a novel index theorem for the Lorentzian Dirac operator can be applied to describe the gravitational chiral anomaly. A formula for the total charge generated by the gravitational and gauge field background is derived directly in Lorentzian signature and in a mathematically rigorous manner. It contains a term identical to the integrand in the Atiyah-Singer index theorem and another term involving the η-invariant of the Cauchy hypersurfaces.
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
A simple model for indentation creep
NASA Astrophysics Data System (ADS)
Ginder, Ryan S.; Nix, William D.; Pharr, George M.
2018-03-01
A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Yurkin, Maxim A.
2017-01-01
Although the model of randomly oriented nonspherical particles has been used in a great variety of applications of far-field electromagnetic scattering, it has never been defined in strict mathematical terms. In this Letter we use the formalism of Euler rigid-body rotations to clarify the concept of statistically random particle orientations and derive its immediate corollaries in the form of most general mathematical properties of the orientation-averaged extinction and scattering matrices. Our results serve to provide a rigorous mathematical foundation for numerous publications in which the notion of randomly oriented particles and its light-scattering implications have been considered intuitively obvious.
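The statistical notion being formalized is easy to probe numerically: draw rotations uniformly with respect to the Haar measure (for example via Shoemake's uniform-quaternion method) and average an orientation-dependent quantity. Below, a random 3x3 matrix stands in for a true scattering matrix, and conjugation-averaging drives it to its isotropic part, as the Letter's corollaries lead one to expect:

import numpy as np

def random_rotation(rng):
    # Shoemake's method: a uniform unit quaternion gives a Haar-uniform rotation
    u1, u2, u3 = rng.random(3)
    w = np.sqrt(1 - u1) * np.sin(2 * np.pi * u2)
    x = np.sqrt(1 - u1) * np.cos(2 * np.pi * u2)
    y = np.sqrt(u1) * np.sin(2 * np.pi * u3)
    z = np.sqrt(u1) * np.cos(2 * np.pi * u3)
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))             # fixed, orientation-dependent matrix
avg = sum(random_rotation(rng) @ A @ random_rotation(rng).T for _ in range(0))  # placeholder, see below
avg = np.zeros((3, 3))
for _ in range(20000):
    Q = random_rotation(rng)
    avg += Q @ A @ Q.T
avg /= 20000
print(np.round(avg, 2))                 # approaches (trace(A)/3) * identity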
Bardhan, Jaydeep P; Knepley, Matthew G
2011-09-28
We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics.
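The analytically solvable benchmark behind this analysis is Kirkwood's series for charges in a dielectric sphere; for a single central charge it collapses to the Born formula, a three-line computation (units and values illustrative):

def born_solvation_energy(q=1.0, R=2.0, eps_in=1.0, eps_out=80.0):
    # solvation free energy (kcal/mol) of charge q (in units of e)
    # centered in a sphere of radius R (Angstroms)
    C = 332.06   # Coulomb constant in kcal*Angstrom/(mol*e^2)
    return -0.5 * C * q**2 / R * (1.0/eps_in - 1.0/eps_out)

print(round(born_solvation_energy(), 1))   # about -82.0 kcal/mol
# Off-center charges bring in the higher spherical harmonics of Kirkwood's series,
# which is where the paper's BIBEE eigenfunction analysis operates.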
Intelligent control of a planning system for astronaut training.
Ortiz, J; Chen, G
1999-07-01
This work intends to design, analyze and solve, from the systems control perspective, a complex, dynamic, and multiconstrained planning system for generating training plans for crew members of the NASA-led International Space Station. Various intelligent planning systems have been developed within the framework of artificial intelligence. These planning systems generally lack a rigorous mathematical formalism to allow a reliable and flexible methodology for their design, modeling, and performance analysis in a dynamical, time-critical, and multiconstrained environment. Formulating the planning problem in the domain of discrete-event systems under a unified framework such that it can be modeled, designed, and analyzed as a control system will provide a self-contained theory for such planning systems. This will also provide a means to certify various planning systems for operations in the dynamical and complex environments in space. The work presented here completes the design, development, and analysis of an intricate, large-scale, and representative mathematical formulation for intelligent control of a real planning system for Space Station crew training. This planning system has been tested and used at NASA-Johnson Space Center.
Symmetry Properties of Potentiometric Titration Curves.
ERIC Educational Resources Information Center
Macca, Carlo; Bombi, G. Giorgio
1983-01-01
Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)
The KP Approximation Under a Weak Coriolis Forcing
NASA Astrophysics Data System (ADS)
Melinand, Benjamin
2018-02-01
In this paper, we study the asymptotic behavior of weakly transverse water-waves under a weak Coriolis forcing in the long wave regime. We derive the Boussinesq-Coriolis equations in this setting and we provide a rigorous justification of this model. Then, from these equations, we derive two other asymptotic models. When the Coriolis forcing is weak, we fully justify the rotation-modified Kadomtsev-Petviashvili equation (also called Grimshaw-Melville equation). When the Coriolis forcing is very weak, we rigorously justify the Kadomtsev-Petviashvili equation. This work provides the first mathematical justification of the KP approximation under a Coriolis forcing.
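For reference, the Kadomtsev-Petviashvili equation being justified takes the standard form

\[
\partial_x\left(\partial_t u + u\,\partial_x u + \partial_x^{3} u\right) + \partial_y^{2} u = 0,
\]

while, schematically, the rotation-modified (Grimshaw-Melville) variant adds a zeroth-order term proportional to u, with a coefficient scaling as the square of the Coriolis parameter.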
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian; Haller, George
2018-06-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to an SSM provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
Dense module enumeration in biological networks
NASA Astrophysics Data System (ADS)
Tsuda, Koji; Georgii, Elisabeth
2009-12-01
Analysis of large networks is a central topic in various research fields including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step in analyzing such networks. Though numerous methods have been proposed to this end, they often lack mathematical rigour: there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources such as gene expression profiles or customer profiles can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.
Geodesics in nonexpanding impulsive gravitational waves with Λ. II
NASA Astrophysics Data System (ADS)
Sämann, Clemens; Steinbauer, Roland
2017-11-01
We investigate all geodesics in the entire class of nonexpanding impulsive gravitational waves propagating in an (anti-)de Sitter universe using the distributional metric. We extend the regularization approach of part I [Sämann, C. et al., Classical Quantum Gravity 33(11), 115002 (2016)] to a full nonlinear distributional analysis within the geometric theory of generalized functions. We prove global existence and uniqueness of geodesics that cross the impulsive wave and hence geodesic completeness in full generality for this class of low regularity spacetimes. This, in particular, prepares the ground for a mathematically rigorous account on the "physical equivalence" of the continuous form with the distributional "form" of the metric.
Technical, analytical and computer support
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a rigorous mathematical model for the design and performance analysis of cylindrical silicon-germanium thermoelectric generators is reported; the model consists of two parts, a steady-state (static) part and a transient (dynamic) part. The material study task involves the definition and implementation of a material study aimed at experimentally characterizing the long-term behavior of the thermoelectric properties of silicon-germanium alloys as a function of temperature. Analytical and experimental efforts are aimed at determining the sublimation characteristics of silicon-germanium alloys and studying sublimation effects on RTG performance. Studies are also performed on a variety of specific topics in thermoelectric energy conversion.
On the Wind Generation of Water Waves
NASA Astrophysics Data System (ADS)
Bühler, Oliver; Shatah, Jalal; Walsh, Samuel; Zeng, Chongchun
2016-11-01
In this work, we consider the mathematical theory of wind-generated water waves. This entails determining the stability properties of the family of laminar flow solutions to the two-phase interface Euler equation. We present a rigorous derivation of the linearized evolution equations about an arbitrary steady solution, and, using this, we give a complete proof of the instability criterion of Miles [16]. Our analysis is valid even in the presence of surface tension and a vortex sheet (discontinuity in the tangential velocity across the air-sea interface). We are thus able to give a unified equation connecting the Kelvin-Helmholtz and quasi-laminar models of wave generation.
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
Gordon, M. J. C.
2015-01-01
Robin Milner's paper, 'The use of machines to assist in rigorous proof', introduces methods for automating mathematical reasoning that are a milestone in the development of computer-assisted theorem proving. His ideas, particularly his theory of tactics, revolutionized the architecture of proof assistants. His methodology for automating rigorous proof soundly, particularly his theory of type polymorphism in programming, led to major contributions to the theory and design of programming languages. His citation for the 1991 ACM A.M. Turing award, the most prestigious award in computer science, credits him with, among other achievements, 'probably the first theoretically based yet practical tool for machine assisted proof construction'. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society. PMID:25750147
Towards a Unified Theory of Engineering Education
ERIC Educational Resources Information Center
Salcedo Orozco, Oscar H.
2017-01-01
STEM education is an interdisciplinary approach to learning where rigorous academic concepts are coupled with real-world lessons and activities as students apply science, technology, engineering, and mathematics in contexts that make connections between school, community, work, and the global enterprise enabling STEM literacy (Tsupros, Kohler and…
Evaluation, Instruction and Policy Making. IIEP Seminar Paper: 9.
ERIC Educational Resources Information Center
Bloom, Benjamin S.
Recently, educational evaluation has attempted to use the precision, objectivity, and mathematical rigor of the psychological measurement field as well as to find ways in which instrumentation and data utilization could more directly be related to educational institutions, educational processes, and educational purposes. The linkages between…
Methods in Symbolic Computation and p-Adic Valuations of Polynomials
NASA Astrophysics Data System (ADS)
Guan, Xiao
Symbolic computation appears widely in many mathematical fields such as combinatorics, number theory, and stochastic processes. The techniques created in the area of experimental mathematics provide efficient ways of computing symbolically and of verifying complicated relations. Part I consists of three problems. The first focuses on a unimodal sequence derived from a quartic integral; many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocal of the Catalan numbers, which springs from the closed form given by Mathematica; three methods in special functions are used to justify this result. The third issue addresses closed-form solutions for the moments of products of generalized elliptic integrals, combining experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f(n) indexed by positive integers, the package developed in Mathematica creates a tree structure following a couple of rules. The evolution of such trees is studied both rigorously and experimentally from the viewpoints of field extensions, nonparametric statistics, and random matrices.
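The central object of Part II is elementary to compute: the p-adic valuation nu_p(n) is the exponent of the largest power of p dividing n, and evaluating it along a polynomial's values produces the raw data from which the trees are built. A short sketch:

def nu(p, n):
    # p-adic valuation of a nonzero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

f = lambda n: n**2 + 1
print([nu(2, f(n)) for n in range(1, 17)])   # alternating 1, 0, 1, 0, ...
# branching in the tree records how these valuations split modulo growing powers of p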
Mathematical methods for protein science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.; Istrail, S.; Atkins, J.
1997-12-31
Understanding the structure and function of proteins is a fundamental endeavor in molecular biology. Currently, over 100,000 protein sequences have been determined by experimental methods. The three-dimensional structure of a protein determines its function, but there are currently fewer than 4,000 structures known to atomic resolution. Accordingly, techniques to predict protein structure from sequence have an important role in aiding the understanding of the Genome and the effects of mutations in genetic disease. The authors describe current efforts at Sandia to better understand the structure of proteins through rigorous mathematical analyses of simple lattice models. The efforts have focused on two aspects of protein science: mathematical structure prediction, and inverse protein folding.
MI-Sim: A MATLAB package for the numerical analysis of microbial ecological interactions.
Wade, Matthew J; Oakley, Jordan; Harbisher, Sophie; Parker, Nicholas G; Dolfing, Jan
2017-01-01
Food webs, and other classes of ecological network motifs, are a means of describing feeding relationships between consumers and producers in an ecosystem. They have application across scales, differing only in the underlying characteristics of the organisms and substrates describing the system. Mathematical modelling, using mechanistic approaches to describe the dynamic behaviour and properties of the system through sets of ordinary differential equations, has been used extensively in ecology. Models allow simulation of the dynamics of the various motifs, and their numerical analysis provides a greater understanding of the interplay between the system components and their intrinsic properties. We have developed the MI-Sim software for use with MATLAB to allow a rigorous and rapid numerical analysis of several common ecological motifs. MI-Sim contains a series of the most commonly used motifs such as cooperation, competition and predation. It does not require detailed knowledge of mathematical analytical techniques and is offered as a single graphical user interface containing all input and output options. The tools available in the current version of MI-Sim include model simulation, steady-state existence and stability analysis, and basin of attraction analysis. The software includes seven ecological interaction motifs and seven growth function models. Unlike other system analysis tools, MI-Sim is designed as a simple and user-friendly tool specific to ecological population-type models, allowing for rapid assessment of their dynamical and behavioural properties.
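MI-Sim itself is written in MATLAB; purely for illustration, the flavor of motif it analyses, here a predation-style pairing with Monod growth functions, fits in a few lines of Python (hypothetical parameters and equations, not MI-Sim code):

def predation_motif(T=2000.0, dt=0.01):
    s, x1, x2 = 5.0, 0.1, 0.1               # substrate, producer, consumer
    D, s_in = 0.1, 5.0                      # dilution rate, inflow substrate
    mu = lambda c, mumax=0.5, K=1.0: mumax * c / (K + c)   # Monod growth
    for _ in range(int(T / dt)):
        g1, g2 = mu(s), mu(x1)              # consumer feeds on the producer
        ds  = D * (s_in - s) - g1 * x1
        dx1 = (g1 - D) * x1 - g2 * x2
        dx2 = (g2 - D) * x2
        s, x1, x2 = s + dt*ds, x1 + dt*dx1, x2 + dt*dx2
    return s, x1, x2

print(predation_motif())   # one attractor of the motif; MI-Sim automates the
                           # existence, stability and basin analysis around such states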
Separating intrinsic from extrinsic fluctuations in dynamic biological systems.
Hilfinger, Andreas; Paulsson, Johan
2011-07-19
From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
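The dual-reporter decomposition at issue is two lines of arithmetic: with identically regulated reporters x and y, half the mean squared difference estimates the intrinsic contribution and the covariance estimates the extrinsic one. A synthetic check (the paper's subject is when these estimates match mechanistic models, not the estimators themselves):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
env = rng.normal(0.0, 1.0, n)             # shared (extrinsic) fluctuation
x = 10 + env + rng.normal(0, 0.5, n)      # reporter 1: shared plus private noise
y = 10 + env + rng.normal(0, 0.5, n)      # reporter 2: same environment

intrinsic = 0.5 * np.mean((x - y) ** 2)   # recovers the private variance, ~0.25
extrinsic = np.cov(x, y)[0, 1]            # recovers the shared variance, ~1.0
print(round(intrinsic, 3), round(extrinsic, 3))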
The Markov process admits a consistent steady-state thermodynamic formalism
NASA Astrophysics Data System (ADS)
Peng, Liangrong; Zhu, Yi; Hong, Liu
2018-01-01
The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism is established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and the Fokker-Planck equation can be rigorously derived from it. To be concrete, we prove that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and the Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables converges to that for master equations as the discretization step tends to zero. Our analysis indicates that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.
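A concrete instance of such a steady-state formalism, stated here for orientation and not necessarily in the authors' notation, is the standard entropy production rate of a master equation with steady-state distribution p_i and transition rates w_{i→j}:

$$ \sigma=\frac{1}{2}\sum_{i,j}\left(p_i\,w_{i\to j}-p_j\,w_{j\to i}\right)\ln\frac{p_i\,w_{i\to j}}{p_j\,w_{j\to i}}\;\geq\;0, $$

which vanishes exactly when detailed balance holds and is strictly positive in a genuine non-equilibrium steady state.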
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsoulakis, Markos
2014-08-09
Our two key accomplishments in the first three years were the development of (1) a mathematically rigorous and at the same time computationally flexible framework for parallelization of Kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of our work is the development of numerical methods which are at the same time both computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our key accomplishment in the last year of the grant is that we started developing (3) pathwise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.
Solving America's Math Problem
ERIC Educational Resources Information Center
Vigdor, Jacob
2013-01-01
Concern about students' math achievement is nothing new, and debates about the mathematical training of the nation's youth date back a century or more. In the early 20th century, American high-school students were starkly divided, with rigorous math courses restricted to a college-bound elite. At midcentury, the "new math" movement sought,…
A Novel Approach to Physiology Education for Biomedical Engineering Students
ERIC Educational Resources Information Center
DiCecco, J.; Wu, J.; Kuwasawa, K.; Sun, Y.
2007-01-01
It is challenging for biomedical engineering programs to incorporate an in-depth study of the systemic interdependence of cells, tissues, and organs into the rigorous mathematical curriculum that is the cornerstone of engineering education. To be sure, many biomedical engineering programs require their students to enroll in anatomy and physiology…
ERIC Educational Resources Information Center
Cassata-Widera, Amy; Century, Jeanne; Kim, Dae Y.
2011-01-01
The practical need for multidimensional measures of fidelity of implementation (FOI) of reform-based science, technology, engineering, and mathematics (STEM) instructional materials, combined with a theoretical need in the field for a shared conceptual framework that could support accumulating knowledge on specific enacted program elements across…
A Transformative Model for Undergraduate Quantitative Biology Education
ERIC Educational Resources Information Center
Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…
Exploring in Aeronautics. An Introduction to Aeronautical Sciences.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Cleveland, OH. Lewis Research Center.
This curriculum guide is based on a year of lectures and projects of a contemporary special-interest Explorer program intended to provide career guidance and motivation for promising students interested in aerospace engineering and scientific professions. The adult-oriented program avoids technicality and rigorous mathematics and stresses real…
Virginia's College and Career Readiness Initiative
ERIC Educational Resources Information Center
Virginia Department of Education, 2010
2010-01-01
In 1995, Virginia began a broad educational reform program that resulted in revised, rigorous content standards, the Virginia Standards of Learning (SOL), in the content areas of English, mathematics, science, and history and social science. These grade-by-grade and course-based standards were developed over 14 months with revision teams including…
Math Exchanges: Guiding Young Mathematicians in Small-Group Meetings
ERIC Educational Resources Information Center
Wedekind, Kassia Omohundro
2011-01-01
Traditionally, small-group math instruction has been used as a format for reaching children who struggle to understand. Math coach Kassia Omohundro Wedekind uses small-group instruction as the centerpiece of her math workshop approach, engaging all students in rigorous "math exchanges." The key characteristics of these mathematical conversations…
Zoos, Aquariums, and Expanding Students' Data Literacy
ERIC Educational Resources Information Center
Mokros, Jan; Wright, Tracey
2009-01-01
Zoo and aquarium educators are increasingly providing educationally rigorous programs that connect their animal collections with curriculum standards in mathematics as well as science. Partnering with zoos and aquariums is a powerful way for teachers to provide students with more opportunities to observe, collect, and analyze scientific data. This…
Waveform generation in the EETS
NASA Astrophysics Data System (ADS)
Wilshire, J. P.
1985-05-01
Design decisions and analysis for the waveform generation portion of an electrical equipment test set are discussed. This test set is unlike conventional ATE in that it is portable and designed to operate in forward area sites for the USMC. It is also unique in that it provides for functional testing of 32 electronic units from the AV-8B Harrier II aircraft. Specific requirements for the waveform generator are discussed, including a wide frequency range, high resolution and accuracy, and low total harmonic distortion. Several approaches to meet these requirements are considered and a specific concept is presented in detail, which consists of a digitally produced waveform that feeds a deglitched analog conversion circuit. Rigorous mathematical analysis is presented to prove that this concept meets the requirements. Finally, design alternatives and enhancements are considered.
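The digitally produced waveform concept described here is commonly realized as a phase-accumulator lookup (direct digital synthesis). A minimal Python sketch follows; the clock rate, accumulator width, and table size are illustrative assumptions, not values from the paper:

    import numpy as np

    # Direct digital synthesis sketch: a phase accumulator steps through
    # a sine lookup table; the step size sets the output frequency.
    F_CLOCK = 1_000_000        # update rate in Hz (illustrative)
    ACC_BITS = 32              # phase accumulator width (illustrative)
    TABLE_BITS = 12
    table = np.sin(2 * np.pi * np.arange(2**TABLE_BITS) / 2**TABLE_BITS)

    def generate(f_out, n_samples):
        step = round(f_out * 2**ACC_BITS / F_CLOCK)      # phase increment per clock
        phase = (step * np.arange(n_samples)) % 2**ACC_BITS
        return table[phase >> (ACC_BITS - TABLE_BITS)]   # top bits index the table

    samples = generate(1000.0, 2000)   # 2000 samples of a 1 kHz tone

The output frequency resolution of such a scheme is F_CLOCK / 2**ACC_BITS, which is how designs of this kind achieve high resolution across a wide frequency range.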
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
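A minimal sketch of the information-matrix computation at the heart of such a test (finite-difference output sensitivities and independent Gaussian measurement noise are assumed; `simulate` is a hypothetical user-supplied model runner, not a function from the paper):

    import numpy as np

    def fisher_information(simulate, theta, sigma2=1.0, eps=1e-6):
        """Approximate FIM for outputs y(t; theta) with iid Gaussian noise.

        simulate(theta) -> 1-D array of model outputs at the sampling times.
        theta           -> 1-D numpy array of model parameters.
        """
        y0 = simulate(theta)
        S = np.empty((y0.size, theta.size))          # sensitivity matrix dy/dtheta
        for k in range(theta.size):
            tp = theta.copy()
            tp[k] += eps * max(1.0, abs(theta[k]))   # scaled forward difference
            S[:, k] = (simulate(tp) - y0) / (tp[k] - theta[k])
        return S.T @ S / sigma2

    # A parameter combination is practically non-identifiable when the FIM has
    # (near-)zero eigenvalues; the expected estimation error covariance scales
    # like the inverse of the FIM, which is what links this matrix to the
    # expected-error analysis described in the abstract.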
NASA Astrophysics Data System (ADS)
Sussman, Joshua Michael
This three-paper dissertation explores problems with the use of standardized tests as outcome measures for the evaluation of instructional interventions in mathematics and science. Investigators commonly use students' scores on standardized tests to evaluate the impact of instructional programs designed to improve student achievement. However, evidence suggests that the standardized tests may not measure, or may not measure well, the student learning caused by the interventions. This problem is a special case of a basic problem in applied measurement related to understanding whether a particular test provides accurate and useful information about the impact of an educational intervention. The three papers explore different aspects of the issue and highlight the potential benefits of (a) using particular research methods and of (b) implementing changes to educational policy that would strengthen efforts to reform instructional intervention in mathematics and science. The first paper investigates measurement problems related to the use of standardized tests in applied educational research. Analysis of the research projects funded by the Institute of Education Sciences (IES) Mathematics and Science Education Program permitted me to address three main research questions. One, how often are standardized tests used to evaluate new educational interventions? Two, do the tests appear to measure the same thing that the intervention teaches? Three, do investigators establish validity evidence for the specific uses of the test? The research documents potential and actual problems related to the use of standardized tests in leading applied research, and suggests changes to policy that would address measurement issues and improve the rigor of applied educational research. The second paper explores the practical consequences of misalignment between an outcome measure and an educational intervention in the context of summative evaluation. Simulated evaluation data and a psychometric model of alignment grounded in item response modeling generate the results that address the following research question: how do differences between what a test measures and what an intervention teaches influence the results of an evaluation? The simulation derives a functional relationship between alignment, defined as the match between the test and the intervention, and treatment sensitivity, defined as the statistical power for detecting the impact of an intervention. The paper presents a new model of the effect of misalignment on the results of an evaluation and recommendations for outcome measure selection. The third paper documents the educational effectiveness of the Learning Mathematics through Representations (LMR) lesson sequence for students classified as English Learners (ELs). LMR is a research-based curricular unit designed to support upper elementary students' understandings of integers and fractions, areas considered foundational for the development of higher mathematics. The experimental evaluation contains a multilevel analysis of achievement data from two assessments: a standardized test and a researcher-developed assessment. The study coordinates the two sources of research data with a theoretical mechanism of action in order to rigorously document the effectiveness and educational equity of LMR for ELs using multiple sources of information.
Mathematics make microbes beautiful, beneficial, and bountiful.
Jungck, John R
2012-01-01
Microbiology is a rich area for visualizing the importance of mathematics in terms of designing experiments, data mining, testing hypotheses, and visualizing relationships. Historically, Nobel Prizes have acknowledged the close interplay between mathematics and microbiology in such examples as the fluctuation test and mutation rates using Poisson statistics by Luria and Delbrück and the use of graph theory of polyhedra by Caspar and Klug. More and more contemporary microbiology journals feature mathematical models, computational algorithms and heuristics, and multidimensional visualizations. While revolutions in research have driven these initiatives, a commensurate effort needs to be made to incorporate much more mathematics into the professional preparation of microbiologists. In order not to be daunting to many educators, a Bloom-like "Taxonomy of Quantitative Reasoning" is shared with explicit examples of microbiological activities for engaging students in (a) counting, measuring, calculating using image analysis of bacterial colonies and viral infections on variegated leaves, measurement of fractal dimensions of beautiful colony morphologies, and counting vertices, edges, and faces on viral capsids and using graph theory to understand self assembly; (b) graphing, mapping, ordering by applying linear, exponential, and logistic growth models of public health and sanitation problems, revisiting Snow's epidemiological map of cholera with computational geometry, and using interval graphs to do complementation mapping, deletion mapping, food webs, and microarray heatmaps; (c) problem solving by doing gene mapping and experimental design, and applying Boolean algebra to gene regulation of operons; (d) analysis of the "Bacterial Bonanza" of microbial sequence and genomic data using bioinformatics and phylogenetics; (e) hypothesis testing-again with phylogenetic trees and use of Poisson statistics and the Luria-Delbrück fluctuation test; and (f) modeling of biodiversity by using game theory, of epidemics with algebraic models, bacterial motion by using motion picture analysis and fluid mechanics of motility in multiple dimensions through the physics of "Life at Low Reynolds Numbers," and pattern formation of quorum sensing bacterial populations. Through a developmental model for preprofessional education that emphasizes the beauty, utility, and diversity of microbiological systems, we hope to foster creativity as well as mathematically rigorous reasoning. Copyright © 2012 Elsevier Inc. All rights reserved.
Nonstandard Analysis and Shock Wave Jump Conditions in a One-Dimensional Compressible Gas
NASA Technical Reports Server (NTRS)
Baty, Roy S.; Farassat, Fereidoun; Hargreaves, John
2007-01-01
Nonstandard analysis is a relatively new area of mathematics in which infinitesimal numbers can be defined and manipulated rigorously like real numbers. This report presents a fairly comprehensive tutorial on nonstandard analysis for physicists and engineers with many examples applicable to generalized functions. To demonstrate the power of the subject, the problem of shock wave jump conditions is studied for a one-dimensional compressible gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. To use conservations laws, smooth pre-distributions of the Dirac delta measure are applied whose supports are contained within the shock thickness. Furthermore, smooth pre-distributions of the Heaviside function are applied which vary from zero to one across the shock wave. It is shown that if the equations of motion are expressed in nonconservative form then the relationships between the jump functions for the flow parameters may be found unambiguously. The analysis yields the classical Rankine-Hugoniot jump conditions for an inviscid shock wave. Moreover, non-monotonic entropy jump conditions are obtained for both inviscid and viscous flows. The report shows that products of generalized functions may be defined consistently using nonstandard analysis; however, physically meaningful products of generalized functions must be determined from the physics of the problem and not the mathematical form of the governing equations.
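For reference, the classical Rankine-Hugoniot conditions recovered by the analysis relate the upstream (subscript 1) and downstream (subscript 2) states of a normal shock through conservation of mass, momentum, and energy, with h the specific enthalpy:

$$ \rho_1 u_1=\rho_2 u_2,\qquad p_1+\rho_1 u_1^2=p_2+\rho_2 u_2^2,\qquad h_1+\tfrac{1}{2}u_1^2=h_2+\tfrac{1}{2}u_2^2. $$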
On making cuts for magnetic scalar potentials in multiply connected regions
NASA Astrophysics Data System (ADS)
Kotiuga, P. R.
1987-04-01
The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1891), Chap. 1, Article 20] and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. This problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The technique used in the proof exposes the incredible intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's (Ph.D. thesis, McGill University, Montreal, 1984) heuristic interpretation of cuts and duality theorems via intersection matrices.
Collisional damping rates for plasma waves
NASA Astrophysics Data System (ADS)
Tigik, S. F.; Ziebell, L. F.; Yoon, P. H.
2016-06-01
The distinction between the plasma dynamics dominated by collisional transport versus collective processes has never been rigorously addressed until recently. A recent paper [P. H. Yoon et al., Phys. Rev. E 93, 033203 (2016)] formulates, for the first time, a unified kinetic theory in which collective processes and collisional dynamics are systematically incorporated from first principles. One of the outcomes of such a formalism is the rigorous derivation of collisional damping rates for Langmuir and ion-acoustic waves, which can be contrasted to the heuristic customary approach. However, the results are given only as formal mathematical expressions. The present brief communication numerically evaluates the rigorous collisional damping rates by considering the case of plasma particles with Maxwellian velocity distribution function so as to assess the consequence of the rigorous formalism in a quantitative manner. Comparison with the heuristic ("Spitzer") formula shows that the accurate damping rates are much lower in magnitude than the conventional expression, which implies that the traditional approach overestimates the importance of attenuation of plasma waves by collisional relaxation process. Such a finding may have a wide applicability ranging from laboratory to space and astrophysical plasmas.
A general panel sizing computer code and its application to composite structural panels
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Stroud, W. J.
1978-01-01
A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.
Rigorous approaches to tether dynamics in deployment and retrieval
NASA Technical Reports Server (NTRS)
Antona, Ettore
1987-01-01
Dynamics of tethers in a linearized analysis can be considered as the superposition of propagating waves. This approach permits a new way of analyzing tether behavior during deployment and retrieval, where a tether is composed of a part at rest and a part subjected to propagation phenomena, with the separating section depending on time. The dependence on time of the separating section requires the analysis of the reflection of the waves travelling toward the part at rest. Such a reflection generates a reflected wave, whose characteristics are determined. The propagation phenomena of major interest in a tether are transverse waves and longitudinal waves, all mathematically modelled by the vibrating chord equations, if the tension is considered constant along the tether. An interesting problem also considered concerns the dependence of the tether tension on the longitudinal position, due to microgravity, and the influence of this dependence on the propagating waves.
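For orientation, the vibrating chord model referred to here is the classical wave equation; for constant tension T and linear density μ, the transverse displacement y(x, t) satisfies

$$ \frac{\partial^2 y}{\partial t^2}=c^2\,\frac{\partial^2 y}{\partial x^2},\qquad c=\sqrt{T/\mu}, $$

whose d'Alembert solutions y = f(x - ct) + g(x + ct) are precisely the counter-propagating waves whose reflection at the time-dependent separating section the paper analyzes.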
Test Anxiety and the Curriculum: The Subject Matters.
ERIC Educational Resources Information Center
Everson, Howard T.; And Others
College students' self-reported test anxiety levels in English, mathematics, physical science, and social science were compared to develop empirical support for the claim that students, in general, are more anxious about tests in rigorous academic subjects than in the humanities and to understand the curriculum-related sources of anxiety. It was…
The Art of Learning: A Guide to Outstanding North Carolina Arts in Education Programs.
ERIC Educational Resources Information Center
Herman, Miriam L.
The Arts in Education programs delineated in this guide complement the rigorous arts curriculum taught by arts specialists in North Carolina schools and enable students to experience the joy of the creative process while reinforcing learning in other curricula: language arts, mathematics, social studies, science, and physical education. Programs…
Topics in Computational Learning Theory and Graph Algorithms.
ERIC Educational Resources Information Center
Board, Raymond Acton
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
High Standards Help Struggling Students: New Evidence. Charts You Can Trust
ERIC Educational Resources Information Center
Clark, Constance; Cookson, Peter W., Jr.
2012-01-01
The Common Core State Standards, adopted by 46 states and the District of Columbia, promise to raise achievement in English and mathematics through rigorous standards that promote deeper learning. But while most policymakers, researchers, and educators have embraced these higher standards, some question the fairness of raising the academic bar on…
Improving Mathematical Problem Solving in Grades 4 through 8. IES Practice Guide. NCEE 2012-4055
ERIC Educational Resources Information Center
Woodward, John; Beckmann, Sybilla; Driscoll, Mark; Franke, Megan; Herzig, Patricia; Jitendra, Asha; Koedinger, Kenneth R.; Ogbuehi, Philip
2012-01-01
The Institute of Education Sciences (IES) publishes practice guides in education to bring the best available evidence and expertise to bear on current challenges in education. Authors of practice guides combine their expertise with the findings of rigorous research, when available, to develop specific recommendations for addressing these…
NASA Technical Reports Server (NTRS)
Thomas-Keprta, Kathie L.; Clemett, Simon J.; Bazylinski, Dennis A.; Kirschvink, Joseph L.; McKay, David S.; Wentworth, Susan J.; Vali, H.; Gibson, Everett K.
2000-01-01
Here we use rigorous mathematical modeling to compare ALH84001 prismatic magnetites with those produced by terrestrial magnetotactic bacteria, MV-1. We find that this subset of the Martian magnetites appears to be statistically indistinguishable from those of MV-1.
Shaping Social Work Science: What Should Quantitative Researchers Do?
ERIC Educational Resources Information Center
Guo, Shenyang
2015-01-01
Based on a review of economists' debates on mathematical economics, this article discusses a key issue for shaping the science of social work--research methodology. The article describes three important tasks quantitative researchers need to fulfill in order to enhance the scientific rigor of social work research. First, to test theories using…
Louis Guttman's Contributions to Classical Test Theory
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.; Zumbo, Bruno D.; Ross, Donald
2005-01-01
This article focuses on Louis Guttman's contributions to the classical theory of educational and psychological tests, one of the lesser known of his many contributions to quantitative methods in the social sciences. Guttman's work in this field provided a rigorous mathematical basis for ideas that, for many decades after Spearman's initial work,…
ERIC Educational Resources Information Center
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2010-01-01
Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate…
State College- and Career-Ready High School Graduation Requirements. Updated
ERIC Educational Resources Information Center
Achieve, Inc., 2013
2013-01-01
Research by Achieve, ACT, and others suggests that for high school graduates to be prepared for success in a wide range of postsecondary settings, they need to take four years of challenging mathematics--covering Advanced Algebra; Geometry; and data, probability, and statistics content--and four years of rigorous English aligned with college- and…
Mathematics Awareness through Technology, Teamwork, Engagement, and Rigor
ERIC Educational Resources Information Center
James, Laurie
2016-01-01
The purpose of this two-year observational study was to determine if the use of technology and intervention groups affected fourth-grade math scores. Specifically, the desire was to identify the percentage of students who met or exceeded grade-level standards on the state standardized test. This study indicated possible reasons that enhanced…
ERIC Educational Resources Information Center
McEvoy, Suzanne
2012-01-01
With the changing U.S. demographics, higher numbers of diverse, low-income, first-generation students are underprepared for the academic rigors of four-year institutions oftentimes requiring assistance, and remedial and/or developmental coursework in English and mathematics. Without intervention approaches these students are at high risk for…
ERIC Educational Resources Information Center
Ashley, Michael; Cooper, Katelyn M.; Cala, Jacqueline M.; Brownell, Sara E.
2017-01-01
Summer bridge programs are designed to help transition students into the college learning environment. Increasingly, bridge programs are being developed in science, technology, engineering, and mathematics (STEM) disciplines because of the rigorous content and lower student persistence in college STEM compared with other disciplines. However, to…
Visualizing, Rather than Deriving, Russell-Saunders Terms: A Classroom Activity with Quantum Numbers
ERIC Educational Resources Information Center
Coppo, Paolo
2016-01-01
A 1 h classroom activity is presented, aimed at consolidating the concepts of microstates and Russell-Saunders energy terms in transition metal atoms and coordination complexes. The unconventional approach, based on logic and intuition rather than rigorous mathematics, is designed to stimulate discussion and enhance familiarity with quantum…
Single toxin dose-response models revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demidenko, Eugene, E-mail: eugened@dartmouth.edu
The goal of this paper is to offer a rigorous analysis of the sigmoid-shaped single-toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models, Hill, logit, probit, and Weibull, are provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
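A minimal sketch of the advocated binomial-GLM estimation in Python with statsmodels (the concentrations and mortality counts are invented for illustration; the logit link is shown, and probit or other links are analogous):

    import numpy as np
    import statsmodels.api as sm

    # Mortality counts out of n exposed at each toxin concentration (illustrative).
    conc  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    dead  = np.array([2, 5, 11, 16, 19])
    total = np.full_like(dead, 20)

    # Binomial GLM on (deaths, survivors) counts, rather than a nonlinear
    # regression on the observed mortality *rate*, which is the widespread
    # practice the paper argues against.
    X = sm.add_constant(np.log(conc))
    model = sm.GLM(np.column_stack([dead, total - dead]), X,
                   family=sm.families.Binomial())
    result = model.fit()
    print(result.params)   # intercept and slope on log-concentration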
On the mathematical treatment of the Born-Oppenheimer approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr
2014-05-15
Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not fit exactly with the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.
Jitendra, Asha K; Petersen-Brown, Shawna; Lein, Amy E; Zaslofsky, Anne F; Kunkel, Amy K; Jung, Pyung-Gang; Egan, Andrea M
2015-01-01
This study examined the quality of the research base related to strategy instruction priming the underlying mathematical problem structure for students with learning disabilities and those at risk for mathematics difficulties. We evaluated the quality of methodological rigor of 18 group research studies using the criteria proposed by Gersten et al. and 10 single case design (SCD) research studies using criteria suggested by Horner et al. and the What Works Clearinghouse. Results indicated that 14 group design studies met the criteria for high-quality or acceptable research, whereas SCD studies did not meet the standards for an evidence-based practice. Based on these findings, strategy instruction priming the mathematics problem structure is considered an evidence-based practice using only group design methodological criteria. Implications for future research and for practice are discussed. © Hammill Institute on Disabilities 2013.
Manpower Substitution and Productivity in Medical Practice
Reinhardt, Uwe E.
1973-01-01
Probably in response to the often alleged physician shortage in this country, concerted research efforts are under way to identify technically feasible opportunities for manpower substitution in the production of ambulatory health care. The approaches range from descriptive studies of the effect of task delegation on output of medical services to rigorous mathematical modeling of health care production by means of linear or continuous production functions. In this article the distinct methodological approaches underlying mathematical models are presented in synopsis, and their inherent strengths and weaknesses are contrasted. The discussion includes suggestions for future research directions. PMID:4586735
Fast determination of structurally cohesive subgroups in large networks
Sinkovits, Robert S.; Moody, James; Oztan, B. Tolga; White, Douglas R.
2016-01-01
Structurally cohesive subgroups are a powerful and mathematically rigorous way to characterize network robustness. Their strength lies in the ability to detect strong connections among vertices that not only have no neighbors in common, but that may be distantly separated in the graph. Unfortunately, identifying cohesive subgroups is a computationally intensive problem, which has limited empirical assessments of cohesion to relatively small graphs of at most a few thousand vertices. We describe here an approach that exploits the properties of cliques, k-cores and vertex separators to iteratively reduce the complexity of the graph to the point where standard algorithms can be used to complete the analysis. As a proof of principle, we apply our method to the cohesion analysis of a 29,462-vertex biconnected component extracted from a 128,151-vertex co-authorship data set. PMID:28503215
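The peel-then-solve strategy described can be sketched with networkx; this is a simplified illustration of the idea, not the authors' implementation:

    import networkx as nx

    # Structural cohesion of a group = minimum number of vertices whose
    # removal disconnects it (its vertex connectivity).
    G = nx.karate_club_graph()

    # Peeling step: every vertex of a k-connected subgroup has degree >= k,
    # so any subgroup with cohesion >= k survives k-core peeling; restricting
    # to the k-core shrinks the graph the expensive exact algorithm must see.
    k = 3
    core = nx.k_core(G, k)

    print(len(G), "->", len(core), "vertices after k-core reduction")
    print("vertex connectivity of reduced subgraph:", nx.node_connectivity(core))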
An analysis of the coexistence of two host species with a shared pathogen.
Chen, Zhi-Min; Price, W G
2008-06-01
Population dynamics of two host species under direct transmission of an infectious disease or a pathogen is studied based on the Holt-Pickering mathematical model, which accounts for the influence of the pathogen on the populations of the two host species. Through rigorous analysis and a numerical scheme of study, circumstances are specified under which the shared pathogen leads to the coexistence of the two host species in either a persistent or periodic form. This study shows the importance of intrinsic growth rates, or the differences between the birth rates and death rates of the two susceptible host populations, in controlling these circumstances. It is also demonstrated that periodicity may arise when the positive intrinsic growth rates are very small, but the periodicity is very weak and may not be observed in an empirical investigation.
Adjoint equations and analysis of complex systems: Application to virus infection modelling
NASA Astrophysics Data System (ADS)
Marchuk, G. I.; Shutyaev, V.; Bocharov, G.
2005-12-01
Recent development of applied mathematics is characterized by ever increasing attempts to apply modelling and computational approaches across various areas of the life sciences. The need for a rigorous analysis of complex system dynamics in immunology has been recognized for more than three decades. The aim of the present paper is to draw attention to the method of adjoint equations. The methodology makes it possible to obtain information about physical processes and to examine the sensitivity of complex dynamical systems. This provides a basis for a better understanding of the causal relationships between the immune system's performance and its parameters and helps to improve the experimental design in the solution of applied problems. We show how the adjoint equations can be used to explain the changes in hepatitis B virus infection dynamics between individual patients.
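In outline, and in standard adjoint-sensitivity notation rather than necessarily the authors' own: for dynamics dx/dt = f(x, p) and a functional J = ∫₀ᵀ g(x) dt, the adjoint state λ(t) solves the terminal-value problem

$$ \frac{d\lambda}{dt}=-\left(\frac{\partial f}{\partial x}\right)^{\top}\lambda-\left(\frac{\partial g}{\partial x}\right)^{\top},\qquad \lambda(T)=0,\qquad \frac{dJ}{dp}=\int_0^T \lambda^{\top}\,\frac{\partial f}{\partial p}\,dt, $$

so a single backward integration yields the sensitivity of J with respect to every parameter at once, which is what makes the method attractive for high-dimensional immunological models.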
Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I
2015-01-01
High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®
NASA Astrophysics Data System (ADS)
Bovier, Anton
2006-06-01
Our mathematical understanding of the statistical mechanics of disordered systems is going through a period of stunning progress. This self-contained book is a graduate-level introduction for mathematicians and for physicists interested in the mathematical foundations of the field, and can be used as a textbook for a two-semester course on mathematical statistical mechanics. It assumes only basic knowledge of classical physics and, on the mathematics side, a good working knowledge of graduate-level probability theory. The book starts with a concise introduction to statistical mechanics, proceeds to disordered lattice spin systems, and concludes with a presentation of the latest developments in the mathematical understanding of mean-field spin glass models. In particular, recent progress towards a rigorous understanding of the replica symmetry-breaking solutions of the Sherrington-Kirkpatrick spin glass models, due to Guerra, Aizenman-Sims-Starr and Talagrand, is reviewed in some detail. The book offers: a comprehensive introduction to an active and fascinating area of research; a clear exposition that builds to the state of the art in the mathematics of spin glasses; and a text written by a well-known and active researcher in the field.
Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.
Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P
2018-02-01
This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.
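In its simplest textbook form, the liquid-gas transfer being described reduces to the familiar volumetric mass-transfer rate for dissolved oxygen (shown here for orientation only; the paper's multi-phase treatment is more general):

$$ \frac{dS_{O_2}}{dt}=k_L a\,\left(S_{O_2}^{*}-S_{O_2}\right), $$

where k_L a is the volumetric transfer coefficient and S*_{O2} the saturation concentration, the quantity an aeration model must reproduce against full-scale plant data.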
ERIC Educational Resources Information Center
Roschelle, Jeremy; Murphy, Robert; Feng, Mingyu; Bakia, Marianne
2017-01-01
In a rigorous evaluation of ASSISTments as an online homework support conducted in the state of Maine, SRI International reported that "the intervention significantly increased student scores on an end-of-the-year standardized mathematics assessment as compared with a control group that continued with existing homework practices."…
A Curricular-Sampling Approach to Progress Monitoring: Mathematics Concepts and Applications
ERIC Educational Resources Information Center
Fuchs, Lynn S.; Fuchs, Douglas; Zumeta, Rebecca O.
2008-01-01
Progress monitoring is an important component of effective instructional practice. Curriculum-based measurement (CBM) is a form of progress monitoring that has been the focus of rigorous research. Two approaches for formulating CBM systems exist. The first is to assess performance regularly on a task that serves as a global indicator of competence…
ERIC Educational Resources Information Center
HARDWICK, ARTHUR LEE
At this workshop of industrial representatives and technical educators, a technician was defined as one with broad-based mathematical and scientific training and with competence to support professional systems, engineering, and other scientific personnel. He should receive a rigorous, 2-year, post-secondary education especially designed for his…
What Can Graph Theory Tell Us about Word Learning and Lexical Retrieval?
ERIC Educational Resources Information Center
Vitevitch, Michael S.
2008-01-01
Purpose: Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of…
ERIC Educational Resources Information Center
Stone, James R., III; Alfeld, Corinne; Pearson, Donna
2008-01-01
Numerous high school students, including many who are enrolled in career and technical education (CTE) courses, do not have the math skills necessary for today's high-skill workplace or college entrance requirements. This study tests a model for enhancing mathematics instruction in five high school CTE programs (agriculture, auto technology,…
Slow off the Mark: Elementary School Teachers and the Crisis in STEM Education
ERIC Educational Resources Information Center
Epstein, Diana; Miller, Raegen T.
2011-01-01
Prospective teachers can typically obtain a license to teach elementary school without taking a rigorous college-level STEM class such as calculus, statistics, or chemistry, and without demonstrating a solid grasp of mathematics knowledge, scientific knowledge, or the nature of scientific inquiry. This is not a recipe for ensuring students have…
ERIC Educational Resources Information Center
OECD Publishing, 2017
2017-01-01
What is important for citizens to know and be able to do? The OECD Programme for International Student Assessment (PISA) seeks to answer that question through the most comprehensive and rigorous international assessment of student knowledge and skills. The PISA 2015 Assessment and Analytical Framework presents the conceptual foundations of the…
Integrated model development for liquid fueled rocket propulsion systems
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
As detailed in the original statement of work, the objective of phase two of this research effort was to develop a general framework for rocket engine performance prediction that integrates physical principles, a rigorous mathematical formalism, component level test data, system level test data, and theory-observation reconciliation. Specific phase two development tasks are defined.
High School Graduation Requirements in a Time of College and Career Readiness. CSAI Report
ERIC Educational Resources Information Center
Center on Standards and Assessments Implementation, 2016
2016-01-01
Ensuring that students graduate high school prepared for college and careers has become a national priority in the last decade. To support this goal, states have adopted rigorous college and career readiness (CCR) standards in English language arts (ELA) and mathematics. Additionally, states have begun to require students to pass assessments, in…
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
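In this framing, two theories M1 and M2 are compared through the Bayes factor (standard Bayesian model selection, sketched here for orientation rather than quoting the author's derivation):

$$ B_{12}=\frac{p(D\mid M_1)}{p(D\mid M_2)}=\frac{\int p(D\mid\theta_1,M_1)\,p(\theta_1\mid M_1)\,d\theta_1}{\int p(D\mid\theta_2,M_2)\,p(\theta_2\mid M_2)\,d\theta_2}. $$

The marginal likelihoods automatically penalize theories flexible enough to accommodate any data, which is the quantitative Occam's razor the abstract invokes: an unfalsifiable theory spreads its prior predictive mass so thin that it is disfavored by the evidence.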
Using Teacher Evaluation Reform and Professional Development to Support Common Core Assessments
ERIC Educational Resources Information Center
Youngs, Peter
2013-01-01
The Common Core State Standards Initiative, in its aim to align diverse state curricula and improve educational outcomes, calls for K-12 teachers in the United States to engage all students in mathematical problem solving along with reading and writing complex text through the use of rigorous academic content. Until recently, most teacher…
The Hard Sciences and the Soft: Some Sociological Observations *
Storer, Norman W.
1967-01-01
The paper focuses on the implications of the terms “hard” and “soft” as they are used to characterize different branches of science; this is one approach to understanding some of the relations between knowledge and social organization. Given the importance to scientists of having their work evaluated accurately, it can be seen that the more rigorously a body of knowledge is organized, the more readily professional recognition can be appropriately assigned. The degree of rigor seems directly related to the extent to which mathematics is used in a science, and it is this that makes a science “hard.” Data are presented in support of the hypothesis that “harder” sciences are characterized by more impersonality in their members' relationships where impersonality is indexed by the frequency that only first initials are used in footnotes. Finally, some parallels between the economic and the scientific sectors of society are suggested, viewing money and professional recognition as “generalized media” and noting certain analogies in science to inflation and deflation in the economic system. Implications for the obsolescence of parts of the literature of science are discussed, and the relevance of this analysis to Kuhn's work on scientific revolutions is briefly noted. PMID:6016373
Topological Isomorphisms of Human Brain and Financial Market Networks
Vértes, Petra E.; Nicol, Ruth M.; Chapman, Sandra C.; Watkins, Nicholas W.; Robertson, Duncan A.; Bullmore, Edward T.
2011-01-01
Although metaphorical and conceptual connections between the human brain and the financial markets have often been drawn, rigorous physical or mathematical underpinnings of this analogy remain largely unexplored. Here, we apply a statistical and graph theoretic approach to the study of two datasets – the time series of 90 stocks from the New York stock exchange over a 3-year period, and the fMRI-derived time series acquired from 90 brain regions over the course of a 10-min-long functional MRI scan of resting brain function in healthy volunteers. Despite the many obvious substantive differences between these two datasets, graphical analysis demonstrated striking commonalities in terms of global network topological properties. Both the human brain and the market networks were non-random, small-world, modular, hierarchical systems with fat-tailed degree distributions indicating the presence of highly connected hubs. These properties could not be trivially explained by the univariate time series statistics of stock price returns. This degree of topological isomorphism suggests that brains and markets can be regarded broadly as members of the same family of networks. The two systems, however, were not topologically identical. The financial market was more efficient and more modular – more highly optimized for information processing – than the brain networks; but also less robust to systemic disintegration as a result of hub deletion. We conclude that the conceptual connections between brains and markets are not merely metaphorical; rather these two information processing systems can be rigorously compared in the same mathematical language and turn out often to share important topological properties in common to some degree. There will be interesting scientific arbitrage opportunities in further work at the graph-theoretically mediated interface between systems neuroscience and the statistical physics of financial markets. PMID:22007161
Consistent Chemical Mechanism from Collaborative Data Processing
Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...
2016-04-01
The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for evaluation of the shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.
NASA Astrophysics Data System (ADS)
Zhukotsky, Alexander V.; Kogan, Emmanuil M.; Kopylov, Victor F.; Marchenko, Oleg V.; Lomakin, O. A.
1994-07-01
A new method for morphodensitometric analysis of blood cells was applied to the medical screening of ecologically influenced and infection pathologies. A complex algorithm of computational image processing was created for research on supramolecular restructurings of interphase chromatin in lymphocytes. It includes specific methods of staining and unifies different quantitative analysis methods. Our experience with the use of a television image analyzer in cytological and immunological studies made it possible to carry out research in morphometric analysis of chromatin structure in interphase lymphocyte nuclei in genetic and virus pathologies. In our study, to characterize lymphocytes as an image-forming system by a rigorous mathematical description, we used an approach involving concomitant evaluation of the topography of the chromatin network in intact and affected lymphocytes. It is also possible to digitize the data, which revealed significant distinctions between control and experiment. The method allows us to observe the minute structural changes in chromatin, especially eu- and hetero-chromatin, that were previously studied by genetics only in chromosomes.
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
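The three-operation composition is easy to make concrete. A minimal single-stage NumPy sketch follows (random complex filters and arbitrary window sizes, chosen for illustration; windowed complex exponentials as filters would instead yield windowed spectra, as the abstract notes):

    import numpy as np

    rng = np.random.default_rng(0)

    def complex_convnet_stage(x, n_filters=4, filter_len=8, pool=4):
        """One stage: complex convolution -> entrywise absolute value -> local average."""
        outputs = []
        for _ in range(n_filters):
            w = rng.standard_normal(filter_len) + 1j * rng.standard_normal(filter_len)
            z = np.convolve(x, w, mode="valid")      # (1) complex-valued convolution
            a = np.abs(z)                            # (2) absolute value of every entry
            trimmed = a[: len(a) - len(a) % pool]
            outputs.append(trimmed.reshape(-1, pool).mean(axis=1))  # (3) local averaging
        return np.stack(outputs)

    x = rng.standard_normal(256)
    features = complex_convnet_stage(np.abs(x))   # nonnegative input vector, per the text
    print(features.shape)

Recursively feeding each output row back through the same stage gives the repeated composition the abstract describes.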
Reference condition approach to restoration planning
Nestler, J.M.; Theiling, C.H.; Lubinski, S.J.; Smith, D.L.
2010-01-01
Ecosystem restoration planning requires quantitative rigor to evaluate alternatives, define end states, report progress and perform environmental benefits analysis (EBA). Unfortunately, existing planning frameworks are, at best, semi-quantitative. In this paper, we: (1) describe a quantitative restoration planning approach based on a comprehensive, but simple mathematical framework that can be used to effectively apply knowledge and evaluate alternatives, (2) use the approach to derive a simple but precisely defined lexicon based on the reference condition concept and allied terms and (3) illustrate the approach with an example from the Upper Mississippi River System (UMRS) using hydrologic indicators. The approach supports the development of a scaleable restoration strategy that, in theory, can be expanded to ecosystem characteristics such as hydraulics, geomorphology, habitat and biodiversity. We identify three reference condition types, best achievable condition (ABAC), measured magnitude (MMi, which can be determined at one or many times and places) and desired future condition (ADFC) that, when used with the mathematical framework, provide a complete system of accounts useful for goal-oriented system-level management and restoration. Published in 2010 by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ogungbemi, Kayode; Han, Xianming; Blosser, Micheal; Misra, Prabhakar; LASER Spectroscopy Group Collaboration
2014-03-01
Optogalvanic transitions have been recorded and fitted for the 1s5 - 2p7 (621.7 nm), 1s5 - 2p8 (633.4 nm) and 1s5 - 2p9 (640.2 nm) transitions of neon in a Fe-Ne hollow cathode plasma discharge as a function of current (2-19 mA) and time evolution (0-50 microseconds). The optogalvanic waveforms have been fitted to a Monte Carlo mathematical model. The variation in the excited population of neon is governed by the rate of collision of the atoms involving the common metastable state (1s5) for the three transitions investigated. The concomitant changes in amplitudes and intensities of the optogalvanic signal waveforms associated with these transitions have been studied rigorously and the fitted parameters obtained using the Monte Carlo algorithm to help better understand the physics of the hollow cathode discharge. Thanks to the Laser Spectroscopy Group in the Physics and Astronomy Dept., Howard University, Washington, DC.
NASA Astrophysics Data System (ADS)
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
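For orientation, in LNA-based analyses of this kind the stationary second moments are typically obtained from a Lyapunov equation (standard LNA notation, not the paper's reduced-order construction):

$$ A\,\Sigma+\Sigma A^{\top}+BB^{\top}=0, $$

where A is the Jacobian of the macroscopic rate equations, BB^T the diffusion matrix of the noise, and Σ the stationary covariance of the fluctuations; the paper's convergence results concern how the reduced-order counterparts of these moments approach those of the full system.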
NASA Astrophysics Data System (ADS)
Solie, D. J.; Spencer, V.
2009-12-01
Bush Physics for the 21st Century brings physics that is culturally connected, engaging to modern youth, and mathematically rigorous to high school and college students in the remote and often roadless villages of Alaska. The primary goal of the course is to prepare rural (predominantly Alaska Native) students for success in university science and engineering degree programs and ultimately STEM careers. The course is currently delivered via video conference and a web-based electronic blackboard tailored to the needs of remote students. Practical, culturally relevant kinetic examples from traditional and modern northern life are used to engage students, and a rigorous mathematical focus is stressed to strengthen problem-solving skills. Simple hands-on lab experiments are delivered to the students, with the exercises completed online. In addition, students are teamed and required to perform a much more involved experimental study, with the results presented by teams at the conclusion of the course. Connecting abstract mathematical symbols and equations to real physical objects and problems is one of the most difficult things to master in physics. Greek symbols are traditionally used in equations; however, to strengthen the visual/conceptual connection with each symbol and to encourage an indigenous connection to the concepts, we have introduced Inuktitut symbols to complement the traditional Greek symbols. Results and observations from the first two pilot semesters (spring 2008 and 2009) will be presented.
NASA Astrophysics Data System (ADS)
Solie, D. J.; Spencer, V. K.
2010-12-01
Bush Physics for the 21st Century brings physics that is engaging to modern youth and mathematically rigorous to high school and college students in the remote and often roadless villages of Alaska, where the opportunity to take a physics course has been nearly nonexistent. The primary goal of the course is to prepare rural (predominantly Alaska Native) students for success in university science and engineering degree programs and ultimately STEM careers. The course is delivered via video conference and a web-based electronic blackboard tailored to the needs of remote students. Kinetic, practical and culturally relevant place-based examples from traditional and modern northern life are used to engage students, and a rigorous mathematical focus is stressed to strengthen problem-solving skills. Simple hands-on lab experiment kits are shipped to the students. In addition, students conduct a Collaborative Research Experiment in which they coordinate times of sun-angle measurements with teams in other villages to determine their latitude and longitude as well as an estimate of the circumference of the Earth. Connecting abstract mathematical symbols and equations to real physical objects and problems is one of the most difficult things to master in physics. We introduce Inuktitut symbols to complement the traditional Greek symbols in equations to strengthen the visual/conceptual connection with each symbol and to encourage an indigenous connection to the physical concepts. Results and observations from the first three pilot semesters (spring 2008, 2009 and 2010) will be presented.
A Mathematical Account of the NEGF Formalism
NASA Astrophysics Data System (ADS)
Cornean, Horia D.; Moldoveanu, Valeriu; Pillet, Claude-Alain
2018-02-01
The main goal of this paper is to put on solid mathematical grounds the so-called Non-Equilibrium Green's Function (NEGF) transport formalism for open systems. In particular, we derive the Jauho-Meir-Wingreen formula for the time-dependent current through an interacting sample coupled to non-interacting leads. Our proof is non-perturbative and uses neither complex-time Keldysh contours, nor Langreth rules of 'analytic continuation'. We also discuss other technical identities (Langreth, Keldysh) involving various many body Green's functions. Finally, we study the Dyson equation for the advanced/retarded interacting Green's function and we rigorously construct its (irreducible) self-energy, using the theory of Volterra operators.
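For orientation, the stationary (steady-state) limit of the current formula discussed above, as it commonly appears in the NEGF literature (our transcription of the standard Meir-Wingreen form, not necessarily the paper's exact conventions):

\[
J = \frac{ie}{2\hbar}\int\frac{d\varepsilon}{2\pi}\,\mathrm{Tr}\Big\{\big[f_L(\varepsilon)\Gamma^L - f_R(\varepsilon)\Gamma^R\big]\big(G^r(\varepsilon)-G^a(\varepsilon)\big) + \big(\Gamma^L-\Gamma^R\big)G^<(\varepsilon)\Big\},
\]

where \(\Gamma^{L/R}\) are the lead-coupling matrices, \(f_{L/R}\) the lead Fermi functions, and \(G^{r}\), \(G^{a}\), \(G^{<}\) the retarded, advanced and lesser Green's functions of the interacting sample.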
NASA Astrophysics Data System (ADS)
Riendeau, Diane
2012-09-01
To date, this column has presented videos to show in class. Don Mathieson from Tulsa Community College suggested that YouTube could be used in another fashion. In Don's experience, his students are not always prepared for the mathematical rigor of his course. Even at the high school level, math can be a barrier for physics students. Walid Shihabi, a colleague of Don's, decided to compile a list of YouTube videos that his students could watch to relearn basic mathematics. I thought this sounded like a fantastic idea and a great service to the students. Walid graciously agreed to share his list, and I have reproduced a large portion of it below.
Chain representations of Open Quantum Systems and Lieb-Robinson like bounds for the dynamics
NASA Astrophysics Data System (ADS)
Woods, Mischa
2013-03-01
This talk is concerned with the mapping of the Hamiltonian of open quantum systems onto chain representations, which forms the basis for a rigorous theory of the interaction of a system with its environment. The mapping proceeds iteratively, giving rise to a sequence of residual spectral densities of the system. The rigorous mathematical properties of this mapping have been unknown so far. Here we develop the theory of secondary measures to derive an analytic expression for the sequence solely in terms of the initial measure and its associated orthogonal polynomials of the first and second kind. These mappings can be thought of as taking a highly nonlocal Hamiltonian to a local Hamiltonian. In the latter, a Lieb-Robinson-like bound for the dynamics of the open quantum system makes sense. We develop analytical bounds on the error to observables of the system as a function of time when the semi-infinite chain is truncated at some finite length. The fact that this is possible shows that there is a finite "speed of sound" in these chain representations. This has many implications for the simulatability of open quantum systems of this type and demonstrates that a truncated chain can faithfully reproduce the dynamics at shorter times. These results make a significant and mathematically rigorous contribution to the understanding of the theory of open quantum systems, and pave the way towards the efficient simulation of these systems, which, within the standard methods, is often an intractable problem. EPSRC CDT in Controlled Quantum Dynamics, EU STREP project and Alexander von Humboldt Foundation
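A minimal numerical counterpart of such a star-to-chain mapping (our sketch: an assumed Ohmic spectral density, a crude discretization, and a Lanczos recursion; the talk's own construction works analytically with secondary measures instead):

    import numpy as np

    # Discretize an assumed Ohmic spectral density J(w) ~ w on [0, 1] into a
    # "star" of modes, then tridiagonalize with Lanczos. The resulting alphas
    # (site energies) and betas (hoppings) define the chain representation.
    n_modes, n_chain = 2000, 20
    w = np.linspace(1e-4, 1.0, n_modes)          # environment mode frequencies
    weights = w * (w[1] - w[0])                  # J(w) dw, Ohmic assumption

    H = np.diag(w)                               # star Hamiltonian (env. only)
    v = np.sqrt(weights)
    v /= np.linalg.norm(v)                       # the system couples via v

    alphas, betas, v_prev, beta = [], [], np.zeros_like(v), 0.0
    for _ in range(n_chain):
        u = H @ v - beta * v_prev
        alpha = v @ u
        u -= alpha * v
        beta = np.linalg.norm(u)
        alphas.append(alpha)
        betas.append(beta)
        v_prev, v = v, u / beta

    print("chain site energies:", np.round(alphas[:5], 4))
    print("chain couplings:   ", np.round(betas[:5], 4))

Truncating the resulting tridiagonal chain at length n_chain is exactly the step whose error the talk bounds via the Lieb-Robinson-like "speed of sound".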
Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management
2016-11-16
...in order for cloud computing infrastructures to be successfully deployed in real world scenarios as tools for crisis and catastrophe management, where... Statement of the Problem Studied: As cloud computing becomes the dominant computational infrastructure [1] and cloud technologies make a transition to hosting... 1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and...
ERIC Educational Resources Information Center
Eisenhart, Margaret; Weis, Lois; Allen, Carrie D.; Cipollone, Kristin; Stich, Amy; Dominguez, Rachel
2015-01-01
In response to numerous calls for more rigorous STEM (science, technology, engineering, and mathematics) education to improve US competitiveness and the job prospects of next-generation workers, especially those from low-income and minority groups, a growing number of schools emphasizing STEM have been established in the US over the past decade.…
ERIC Educational Resources Information Center
van der Scheer, Emmelien A.; Visscher, Adrie J.
2018-01-01
Data-based decision making (DBDM) is an important element of educational policy in many countries, as it is assumed that student achievement will improve if teachers work in a data-based way. However, studies that rigorously evaluate the effects of DBDM on student achievement are scarce. In this study, the effects of an intensive…
ERIC Educational Resources Information Center
Randel, Bruce; Beesley, Andrea D.; Apthorp, Helen; Clark, Tedra F.; Wang, Xin; Cicchinelli, Louis F.; Williams, Jean M.
2011-01-01
This study was conducted by the Central Region Educational Laboratory (REL Central) administered by Mid-continent Research for Education and Learning to provide educators and policymakers with rigorous evidence about the potential of Classroom Assessment for Student Learning (CASL) to improve student achievement. CASL is a widely used professional…
How PARCC's False Rigor Stunts the Academic Growth of All Students. White Paper No. 135
ERIC Educational Resources Information Center
McQuillan, Mark; Phelps, Richard P.; Stotsky, Sandra
2015-01-01
In July 2010, the Massachusetts Board of Elementary and Secondary Education (BESE) voted to adopt Common Core's standards in English language arts (ELA) and mathematics in place of the state's own standards in these two subjects. The vote was based largely on recommendations by Commissioner of Education Mitchell Chester and then Secretary of…
ERIC Educational Resources Information Center
Courtade, Ginevra R.; Shipman, Stacy D.; Williams, Rachel
2017-01-01
SPLASH is a 3-year professional development program designed to work with classroom teachers of students with moderate and severe disabilities. The program targets new teachers and employs methods aimed at supporting rural classrooms. The training content focuses on evidence-based practices in English language arts, mathematics, and science, as…
Results of the Salish Projects: Summary and Implications for Science Teacher Education
ERIC Educational Resources Information Center
Yager, Robert E.; Simmons, Patricia
2013-01-01
Science teaching and teacher education in the U.S.A. have been of great national interest recently due to a severe shortage of science (and mathematics) teachers who hold strong qualifications in their fields of study. Unfortunately, we lack a rigorous research base that helps inform solid practices about various models or elements of…
ERIC Educational Resources Information Center
Stoneberg, Bert D.
2015-01-01
The National Center of Education Statistics conducted a mapping study that equated the percentage proficient or above on each state's NCLB reading and mathematics tests in grades 4 and 8 to the NAEP scale. Each "NAEP equivalent score" was labeled according to NAEP's achievement levels and used to compare state proficiency standards and…
ERIC Educational Resources Information Center
Amador-Lankster, Clara
2018-01-01
The purpose of this article is to discuss a Fulbright Evaluation Framework and to analyze findings resulting from implementation of two contextualized measures designed as LEARNING BY DOING in response to achievement expectations from the National Education Ministry in Colombia in three areas. The goal of the Fulbright funded project was to…
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
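As a taste of the posterior computations such a text motivates, here is a minimal Metropolis sampler for a binomial detection probability with a flat prior (the ecological scenario and all numbers are our own illustrative assumptions, not an example from the book):

    import numpy as np

    # Posterior for a detection probability p: y detections in n surveys,
    # Beta(1, 1) prior, Gaussian random-walk Metropolis.
    y, n = 27, 60
    rng = np.random.default_rng(1)

    def log_post(p):
        if not 0.0 < p < 1.0:
            return -np.inf
        return y * np.log(p) + (n - y) * np.log(1.0 - p)  # flat prior

    p, samples = 0.5, []
    for _ in range(20000):
        prop = p + rng.normal(scale=0.05)
        if np.log(rng.uniform()) < log_post(prop) - log_post(p):
            p = prop
        samples.append(p)

    post = np.array(samples[2000:])            # drop burn-in
    print("posterior mean %.3f, 95%% CI (%.3f, %.3f)"
          % (post.mean(), *np.quantile(post, [0.025, 0.975])))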
Jones index, secret sharing and total quantum dimension
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter; Osborne, Tobias J.
2017-02-01
We study the total quantum dimension in the thermodynamic limit of topologically ordered systems. In particular, using the anyons (or superselection sectors) of such models, we define a secret sharing scheme, storing information invisible to a malicious party, and argue that the total quantum dimension quantifies how well we can perform this task. We then argue that this can be made mathematically rigorous using the index theory of subfactors, originally due to Jones and later extended by Kosaki and Longo. This theory provides us with a ‘relative entropy’ of two von Neumann algebras and a quantum channel, and we argue how these can be used to quantify how much classical information two parties can hide from an adversary. We also review the total quantum dimension in finite systems, in particular how it relates to topological entanglement entropy. It is known that the latter also has an interpretation in terms of secret sharing schemes, although this is shown by completely different methods from ours. Our work provides a different and independent take on this, which at the same time is completely mathematically rigorous. This complementary point of view might be beneficial, for example, when studying the stability of the total quantum dimension when the system is perturbed.
NASA Technical Reports Server (NTRS)
Laxmanan, V.
1985-01-01
A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized, and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on the modification of an analysis due to Burden and Hunt (1974) and predicts correctly, in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.
Mathematics Education and the Objectivist Programme in HPS
NASA Astrophysics Data System (ADS)
Glas, Eduard
2013-06-01
Using history of mathematics for studying concepts, methods, problems and other internal features of the discipline may give rise to a certain tension between descriptive adequacy and educational demands. Unlike historians, educators are concerned with mathematics as a normatively defined discipline. Teaching cannot but be based on a pre-understanding of what mathematics `is' or, in other words, on a normative (methodological, philosophical) view of the identity or nature of the discipline. Educators are primarily concerned with developments at the level of objective mathematical knowledge, that is: with the relations between successive theories, problems and proposed solutions—relations which are independent of whatever has been the role of personal or collective beliefs, convictions, traditions and other historical circumstances. Though not exactly `historical' in the usual sense, I contend that this `objectivist' approach does represent one among several entirely legitimate and valuable approaches to the historical development of mathematics. Its retrospective importance to current practitioners and students is illustrated by a reconstruction of the development of Eudoxus's theory of proportionality in response to the problem of irrationality, and the way in which Dedekind some two millennia later almost literally used this ancient theory for the rigorous introduction of irrational numbers and hence of the real number continuum.
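The parallel drawn here can be stated compactly. In modern notation (our paraphrase, not a quotation of either source), Eudoxus declares two ratios proportional by comparing all integer multiples:

\[
a:b :: c:d \iff \forall\, m,n \in \mathbb{N}:\;
\big(na > mb \Rightarrow nc > md\big)\ \wedge\
\big(na = mb \Rightarrow nc = md\big)\ \wedge\
\big(na < mb \Rightarrow nc < md\big),
\]

and Dedekind's construction characterizes the ratio \(a:b\) by the cut \(\{\,m/n \in \mathbb{Q} : mb < na\,\}\), so two ratios are equal exactly when they determine the same cut; this is why Dedekind could reuse the ancient theory almost literally.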
A mathematical approach to beam matching
Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N
2013-01-01
Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained, including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10 × 10 cm² beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated with respect to spatial position, taking first and second derivatives to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and the depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE for 71.4% and 70.8% of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs merely to generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the qualitative evaluation of beam matching between linear accelerators. PMID:23995874
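A sketch of the differentiation ("bandwidth") step on a synthetic profile (np.gradient stands in for whatever differencing scheme the authors used; the comparison below is a crude dose-only check, not a full 1%/1 mm gamma analysis):

    import numpy as np

    # Synthetic 10 x 10 cm2 open-beam profile sampled every 1 mm (illustrative).
    x = np.arange(-80.0, 80.0, 1.0)                   # off-axis distance, mm
    dose = 50.0 * (np.tanh((x + 50) / 3) - np.tanh((x - 50) / 3))

    slope = np.gradient(dose, x)                      # first "bandwidth"
    curvature = np.gradient(slope, x)                 # second "bandwidth"

    # Compare against a second machine, mimicked here by a 0.5 mm shift.
    dose2 = np.interp(x, x + 0.5, dose)
    dose_err = np.abs(dose2 - dose) / dose.max() * 100.0
    print("points within 1%% dose error: %.1f%%" % (100 * np.mean(dose_err < 1.0)))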
Weber, Gerhard-Wilhelm; Ozöğür-Akyüz, Süreyya; Kropat, Erik
2009-06-01
An emerging research area in computational biology and biotechnology is devoted to mathematical modeling and prediction of gene-expression patterns; it nowadays calls for mathematics to deeply understand its foundations. This article surveys data mining and machine learning methods for the analysis of complex systems in computational biology. It mathematically deepens recent advances in modeling and prediction by rigorously introducing the environment and aspects of errors and uncertainty into the genetic context within the framework of matrix and interval arithmetics. Given the data from DNA microarray experiments and environmental measurements, we extract nonlinear ordinary differential equations which contain parameters that are to be determined. This is done by a generalized Chebyshev approximation and generalized semi-infinite optimization. Then, time-discretized dynamical systems are studied. By a combinatorial algorithm which constructs and follows polyhedra sequences, the region of parametric stability is detected. In addition, we analyze the topological landscape of gene-environment networks in terms of structural stability. As a second strategy, we review recent model selection and kernel learning methods for binary classification which can be used to classify microarray data for cancerous cells or for discrimination of other kinds of diseases. This review is practically motivated and theoretically elaborated; it is devoted to contributing to better health care, progress in medicine, better education, and healthier living conditions.
Property-Based Software Engineering Measurement
NASA Technical Reports Server (NTRS)
Briand, Lionel; Morasca, Sandro; Basili, Victor R.
1995-01-01
Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysis, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. This framework defines several important measurement concepts (size, length, complexity, cohesion, coupling). It is not intended to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalism and properties we introduce are convenient and intuitive. In addition, we have reviewed the literature on this subject and compared it with our work. This framework contributes constructively to a firmer theoretical ground of software measurement.
California and the "Common Core": Will There Be a New Debate about K-12 Standards?
ERIC Educational Resources Information Center
EdSource, 2010
2010-01-01
A growing chorus of state and federal policymakers, large foundations, and business leaders across the country are calling for states to adopt a common, rigorous body of college- and career-ready skills and knowledge in English and mathematics that all K-12 students will be expected to master by the time they graduate. This report looks at the…
ERIC Educational Resources Information Center
Kushman, Jim; Hanita, Makoto; Raphael, Jacqueline
2011-01-01
Students entering high school face many new academic challenges. One of the most important is their ability to read and understand more complex text in literature, mathematics, science, and social studies courses as they navigate through a rigorous high school curriculum. The Regional Educational Laboratory (REL) Northwest conducted a study to…
A single-cell spiking model for the origin of grid-cell patterns
Kempter, Richard
2017-01-01
Spatial cognition in mammals is thought to rely on the activity of grid cells in the entorhinal cortex, yet the fundamental principles underlying the origin of grid-cell firing are still debated. Grid-like patterns could emerge via Hebbian learning and neuronal adaptation, but current computational models have remained too abstract to allow direct confrontation with experimental data. Here, we propose a single-cell spiking model that generates grid firing fields via spike-rate adaptation and spike-timing-dependent plasticity. Through rigorous mathematical analysis applicable in the linear limit, we quantitatively predict the requirements for grid-pattern formation, and we establish a direct link to classical pattern-forming systems of the Turing type. Our study lays the groundwork for biophysically realistic models of grid-cell activity. PMID:28968386
The sympathy of two pendulum clocks: beyond Huygens' observations.
Peña Ramirez, Jonatan; Olvera, Luis Alberto; Nijmeijer, Henk; Alvarez, Joaquin
2016-03-29
This paper introduces a modern version of the classical Huygens' experiment on synchronization of pendulum clocks. The version presented here consists of two monumental pendulum clocks--ad hoc designed and fabricated--which are coupled through a wooden structure. It is demonstrated that the coupled clocks exhibit 'sympathetic' motion, i.e. the pendula of the clocks oscillate in consonance and in the same direction. Interestingly, when the clocks are synchronized, the common oscillation frequency decreases, i.e. the clocks become slow and inaccurate. In order to rigorously explain these findings, a mathematical model for the coupled clocks is obtained by using well-established physical and mechanical laws and likewise, a theoretical analysis is conducted. Ultimately, the sympathy of two monumental pendulum clocks, interacting via a flexible coupling structure, is experimentally, numerically, and analytically demonstrated.
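A minimal model in the spirit of the paper's analysis: two linearized pendula on a common, flexibly mounted support (parameter values are illustrative, and the escapement forcing that keeps real clocks running is omitted):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Two linearized pendula (angle th, rate w) on a beam (position x,
    # velocity v) with stiffness k and damping c. Illustrative parameters.
    g, l, m = 9.81, 0.5, 0.2
    M, k, c = 10.0, 50.0, 0.5

    def rhs(t, s):
        th1, w1, th2, w2, x, v = s
        # beam forced by the pendulum reactions, using th'' ~ -(g/l) th
        a = (-k * x - c * v + m * g * (th1 + th2)) / M
        return [w1, -(g / l) * th1 - a / l,
                w2, -(g / l) * th2 - a / l,
                v, a]

    s0 = [0.10, 0.0, -0.08, 0.0, 0.0, 0.0]    # mixed initial phases
    sol = solve_ivp(rhs, (0.0, 300.0), s0, max_step=0.01)
    th1, th2 = sol.y[0, -2000:], sol.y[2, -2000:]
    print("late-time correlation:", np.corrcoef(th1, th2)[0, 1])  # near -1

In this undriven sketch the in-phase mode is damped through the support while the anti-phase mode survives; the paper's full model, with escapements and its particular coupling structure, explains why the monumental clocks instead settle into consonant motion in the same direction.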
A primer on thermodynamic-based models for deciphering transcriptional regulatory logic.
Dresch, Jacqueline M; Richards, Megan; Ay, Ahmet
2013-09-01
A rigorous analysis of transcriptional regulation at the DNA level is crucial to the understanding of many biological systems. Mathematical modeling has offered researchers a new approach to understanding this central process. In particular, thermodynamic-based modeling represents the most biophysically informed approach aimed at connecting DNA level regulatory sequences to the expression of specific genes. The goal of this review is to give biologists a thorough description of the steps involved in building, analyzing, and implementing a thermodynamic-based model of transcriptional regulation. The data requirements for this modeling approach are described, the derivation for a specific regulatory region is shown, and the challenges and future directions for the quantitative modeling of gene regulation are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
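The core of a thermodynamic-based model reduces to Boltzmann-weighted occupancy states. A minimal single-activator sketch (a generic textbook form; the weights K, w_pol and omega are illustrative assumptions, not values from the review):

    import numpy as np

    # Thermodynamic-occupancy model with one activator site and one
    # polymerase site: expression probability is the weighted fraction of
    # polymerase-bound states. K is the activator's dissociation constant,
    # omega the activator-polymerase interaction weight.
    def p_expression(tf_conc, K=50.0, w_pol=0.1, omega=20.0):
        w_tf = tf_conc / K                     # statistical weight of binding
        bound = w_pol * (1.0 + omega * w_tf)   # states with polymerase bound
        total = 1.0 + w_tf + bound             # full partition function
        return bound / total

    for c in (0.0, 10.0, 50.0, 250.0):
        print("[TF] = %5.1f  ->  P(expression) = %.3f" % (c, p_expression(c)))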
NASA Astrophysics Data System (ADS)
LeBeau, Brandon; Harwell, Michael; Monson, Debra; Dupuis, Danielle; Medhanie, Amanuel; Post, Thomas R.
2012-04-01
Background: The importance of increasing the number of US college students completing degrees in science, technology, engineering or mathematics (STEM) has prompted calls for research to provide a better understanding of factors related to student participation in these majors, including the impact of a student's high-school mathematics curriculum. Purpose: This study examines the relationship between various student and high-school characteristics and completion of a STEM major in college. Of specific interest is the influence of a student's high-school mathematics curriculum on the completion of a STEM major in college. Sample: The sample consisted of approximately 3500 students from 229 high schools. Students were predominantly Caucasian (80%), with slightly more males than females (52% vs 48%). Design and method: A quasi-experimental design with archival data was used for students who enrolled in, and graduated from, a post-secondary institution in the upper Midwest. To be included in the sample, students needed to have completed at least three years of high-school mathematics. A generalized linear mixed model was used with students nested within high schools. The data were cross-sectional. Results: High-school predictors were not found to have a significant impact on the completion of a STEM major. Significant student-level predictors included ACT mathematics score, gender and high-school mathematics GPA. Conclusions: The results provide evidence that on average students are equally prepared for the rigorous mathematics coursework regardless of the high-school mathematics curriculum they completed.
Seismic waves and earthquakes in a global monolithic model
NASA Astrophysics Data System (ADS)
Roubíček, Tomáš
2018-03-01
The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple, elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics are coupled to capture, e.g. (here by a simplified model), ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core as both shear (S) and pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way, together with the corresponding interfacial conditions implicitly involved, only when their parameters are scaled appropriately in the different Earth layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for illustration, only on a relatively simple Jeffreys viscoelastic damageable material at small strains, whose various scaling limits can lead to Boger's viscoelastic fluid or even to a purely elastic (inviscid) fluid. The self-induced gravity field and Coriolis, centrifugal, and tidal forces are counted in our global model as well. The rigorous mathematical analysis, as far as the existence of solutions, convergence of the mentioned scalings, and energy conservation are concerned, is briefly presented.
Numerical Modeling of Sub-Wavelength Anti-Reflective Structures for Solar Module Applications
Han, Katherine; Chang, Chih-Hung
2014-01-01
This paper reviews the current progress in mathematical modeling of anti-reflective subwavelength structures. Methods covered include effective medium theory (EMT), finite-difference time-domain (FDTD), transfer matrix method (TMM), the Fourier modal method (FMM)/rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). Time-based solutions to Maxwell’s equations, such as FDTD, have the benefits of calculating reflectance for multiple wavelengths of light per simulation, but are computationally intensive. Space-discretized methods such as FDTD and FEM output field strength results over the whole geometry and are capable of modeling arbitrary shapes. Frequency-based solutions such as RCWA/FMM and FEM model one wavelength per simulation and are thus able to handle dispersion for regular geometries. Analytical approaches such as TMM are appropriate for very simple thin films. Initial disadvantages such as neglect of dispersion (FDTD), inaccuracy in TM polarization (RCWA), inability to model aperiodic gratings (RCWA), and inaccuracy with metallic materials (FDTD) have been overcome by most modern software. All rigorous numerical methods have accurately predicted the broadband reflection of ideal, graded-index anti-reflective subwavelength structures; ideal structures are tapered nanostructures with periods smaller than the wavelengths of light of interest and lengths that are at least a large portion of the wavelengths considered. PMID:28348287
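As a concrete instance of the simplest reviewed method, a characteristic-matrix TMM sketch for a single homogeneous film at normal incidence (a quarter-wave MgF2-like coating on a silicon-like substrate; the indices are illustrative and dispersion is ignored):

    import numpy as np

    # Characteristic-matrix TMM: reflectance of one film (index n1, thickness
    # d) on a substrate (index ns) in air (n0), at normal incidence.
    def reflectance(wl, n1, d, ns, n0=1.0):
        delta = 2.0 * np.pi * n1 * d / wl              # film phase thickness
        M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                      [1j * n1 * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, ns])
        r = (n0 * B - C) / (n0 * B + C)
        return np.abs(r) ** 2

    n1, ns = 1.38, 3.9                                  # MgF2-like on Si-like
    d = 550.0 / (4.0 * n1)                              # quarter wave at 550 nm
    for wl in (450.0, 550.0, 650.0):
        print("R(%d nm) = %.4f" % (wl, reflectance(wl, n1, d, ns)))

The same matrix formalism extends to multilayer stacks by multiplying one characteristic matrix per layer, which is why TMM is the natural baseline against which the rigorous grating methods are compared.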
Information flow and causality as rigorous notions ab initio
NASA Astrophysics Data System (ADS)
Liang, X. San
2016-11-01
Information flow, or information transfer, the widely applicable general physics notion, can be rigorously derived from first principles, rather than axiomatically proposed as an ansatz. Its logical association with causality is firmly rooted in the dynamical system that lies beneath. The principle of nil causality (an event is not causal to another if the evolution of the latter is independent of the former), which transfer entropy analysis and the Granger causality test fail to verify in many situations, turns out to be a proven theorem here. Established in this study are the information flows among the components of time-discrete mappings and time-continuous dynamical systems, both deterministic and stochastic. They have been obtained explicitly in closed form, and put to applications with benchmark systems such as the Kaplan-Yorke map, the Rössler system, the baker transformation, the Hénon map, and stochastic potential flow. Besides unraveling the causal relations expected from the respective systems, some of the applications show that the information flow structure underlying a complex trajectory pattern can be tractable. For linear systems, the resulting remarkably concise formula asserts analytically that causation implies correlation, while correlation does not imply causation, providing a mathematical basis for the long-standing philosophical debate over causation versus correlation.
A transformative model for undergraduate quantitative biology education.
Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.
A Transformative Model for Undergraduate Quantitative Biology Education
Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.
Statistical hydrodynamics and related problems in spaces of probability measures
NASA Astrophysics Data System (ADS)
Dostoglou, Stamatios
2017-11-01
A rigorous theory of statistical solutions of the Navier-Stokes equations, suitable for exploring Kolmogorov's ideas, has been developed by M.I. Vishik and A.V. Fursikov, culminating in their monograph "Mathematical problems of Statistical Hydromechanics." We review some progress made in recent years following this approach, with emphasis on problems concerning the correlation of velocities and corresponding questions in the space of probability measures on Hilbert spaces.
ACM TOMS replicated computational results initiative
Heroux, Michael Allen
2015-06-03
The scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as The ACM Transactions on Mathematical Software (TOMS) use this process for rigorously promoting the clarity and completeness of content, and citation of prior work. At the same time, it is unusual to independently confirm computational results.
Improved mathematical and computational tools for modeling photon propagation in tissue
NASA Astrophysics Data System (ADS)
Calabro, Katherine Weaver
Light interacts with biological tissue through two predominant mechanisms: scattering and absorption, which are sensitive to the size and density of cellular organelles and to biochemical composition (e.g., hemoglobin), respectively. During the progression of disease, tissues undergo a predictable set of changes in cell morphology and vascularization, which directly affect their scattering and absorption properties. Hence, quantification of these optical property differences can be used to identify the physiological biomarkers of disease, with interest often focused on cancer. Diffuse reflectance spectroscopy is a diagnostic tool wherein broadband visible light is transmitted through a fiber optic probe into a turbid medium and, after propagating through the sample, a fraction of the light is collected at the surface as reflectance. The measured reflectance spectrum can be analyzed with appropriate mathematical models to extract the optical properties of the tissue and, from these, a set of physiological properties. A number of models have been developed for this purpose using a variety of approaches, from diffusion theory to computational simulations and empirical observations. However, these models are generally limited to narrow ranges of tissue and probe geometries. In this thesis, reflectance models were developed for a much wider range of measurement parameters, and influences such as the scattering phase function and probe design were investigated rigorously for the first time. The results provide a comprehensive understanding of the factors that influence reflectance, with novel insights that, in some cases, challenge current assumptions in the field. An improved Monte Carlo simulation program, designed to run on a graphics processing unit (GPU), was built to simulate the data used in the development of the reflectance models. Rigorous error analysis was performed to identify how inaccuracies in modeling assumptions can be expected to affect the accuracy of optical property values extracted from experimentally acquired reflectance spectra. From this analysis, probe geometries that offer the best robustness against error in the estimation of physiological properties from tissue are presented. Finally, several in vivo studies demonstrating the use of reflectance spectroscopy for both research and clinical applications are presented.
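The core of such simulations is a short step loop. A stripped-down sketch (isotropic scattering, no refractive-index mismatch at the boundary, illustrative optical properties; the thesis itself treats anisotropic phase functions and realistic probe geometries):

    import numpy as np

    # Minimal photon random walk in a semi-infinite turbid medium:
    # exponential step lengths, absorption by weight reduction, isotropic
    # rescattering. Returns the fraction of launched weight that re-emerges
    # through the surface (diffuse reflectance).
    rng = np.random.default_rng(0)
    mu_a, mu_s = 1.0, 9.0                      # absorption/scattering, 1/mm
    mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)

    def diffuse_reflectance(n_photons=5000):
        collected = 0.0
        for _ in range(n_photons):
            z, w, cos_t = 0.0, 1.0, 1.0        # start at surface, heading down
            while w > 1e-3:                    # weight cutoff terminates walk
                z += cos_t * (-np.log(rng.uniform()) / mu_t)
                if z < 0.0:                    # crossed back out of the medium
                    collected += w
                    break
                w *= albedo                    # deposit (1 - albedo) of weight
                cos_t = rng.uniform(-1.0, 1.0) # isotropic: cos(theta) ~ U(-1,1)
        return collected / n_photons

    print("diffuse reflectance ~ %.3f" % diffuse_reflectance())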
Dóka, Éva; Lente, Gábor
2017-04-13
This work presents a rigorous mathematical study of the effect of unavoidable inhomogeneities in laser flash photolysis experiments. There are two different kinds of inhomogeneities: the first arises from diffusion, whereas the second one has geometric origins (the shapes of the excitation and detection light beams). Both of these are taken into account in our reported model, which gives rise to a set of reaction-diffusion type partial differential equations. These equations are solved by a specially developed finite volume method. As an example, the aqueous reaction between the sulfate ion radical and iodide ion is used, for which sufficiently detailed experimental data are available from an earlier publication. The results showed that diffusion itself is in general too slow to influence the kinetic curves on the usual time scales of laser flash photolysis experiments. However, the use of the measured absorbances (e.g., to calculate the molar absorption coefficients of transient species) requires very detailed mathematical consideration and full knowledge of the geometrical shapes of the excitation laser beam and the separate detection light beam. It is also noted that the usual pseudo-first-order approach to evaluating the kinetic traces can be used successfully even if the usual large-excess condition is not rigorously met locally in the reaction cell.
NASA Technical Reports Server (NTRS)
Selcuk, M. K.
1979-01-01
The Vee-Trough/Vacuum Tube Collector (VTVTC) aimed to improve the efficiency and reduce the cost of collectors assembled from evacuated tube receivers. The VTVTC was analyzed rigorously and a mathematical model was developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak and aluminized GEB Teflon. Tests were run at temperatures ranging from 95 to 180 C during the months of April, May, June, July and August 1977. Vee-trough collector efficiencies of 35-40 per cent were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Test data covering a complete day are presented for selected dates throughout the test season. Predicted daily useful heat collection and efficiency values are presented for a year's duration at operation temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with an economic evaluation.
Mathematical analysis of the multiband BCS gap equations in superconductivity
NASA Astrophysics Data System (ADS)
Yang, Yisong
2005-01-01
In this paper, we present a mathematical analysis for the phonon-dominated multiband isotropic and anisotropic BCS gap equations at any finite temperature T. We establish the existence of a critical temperature T so that, when T
Mokhtari, Amir; Oryang, David; Chen, Yuhuan; Pouillot, Regis; Van Doren, Jane
2018-01-08
We developed a probabilistic mathematical model for the postharvest processing of leafy greens focusing on Escherichia coli O157:H7 contamination of fresh-cut romaine lettuce as the case study. Our model can (i) support the investigation of cross-contamination scenarios, and (ii) evaluate and compare different risk mitigation options. We used an agent-based modeling framework to predict the pathogen prevalence and levels in bags of fresh-cut lettuce and quantify spread of E. coli O157:H7 from contaminated lettuce to surface areas of processing equipment. Using an unbalanced factorial design, we were able to propagate combinations of random values assigned to model inputs through different processing steps and ranked statistically significant inputs with respect to their impacts on selected model outputs. Results indicated that whether contamination originated on incoming lettuce heads or on the surface areas of processing equipment, pathogen prevalence among bags of fresh-cut lettuce and batches was most significantly impacted by the level of free chlorine in the flume tank and frequency of replacing the wash water inside the tank. Pathogen levels in bags of fresh-cut lettuce were most significantly influenced by the initial levels of contamination on incoming lettuce heads or surface areas of processing equipment. The influence of surface contamination on pathogen prevalence or levels in fresh-cut bags depended on the location of that surface relative to the flume tank. This study demonstrates that developing a flexible yet mathematically rigorous modeling tool, a "virtual laboratory," can provide valuable insights into the effectiveness of individual and combined risk mitigation options. © 2018 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
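The sensitivity machinery referred to can be summarized in two equations (standard discrete-adjoint notation, added here for orientation: R is the discrete residual, Q the flow state, D the design variables, f the objective):

\[
\left(\frac{\partial R}{\partial Q}\right)^{T}\lambda = -\left(\frac{\partial f}{\partial Q}\right)^{T},
\qquad
\frac{df}{dD} = \frac{\partial f}{\partial D} + \lambda^{T}\frac{\partial R}{\partial D},
\]

so a single adjoint solve for \(\lambda\) yields the sensitivity of \(f\) with respect to every design variable at roughly the cost of one extra flow solution.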
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
The sympathy of two pendulum clocks: beyond Huygens’ observations
Peña Ramirez, Jonatan; Olvera, Luis Alberto; Nijmeijer, Henk; Alvarez, Joaquin
2016-01-01
This paper introduces a modern version of the classical Huygens’ experiment on synchronization of pendulum clocks. The version presented here consists of two monumental pendulum clocks—ad hoc designed and fabricated—which are coupled through a wooden structure. It is demonstrated that the coupled clocks exhibit ‘sympathetic’ motion, i.e. the pendula of the clocks oscillate in consonance and in the same direction. Interestingly, when the clocks are synchronized, the common oscillation frequency decreases, i.e. the clocks become slow and inaccurate. In order to rigorously explain these findings, a mathematical model for the coupled clocks is obtained by using well-established physical and mechanical laws and likewise, a theoretical analysis is conducted. Ultimately, the sympathy of two monumental pendulum clocks, interacting via a flexible coupling structure, is experimentally, numerically, and analytically demonstrated. PMID:27020903
Model of dissolution in the framework of tissue engineering and drug delivery.
Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R
2018-05-22
Dissolution phenomena are ubiquitous in biomaterials across many different fields. Despite the advantages of simulation-based design of biomaterials in medical applications, additional efforts are needed to derive reliable models which describe the process of dissolution. A phenomenologically based model, available for simulation of dissolution in biomaterials, is introduced in this paper. The model reduces to a set of reaction-diffusion equations, implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of model parameters on the overall dissolution process. Then, the model is calibrated and validated against a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for the design of biomaterials in the tissue engineering and drug delivery research areas.
On the relation between phase-field crack approximation and gradient damage modelling
NASA Astrophysics Data System (ADS)
Steinke, Christian; Zreid, Imadeddin; Kaliske, Michael
2017-05-01
The finite element implementation of a gradient-enhanced microplane damage model is compared to a phase-field model for brittle fracture. Phase-field models and implicit gradient damage models share many similarities despite being conceived from very different standpoints. In both approaches, an additional differential equation and a length scale are introduced. However, while the phase-field method is formulated starting from the description of a crack in fracture mechanics, the gradient method starts from a continuum mechanics point of view. First, the scopes of application of the two models are discussed to point out where they intersect. Then, the employed mathematical methods are analyzed and rigorously compared. Finally, numerical examples are introduced to illustrate the findings of the comparison, which are summarized in a conclusion at the end of the paper.
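For concreteness, the phase-field side of the comparison is typically built on a regularized fracture energy of the following form (the widely used AT2 variant, added here for orientation; the paper's microplane damage formulation differs in its driving terms):

\[
E(\mathbf{u},d) = \int_{\Omega}(1-d)^{2}\,\psi\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,d\Omega
+ G_{c}\int_{\Omega}\left(\frac{d^{2}}{2\ell} + \frac{\ell}{2}\lvert\nabla d\rvert^{2}\right)d\Omega,
\]

where \(d\in[0,1]\) is the crack phase field, \(\ell\) the regularization length and \(G_{c}\) the fracture energy. The gradient-enhanced damage model introduces its length scale analogously, through a gradient (nonlocal) term in the damage loading function, which is what makes the two approaches comparable in the first place.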
NASA Astrophysics Data System (ADS)
Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem
2018-05-01
In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick
2016-11-01
Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation, and in particular to the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low-Reynolds-number laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to the eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov-based methodologies do not rely on mean-flow assumptions and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
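The Lyapunov machinery can be illustrated compactly on a classic low-dimensional chaotic system (a Benettin/QR sketch on the Lorenz-63 equations, standing in for the bluff-body wake; the forward-Euler integrator and parameters are illustrative):

    import numpy as np

    # Benettin/QR method: evolve the state plus a tangent-space basis,
    # re-orthonormalize periodically, and average the log stretching factors.
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

    def f(x):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def jac(x):
        return np.array([[-sigma, sigma, 0.0],
                         [rho - x[2], -1.0, -x[0]],
                         [x[1], x[0], -beta]])

    dt, n_steps = 0.002, 200000
    x, Q, lyap = np.array([1.0, 1.0, 1.0]), np.eye(3), np.zeros(3)
    for i in range(n_steps):
        x = x + dt * f(x)                   # forward Euler, fine for a sketch
        Q = Q + dt * jac(x) @ Q             # tangent (variational) dynamics
        if i % 10 == 0:
            Q, R = np.linalg.qr(Q)
            lyap += np.log(np.abs(np.diag(R)))
    # expected roughly (0.9, 0, -14.6) for these standard Lorenz parameters
    print("Lyapunov exponents ~", lyap / (n_steps * dt))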
ERIC Educational Resources Information Center
Neri, Rebecca; Lozano, Maritza; Chang, Sandy; Herman, Joan
2016-01-01
New college and career ready standards (CCRS) have established more rigorous expectations of learning for all learners, including English learner (EL) students, than what was expected in previous standards. A common feature in these new content-area standards, such as the Common Core State Standards in English language arts and mathematics and the…
Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.
1986-09-01
...respectively. Here h is a parameter which is usually related to the size of the grid associated with the finite element partitioning of the domain Ω. Then one... grid and of not at least performing serious mesh refinement studies. It also points out the usefulness of rigorous results concerning the stability... overconstrained the approximate velocity field. However, by employing different grids for the pressure and velocity fields, the linear-constant...
Advanced Extremely High Frequency Satellite (AEHF)
2015-12-01
...control their tactical and strategic forces at all levels of conflict up to and including general nuclear war, and it supports the attainment of... Confidence Level of cost estimate for current APB: 50%. The ICE that supports the AEHF SV 1-4, like all life-cycle cost... to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in methods used in building
2015-12-01
system level testing. The WGS-6 financial data is not reported in this SAR because funding is provided by Australia in exchange for access to a... Confidence Level of cost estimate for current APB: 50%. The ICE to support the WGS Milestone C decision... to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in
2007-02-28
Z. Mu, R. Plemmons, and P. Santago, "Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response," International Journal of Imaging Systems and Technology, 1767-1782, 2006... rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies
All biology is computational biology.
Markowetz, Florian
2017-03-01
Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life, it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science.
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with the activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation functions. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
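As a rough sketch of the measurement model that underlies such techniques (illustrative sizes and random binary patterns assumed; this is not the paper's frame-theoretic estimator), each single-pixel reading is the inner product of the scene with one structured-light pattern, and a least-squares reconstruction amounts to applying the canonical dual frame:

    import numpy as np

    # Single-pixel model: y_i = <phi_i, x> for an unknown scene x.
    rng = np.random.default_rng(0)
    n_pixels = 16 * 16
    n_patterns = 400                     # over-complete pattern set (a frame)

    x_true = rng.random(n_pixels)        # unknown scene, flattened
    Phi = rng.choice([0.0, 1.0], size=(n_patterns, n_pixels))   # binary patterns
    y = Phi @ x_true + 0.01 * rng.standard_normal(n_patterns)   # noisy readings

    # Least-squares reconstruction; in frame terms, the pseudoinverse applies
    # the canonical dual frame to the measurement sequence.
    x_hat = np.linalg.pinv(Phi) @ y
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))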
OLED emission zone measurement with high accuracy
NASA Astrophysics Data System (ADS)
Danz, N.; MacCiarnain, R.; Michaelis, D.; Wehlus, T.; Rausch, A. F.; Wächter, C. A.; Reusch, T. C. G.
2013-09-01
Highly efficient state of the art organic light-emitting diodes (OLED) comprise thin emitting layers with thicknesses in the order of 10 nm. The spatial distribution of the photon generation rate, i.e. the profile of the emission zone, inside these layers is of interest for both device efficiency analysis and characterization of charge recombination processes. It can be accessed experimentally by reverse simulation of far-field emission pattern measurements. Such a far-field pattern is the sum of individual emission patterns associated with the corresponding positions inside the active layer. Based on rigorous electromagnetic theory the relation between far-field pattern and emission zone is modeled as a linear problem. This enables a mathematical analysis to be applied to the cases of single and double emitting layers in the OLED stack as well as to pattern measurements in air or inside the substrate. From the results, guidelines for optimum emitter - cathode separation and for selecting the best experimental approach are obtained. Limits for the maximum spatial resolution can be derived.
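A minimal numerical sketch of that linear relation (the kernel below is a made-up smooth function, not the rigorous electromagnetic kernel of the paper): discretize the emission zone, simulate the far-field pattern, and recover the non-negative profile by reverse simulation:

    import numpy as np
    from scipy.optimize import nnls

    # F(theta) = sum_j K(theta, z_j) * p(z_j): far field as a linear map of
    # the emission-zone profile p.
    z = np.linspace(0.0, 10.0, 50)             # position in emitting layer (nm)
    theta = np.linspace(0.0, np.pi / 2, 80)    # far-field angles

    K = np.cos(np.outer(theta, z / 10.0)) ** 2     # placeholder kernel K(theta, z)
    p_true = np.exp(-((z - 3.0) ** 2) / 2.0)       # assumed emission profile
    F = K @ p_true                                 # simulated far-field pattern

    # Reverse simulation: non-negative least squares recovers the profile.
    p_hat, residual = nnls(K, F)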
A global solution to the Schrödinger equation: From Henstock to Feynman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu
2015-09-15
One of the key elements of Feynman's formulation of non-relativistic quantum mechanics is the so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition rather than a well-defined object. All previous attempts to supply Feynman's theory with a rigorous mathematical underpinning, based on the physical requirements, have not been satisfactory. The difficulty comes from the need to define a measure on the infinite-dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a new approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variable: With Applications in Stochastic Calculus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with non-absolute integrability of the Fresnel integrals in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman's intuitive derivations. But in his work, Muldowney gives only local in space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L²(ℝⁿ). The purpose of this paper is to show that a system of one-dimensional local Muldowney solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L²(ℝⁿ).
The long-time dynamics of two hydrodynamically-coupled swimming cells.
Michelin, Sébastien; Lauga, Eric
2010-05-01
Swimming microorganisms such as bacteria or spermatozoa are typically found in dense suspensions, and exhibit collective modes of locomotion qualitatively different from those displayed by isolated cells. In the dilute limit where fluid-mediated interactions can be treated rigorously, the long-time hydrodynamics of a collection of cells result from interactions with many other cells, and as such typically elude an analytical approach. Here, we consider the only case where such a problem can be treated rigorously and analytically, namely when the cells have spatially confined trajectories, such as the spermatozoa of some marine invertebrates. We consider two spherical cells swimming, when isolated, with arbitrary circular trajectories, and derive the long-time kinematics of their relative locomotion. We show that in the dilute limit where the cells are much further away than their size and the size of their circular motion, a separation of time scales occurs between a fast (intrinsic) swimming time and a slow time where hydrodynamic interactions lead to changes in the relative position and orientation of the swimmers. We perform a multiple-scale analysis and derive the effective dynamical system, of dimension two, describing the long-time behavior of the pair of cells. We show that the system displays one type of equilibrium and two types of rotational equilibrium, all of which are found to be unstable. A detailed mathematical analysis of the dynamical system further allows us to show that only two cell-cell behaviors are possible as t → ∞: either the cells are attracted to each other (possibly monotonically), or they are repelled (possibly monotonically as well), which we confirm with numerical computations. Our analysis shows therefore that, even in the dilute limit, hydrodynamic interactions lead to new modes of cell-cell locomotion.
NASA Astrophysics Data System (ADS)
Osman, Sharifah; Mohammad, Shahrin; Abu, Mohd Salleh
2015-05-01
Mathematics and engineering are inexorably linked: both are essential for analyzing and assessing thought in order to make good judgments when dealing with complex and varied engineering problems. A study within the current engineering education curriculum exploring how critical thinking and mathematical thinking relate to one another is therefore timely and crucial. Unfortunately, little information is available explicating the link. This paper aims to report the findings of a critical review and to briefly describe on-going research investigating the dispositions of critical thinking and the relationship and integration between critical thinking and mathematical thinking during the execution of civil engineering tasks. The first part of the paper reports an in-depth review of these matters based on rather limited resources. The review showed a considerable degree of congruency between these two perspectives of thinking, along with some prevalent trends in engineering workplace tasks, problems and challenges. The second part describes on-going research in which the researcher will rigorously investigate the relationship and integration between these two types of thinking within the perspective of civil engineering tasks. Reasonably close non-participant observations and semi-structured interviews will be conducted for the pilot and main stages of the study. The data will be analyzed using constant comparative analysis within a grounded theory methodology. The findings will serve as a useful grounding for constructing a substantive theory revealing the integral relationship between critical thinking and mathematical thinking in the real civil engineering practice context. This substantive theory is expected to contribute additional useful information to engineering program outcomes and engineering education instruction, in line with the expectations set by the Engineering Accreditation Council.
Modeling and Analysis of the Reverse Water Gas Shift Process for In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Whitlow, Jonathan E.
2000-01-01
This report focuses on the development of mathematical models and simulation tools for the Reverse Water Gas Shift (RWGS) process. This process is a candidate technology for oxygen production on Mars under the In-Situ Propellant Production (ISPP) project. An analysis of the RWGS process was performed using a material balance for the system. The material balance is very complex due to the downstream separations and subsequent recycle inherent in the process. A numerical simulation was developed for the RWGS process to provide a tool for analysis and optimization of experimental hardware, which will be constructed later this year at Kennedy Space Center (KSC). Attempts to solve the material balance for the system, which can be defined by 27 nonlinear equations, initially failed. A convergence scheme was developed which led to successful solution of the material balance; however, the simplified equations used for the gas separation membrane were found to be insufficient. Additional, more rigorous models were successfully developed and solved for the membrane separation. Sample results from these models are included in this report, with recommendations for the experimental work needed for model validation.
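To make the recycle difficulty concrete, here is a toy two-equation analogue of such a balance (the stream structure, conversion, and separator efficiency are illustrative assumptions, not the report's 27-equation system), with a few successive-substitution passes used to initialize a Newton-type solver:

    import numpy as np
    from scipy.optimize import fsolve

    # Fresh CO2 feed f mixes with recycle r; single-pass conversion X;
    # unreacted CO2 is separated with efficiency eta and recycled.
    f, X, eta = 1.0, 0.4, 0.9

    def balance(v):
        m, r = v                          # m: reactor inlet CO2, r: recycle CO2
        return [f + r - m,                # mixer balance
                eta * (1.0 - X) * m - r]  # separator/recycle balance

    # Successive substitution gives a feasible starting guess, mimicking the
    # kind of convergence scheme the report describes.
    r0 = 0.0
    for _ in range(5):
        r0 = eta * (1.0 - X) * (f + r0)

    m_sol, r_sol = fsolve(balance, [f + r0, r0])
    print(f"reactor inlet CO2: {m_sol:.3f}, recycle CO2: {r_sol:.3f}")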
Tropical atmospheric circulations with humidity effects.
Hsia, Chun-Hsiung; Lin, Chang-Shou; Ma, Tian; Wang, Shouhong
2015-01-08
The main objective of this article is to study the effect of moisture on the planetary-scale atmospheric circulation over the tropics. The model we adopt is the Boussinesq equations coupled with a diffusive equation for humidity, and the humidity-dependent heat source is modelled by a linear approximation of the humidity. The rigorous mathematical analysis is carried out using the dynamic transition theory. In particular, we obtain mixed transitions, also known as random transitions, as described in Ma & Wang (2010 Discrete Contin. Dyn. Syst. 26, 1399-1417. (doi:10.3934/dcds.2010.26.1399); 2011 Adv. Atmos. Sci. 28, 612-622. (doi:10.1007/s00376-010-9089-0)). The analysis also indicates the need to include turbulent friction terms in the model to obtain correct convection scales for the large-scale tropical atmospheric circulations, leading in particular to the right critical temperature gradient and the length scale for the Walker circulation. In short, the analysis shows that the effect of moisture lowers the magnitude of the critical thermal Rayleigh number and does not change the essential characteristics of the dynamical behaviour of the system.
Lenas, Petros; Moos, Malcolm; Luyten, Frank P
2009-12-01
The field of tissue engineering is moving toward a new concept of "in vitro biomimetics of in vivo tissue development." In Part I of this series, we proposed a theoretical framework integrating the concepts of developmental biology with those of process design to provide the rules for the design of biomimetic processes. We named this methodology "developmental engineering" to emphasize that it is not the tissue but the process of in vitro tissue development that has to be engineered. To formulate the process design rules in a rigorous way that will allow a computational design, we should refer to mathematical methods to model the biological process taking place in vitro. Tissue functions cannot be attributed to individual molecules but rather to complex interactions between the numerous components of a cell and interactions between cells in a tissue that form a network. For tissue engineering to advance to the level of a technologically driven discipline amenable to well-established principles of process engineering, a scientifically rigorous formulation is needed of the general design rules so that the behavior of networks of genes, proteins, or cells that govern the unfolding of developmental processes could be related to the design parameters. Now that sufficient experimental data exist to construct plausible mathematical models of many biological control circuits, explicit hypotheses can be evaluated using computational approaches to facilitate process design. Recent progress in systems biology has shown that the empirical concepts of developmental biology that we used in Part I to extract the rules of biomimetic process design can be expressed in rigorous mathematical terms. This allows the accurate characterization of manufacturing processes in tissue engineering as well as the properties of the artificial tissues themselves. In addition, network science has recently shown that the behavior of biological networks strongly depends on their topology and has developed the necessary concepts and methods to describe it, allowing therefore a deeper understanding of the behavior of networks during biomimetic processes. These advances thus open the door to a transition for tissue engineering from a substantially empirical endeavor to a technology-based discipline comparable to other branches of engineering.
War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?
Rzhetsky, Andrey; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276
Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.
Gomez, Christophe; Hartung, Niklas
2018-01-01
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
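A minimal sketch of the first framework (the rate law and constants below are illustrative assumptions, not the chapter's fitted model): emission times are drawn from an inhomogeneous Poisson process whose rate grows with primary tumour size, simulated by thinning:

    import numpy as np

    rng = np.random.default_rng(1)
    a, alpha, g, T = 0.05, 2.0 / 3.0, 0.1, 50.0

    def lam(t):
        # emission rate lambda(t) = a * S(t)^alpha, exponential tumour growth S(t)
        return a * np.exp(g * t) ** alpha

    # Ogata thinning: propose events at a dominating constant rate, accept
    # each candidate with probability lambda(t) / lam_max.
    lam_max = lam(T)
    t, events = 0.0, []
    while t < T:
        t += rng.exponential(1.0 / lam_max)
        if t < T and rng.random() < lam(t) / lam_max:
            events.append(t)
    print(f"{len(events)} emission events by t = {T}")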
NASA Astrophysics Data System (ADS)
Holmes, Mark H.
2006-10-01
To help students grasp the intimate connections that exist between mathematics and its applications in other disciplines, a library of interactive learning modules was developed. This library covers the mathematical areas normally studied by undergraduate students and is used in science courses at all levels. Moreover, the library is designed not just to provide critical connections across disciplines but also to provide longitudinal subject reinforcement as students progress in their studies. In the process of developing the modules, a complete editing and publishing system was constructed that is optimized for automated maintenance and upgradeability of materials. The result is a single integrated production system for web-based educational materials. Included in this is a rigorous assessment program, involving both internal and external evaluations of each module. As will be seen, the formative evaluation obtained during the development of the library resulted in modules that successfully bridge multiple disciplines and break down the disciplinary barriers students commonly encounter between their math and non-math courses.
Inferring the source of evaporated waters using stable H and O isotopes
NASA Astrophysics Data System (ADS)
Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.
2017-12-01
Stable isotope ratios of H and O are widely used to identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including the potential for bias from a commonly used regression-based approach to EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that data for most lakes are consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
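For contrast, the traditional back-correction that this framework improves upon reduces to intersecting two lines in δ2H/δ18O space; a sketch with illustrative slopes and sample values:

    # Global meteoric water line: d2H = 8 * d18O + 10 (standard values); the
    # evaporation-line slope and the sample are illustrative assumptions.
    mwl_slope, mwl_intercept = 8.0, 10.0
    el_slope = 4.5
    d18O_sample, d2H_sample = -2.0, -30.0

    # EL through the sample: d2H = el_slope * (d18O - d18O_sample) + d2H_sample.
    # Setting this equal to the MWL gives the source composition.
    d18O_src = (d2H_sample - el_slope * d18O_sample - mwl_intercept) / (mwl_slope - el_slope)
    d2H_src = mwl_slope * d18O_src + mwl_intercept
    print(f"inferred source: d18O = {d18O_src:.2f}, d2H = {d2H_src:.2f}")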
2010-10-18
August 2010 … building the right game: World of Warcraft has 30% women (according to womengamers.com). Conclusion: we don't really understand why… Report of the National Academies on Informal Learning; infancy to late adulthood: learning about the world and developing important skills for science… Education with Rigor and Vigor: excitement, interest, and motivation to learn about phenomena in the natural and physical world; generate…
A Center of Excellence in the Mathematical Sciences - at Cornell University
1992-03-01
of my recent efforts go in two directions. 1. Cellular Automata. The Greenberg-Hastings model is a simple system that models the behavior of an… Greenberg-Hastings Model. We also obtained results concerning the critical value for a threshold voter model. This resulted in the papers "Some Rigorous… Results for the Greenberg-Hastings Model" and "Fixation Results for Threshold Voter Systems." Together with Scot Adams, I wrote "An Application of the
NASA Astrophysics Data System (ADS)
Böbel, A.; Knapek, C. A.; Räth, C.
2018-05-01
Experiments on the recrystallization processes in two-dimensional complex plasmas are analyzed to rigorously test a recently developed scale-free phase transition theory. The "fractal-domain-structure" (FDS) theory is based on the kinetic theory of Frenkel. It assumes the formation of homogeneous domains, separated by defect lines, during crystallization and a fractal relationship between domain area and boundary length. For the defect number fraction and system energy a scale-free power-law relation is predicted. The long-range scaling behavior of the bond-order correlation function shows clearly that the complex plasma phase transitions are not of the Kosterlitz, Thouless, Halperin, Nelson, and Young type. Previous preliminary results obtained by counting the number of dislocations and applying a bond-order metric for structural analysis are reproduced. These findings are supplemented by extending the use of the bond-order metric to measure the defect number fraction and furthermore applying state-of-the-art analysis methods, allowing a systematic testing of the FDS theory with unprecedented scrutiny: A morphological analysis of lattice structure is performed via Minkowski tensor methods. Minkowski tensors form a complete family of additive, motion covariant and continuous morphological measures that are sensitive to nonlinear properties. The FDS theory is rigorously confirmed and predictions of the theory are reproduced extremely well. The predicted scale-free power-law relation between defect fraction number and system energy is verified for one more order of magnitude at high energies compared to the inherently discontinuous bond-order metric. It is found that the fractal relation between crystalline domain area and circumference is independent of the experiment, the particular Minkowski tensor method, and the particular choice of parameters. Thus, the fractal relationship seems to be inherent to two-dimensional phase transitions in complex plasmas. Minkowski tensor analysis turns out to be a powerful tool for investigations of crystallization processes. It is capable of revealing nonlinear local topological properties while still providing easily interpretable results founded on a solid mathematical framework.
Are computational models of any use to psychiatry?
Huys, Quentin J M; Moutoussis, Michael; Williams, Jonathan
2011-08-01
Mathematically rigorous descriptions of key hypotheses and theories are becoming more common in neuroscience and are beginning to be applied to psychiatry. In this article two fictional characters, Dr. Strong and Mr. Micawber, debate the use of such computational models (CMs) in psychiatry. We present four fundamental challenges to the use of CMs in psychiatry: (a) the applicability of mathematical approaches to core concepts in psychiatry such as subjective experiences, conflict and suffering; (b) whether psychiatry is mature enough to allow informative modelling; (c) whether theoretical techniques are powerful enough to approach psychiatric problems; and (d) the issue of communicating clinical concepts to theoreticians and vice versa. We argue that CMs have yet to influence psychiatric practice, but that they help psychiatric research in two fundamental ways: (a) to build better theories integrating psychiatry with neuroscience; and (b) to enforce explicit, global and efficient testing of hypotheses through more powerful analytical methods. CMs allow the complexity of a hypothesis to be rigorously weighed against the complexity of the data. The paper concludes with a discussion of the path ahead. It points to stumbling blocks, like the poor communication between theoretical and medical communities. But it also identifies areas in which the contributions of CMs will likely be pivotal, like an understanding of social influences in psychiatry, and of the co-morbidity structure of psychiatric diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.
A rigorous computational approach to linear response
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Galatolo, Stefano; Nisoli, Isaia; Niu, Xiaolong
2018-03-01
We present a general setting in which the formula describing the linear response of the physical measure of a perturbed system can be obtained. In this general setting we obtain an algorithm to rigorously compute the linear response. We apply our results to expanding circle maps. In particular, we present examples where we compute, up to a pre-specified error in the L∞-norm, the response of expanding circle maps under stochastic and deterministic perturbations. Moreover, we present an example where we compute, up to a pre-specified error in the L¹-norm, the response of the intermittent family at the boundary, i.e. when the unperturbed system is the doubling map.
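A non-rigorous numerical illustration of the same question (the paper's contribution is precisely the validated error bound that this sketch lacks): approximate the transfer operator of a perturbed doubling map by an Ulam matrix and difference the stationary densities; the perturbation T_eps(x) = 2x + eps·sin(2πx) mod 1 is an assumed example:

    import numpy as np

    N, eps = 400, 1e-4
    rng = np.random.default_rng(2)
    # 2000 sample points in each of the N bins; the same points are reused for
    # both parameter values so Monte Carlo noise cancels in the difference.
    samples = (np.arange(N)[:, None] + rng.random((N, 2000))) / N

    def ulam_density(e):
        images = np.mod(2.0 * samples + e * np.sin(2.0 * np.pi * samples), 1.0)
        P = np.zeros((N, N))
        for i in range(N):
            idx = np.floor(images[i] * N).astype(int) % N
            P[i] = np.bincount(idx, minlength=N) / images.shape[1]
        rho = np.full(N, 1.0 / N)     # power iteration for the stationary density
        for _ in range(500):
            rho = rho @ P
            rho /= rho.sum()
        return rho * N                # convert bin mass to density

    response = (ulam_density(eps) - ulam_density(0.0)) / eps   # finite difference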
Steady-state and dynamic models for particle engulfment during solidification
NASA Astrophysics Data System (ADS)
Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.
2016-06-01
Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible to approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition of an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior on engulfment events for many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.
Continuum mechanics and thermodynamics in the Hamilton and the Godunov-type formulations
NASA Astrophysics Data System (ADS)
Peshkov, Ilya; Pavelka, Michal; Romenski, Evgeniy; Grmela, Miroslav
2018-01-01
Continuum mechanics with dislocations, with the Cattaneo-type heat conduction, with mass transfer, and with electromagnetic fields is put into the Hamiltonian form and into the form of the Godunov-type system of first-order, symmetric hyperbolic partial differential equations (SHTC equations). The compatibility with thermodynamics of the time-reversible part of the governing equations is mathematically expressed in the former formulation as degeneracy of the Hamiltonian structure and in the latter formulation as the existence of a companion conservation law. In both formulations the time-irreversible part represents gradient dynamics. The Godunov-type formulation brings mathematical rigor (the local well-posedness of the Cauchy initial value problem) and the possibility to discretize while keeping the physical content of the governing equations (the Godunov finite volume discretization).
Observations of fallibility in applications of modern programming methodologies
NASA Technical Reports Server (NTRS)
Gerhart, S. L.; Yelowitz, L.
1976-01-01
Errors, inconsistencies, or confusing points are noted in a variety of published algorithms, many of which are being used as examples in formulating or teaching principles of such modern programming methodologies as formal specification, systematic construction, and correctness proving. Common properties of these points of contention are abstracted. These properties are then used to pinpoint possible causes of the errors and to formulate general guidelines which might help to avoid further errors. The common characteristic of mathematical rigor and reasoning in these examples is noted, leading to some discussion about fallibility in mathematics, and its relationship to fallibility in these programming methodologies. The overriding goal is to cast a more realistic perspective on the methodologies, particularly with respect to older methodologies, such as testing, and to provide constructive recommendations for their improvement.
Weiland, Christina
2016-11-01
Theory and empirical work suggest inclusion preschool improves the school readiness of young children with special needs, but only 2 studies of the model have used rigorous designs that could identify causality. The present study examined the impacts of the Boston Public prekindergarten program, which combined proven language, literacy, and mathematics curricula with coaching, on the language, literacy, mathematics, executive function, and emotional skills of young children with special needs (N = 242). Children with special needs benefitted from the program in all examined domains. Effects were on par with or surpassed those of their typically developing peers. Results are discussed in the context of their relevance for policy, practice, and theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Surface conservation laws at microscopically diffuse interfaces.
Chu, Kevin T; Bazant, Martin Z
2007-11-01
In studies of interfaces with dynamic chemical composition, bulk and interfacial quantities are often coupled via surface conservation laws of excess surface quantities. While this approach is easily justified for microscopically sharp interfaces, its applicability in the context of microscopically diffuse interfaces is less theoretically well-established. Furthermore, surface conservation laws (and interfacial models in general) are often derived phenomenologically rather than systematically. In this article, we first provide a mathematically rigorous justification for surface conservation laws at diffuse interfaces based on an asymptotic analysis of transport processes in the boundary layer and derive general formulae for the surface and normal fluxes that appear in surface conservation laws. Next, we use nonequilibrium thermodynamics to formulate surface conservation laws in terms of chemical potentials and provide a method for systematically deriving the structure of the interfacial layer. Finally, we derive surface conservation laws for a few examples from diffusive and electrochemical transport.
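Schematically, such a surface conservation law balances the excess surface density against surface-tangential transport and exchange with the bulk; the notation below is illustrative, and the paper derives the precise flux formulae:

    \frac{\partial \Gamma}{\partial t} + \nabla_s \cdot \mathbf{j}_s
      = -\left[ \mathbf{n} \cdot \mathbf{j}_{\mathrm{bulk}} \right]_{-}^{+}

where Γ is the excess surface concentration, j_s the surface flux, and the right-hand side is the jump in the normal bulk flux across the interface.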
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
Interpretation of HCMM images: A regional study
NASA Technical Reports Server (NTRS)
1982-01-01
Potential users of HCMM data, especially those with only a cursory background in thermal remote sensing, are familiarized with the kinds of information contained in the images that can be extracted with some reliability solely from inspection of such standard products as those generated at NASA/GSFC and now archived in the National Space Science Data Center. Visual analysis of photoimagery is prone to various misimpressions and outright errors brought on by unawareness of the influence of physical factors as well as by sometimes misleading tonal patterns introduced during photoprocessing. The quantitative approach, which relies on computer processing of digital HCMM data, field measurements, and integration of rigorous mathematical models, can usually be used to identify, compensate for, or correct the contributions from at least some of the natural factors and those associated with photoprocessing. Color composite, day-IR, night-IR and visible images of California and Nevada are examined.
Statistical ecology comes of age.
Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-12-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.
Overarching framework for data-based modelling
NASA Astrophysics Data System (ADS)
Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco
2014-02-01
One of the main modelling paradigms for complex physical systems is the network. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately and sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.
Standard representation and unified stability analysis for dynamic artificial neural network models.
Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D
2018-02-01
An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.
A Formal Framework for the Analysis of Algorithms That Recover From Loss of Separation
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Munoz, Cesar A.
2008-01-01
We present a mathematical framework for the specification and verification of state-based conflict resolution algorithms that recover from loss of separation. In particular, we propose rigorous definitions of horizontal and vertical maneuver correctness that yield horizontal and vertical separation, respectively, in a bounded amount of time. We also provide sufficient conditions for independent correctness, i.e., separation under the assumption that only one aircraft maneuvers, and for implicitly coordinated correctness, i.e., separation under the assumption that both aircraft maneuver. An important benefit of this approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).
Burnecki, Krzysztof; Kepten, Eldad; Janczura, Joanna; Bronshtein, Irena; Garini, Yuval; Weron, Aleksander
2012-01-01
We present a systematic statistical analysis of the recently measured individual trajectories of fluorescently labeled telomeres in the nucleus of living human cells. The experiments were performed in the U2OS cancer cell line. We propose an algorithm for identification of the telomere motion. By expanding the previously published data set, we are able to explore the dynamics in six time orders, a task not possible earlier. As a result, we establish a rigorous mathematical characterization of the stochastic process and identify the basic mathematical mechanisms behind the telomere motion. We find that the increments of the motion are stationary, Gaussian, ergodic, and even more chaotic—mixing. Moreover, the obtained memory parameter estimates, as well as the ensemble average mean square displacement reveal subdiffusive behavior at all time spans. All these findings statistically prove a fractional Brownian motion for the telomere trajectories, which is confirmed by a generalized p-variation test. Taking into account the biophysical nature of telomeres as monomers in the chromatin chain, we suggest polymer dynamics as a sufficient framework for their motion with no influence of other models. In addition, these results shed light on other studies of telomere motion and the alternative telomere lengthening mechanism. We hope that identification of these mechanisms will allow the development of a proper physical and biological model for telomere subdynamics. This array of tests can be easily implemented to other data sets to enable quick and accurate analysis of their statistical characteristics. PMID:23199912
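One ingredient of such an analysis, sketched here on synthetic ordinary Brownian trajectories (H = 0.5, assumed only so the snippet is self-contained): estimate the ensemble-averaged mean square displacement and fit the anomalous exponent 2H, with 2H < 1 indicating subdiffusion:

    import numpy as np

    rng = np.random.default_rng(3)
    n_traj, n_steps = 200, 512
    paths = np.cumsum(rng.standard_normal((n_traj, n_steps)), axis=1)

    lags = np.arange(1, 64)
    msd = np.array([np.mean((paths[:, k:] - paths[:, :-k]) ** 2) for k in lags])

    # For fractional Brownian motion, MSD(t) ~ t^(2H): fit the log-log slope.
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    print(f"estimated anomalous exponent 2H = {slope:.2f}")   # ~1.0 here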
Annual cycle of Scots pine photosynthesis
NASA Astrophysics Data System (ADS)
Hari, Pertti; Kerminen, Veli-Matti; Kulmala, Liisa; Kulmala, Markku; Noe, Steffen; Petäjä, Tuukka; Vanhatalo, Anni; Bäck, Jaana
2017-12-01
Photosynthesis, i.e. the assimilation of atmospheric carbon to organic molecules with the help of solar energy, is a fundamental and well-understood process. Here, we connect theoretically the fundamental concepts affecting C3 photosynthesis with the main environmental drivers (ambient temperature and solar light intensity), using six axioms based on physiological and physical knowledge and yielding straightforward and simple mathematical equations. The light and carbon reactions in photosynthesis are based on the coherent operation of the photosynthetic machinery, which is formed of a complicated chain of enzymes, membrane pumps and pigments. A powerful biochemical regulation system has emerged through evolution to match photosynthesis with the annual cycle of solar light and temperature. The action of the biochemical regulation system generates the annual cycle of photosynthesis and its emergent properties: the state of the photosynthetic machinery and the efficiency of photosynthesis. The state and the efficiency of the photosynthetic machinery are dynamically changing due to biosynthesis and decomposition of the molecules. The mathematical analysis of the system, defined by these fundamental concepts and axioms, resulted in exact predictions of the behaviour of daily and annual patterns in photosynthesis. We tested the predictions with extensive field measurements of Scots pine (Pinus sylvestris L.) photosynthesis at the branch scale in northern Finland. Our theory gained strong support through rigorous testing.
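A schematic of the core idea (the functional forms and constants below are assumptions for illustration, not the paper's fitted equations): a slowly acclimating state S of the photosynthetic machinery tracks ambient temperature, and photosynthetic capacity follows S:

    import numpy as np

    days = np.arange(365)
    T = -5.0 + 15.0 * np.sin(2.0 * np.pi * (days - 110) / 365)   # annual temperature

    tau = 10.0                       # assumed acclimation time constant (days)
    S = np.empty_like(T)
    S[0] = T[0]
    for k in range(1, len(T)):       # dS/dt = (T - S) / tau, forward Euler
        S[k] = S[k - 1] + (T[k] - S[k - 1]) / tau

    # Sigmoidal link from state to efficiency in [0, 1]; capacity then scales
    # the instantaneous light response (not modelled here).
    capacity = 1.0 / (1.0 + np.exp(-0.4 * (S - 5.0)))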
A white noise approach to the Feynman integrand for electrons in random media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grothaus, M., E-mail: grothaus@mathematik.uni-kl.de; Riemann, F., E-mail: riemann@mathematik.uni-kl.de; Suryawan, H. P., E-mail: suryawan@mathematik.uni-kl.de
2014-01-15
Using the Feynman path integral representation of quantum mechanics it is possible to derive a model of an electron in a random system containing dense and weakly coupled scatterers [see S. F. Edwards and Y. B. Gulyaev, "The density of states of a highly impure semiconductor," Proc. Phys. Soc. 83, 495-496 (1964)]. The main goal of this paper is to give a mathematically rigorous realization of the corresponding Feynman integrand in dimension one based on the theory of white noise analysis. We refine and apply a Wick formula for the product of a square-integrable function with Donsker's delta functions and use a method of complex scaling. As an essential part of the proof we also establish the existence of the exponential of the self-intersection local times of a one-dimensional Brownian bridge. As a result we obtain a neat formula for the propagator with identical start and end point. Thus, we obtain a well-defined mathematical object which is used to calculate the density of states [see, e.g., S. F. Edwards and Y. B. Gulyaev, "The density of states of a highly impure semiconductor," Proc. Phys. Soc. 83, 495-496 (1964)].
Mathematics education practice in Nigeria: Its impact in a post-colonial era
NASA Astrophysics Data System (ADS)
Enime, Noble O. J.
This qualitative study examined the impacts of the Nigerian pre-independence era Mathematics Education Practice on the post-colonial era Mathematics Education Practice. The study was designed to gather qualitative information on pre-independence and post-colonial era Mathematics Education Practice in Nigeria (Western, Eastern and the Middle Belt) using interview questions. Data were collected through face-to-face interviews. Over ten themes emerged from these qualitative interview questions when the data were analyzed. Some of the themes emerging from the sub-questions were as follows: "Mentally mature to understand the mathematics" and "Not mentally mature to understand the mathematics", "Mentally mature to understand the mathematics, with the help of others" and "Not Sure". Others were "Contented with Age of Enrollment" and "Not contented with Age of Enrollment". From the questions on type of school attended and liking of mathematics, the following themes emerged: "Attended UPE (Universal Primary Education) and understood Mathematics", and "Attended Standard Education System and did not like Mathematics". Connections between the liking of mathematics and the respondents' eventual careers were seen through the following themes: "Biological Sciences based career and enjoyed High School Mathematics Experience", "Economics and Business Education based career and enjoyed High School Mathematics Experience", and five more. The themes "Very helpful" and "Unhelpful" emerged from the question concerning parents and students' homework. Some of the themes emerging from the interviews were as follows: "Awesome because of method of Instruction of Mathematics", "Awesome because Mathematics was easy", "Awesome because I had a Good Teacher or Teachers" and four other themes, "Like and dislike of Mathematics", "Heavy work load", "Subject matter content" and "Rigor of instruction". More emerging themes are presented in Chapter IV of this document. The emerging themes suggested that the influence the colonial-era Mathematics Education Practice had on the independent Nigerian state has yet to completely diminish. The following are among the conclusions drawn from the study. Students' enrollment age appeared generally to influence performance in mathematics at all levels of school. Also, students who had encouraging parents were likely to enjoy learning mathematics, students who attended mission schools were likely to be successful in mathematics, and students whose parents were educated were likely to be successful in mathematics.
Automated inference procedure for the determination of cell growth parameters
NASA Astrophysics Data System (ADS)
Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.
2016-01-01
The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
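For concreteness, here is a plain least-squares fit of a logistic growth curve on synthetic counts, the very approach the authors caution can mislead when the stages of the counting process are ignored; parameter values are invented:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, n0):
        # logistic growth: carrying capacity K, growth rate r, initial count n0
        return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 72.0, 25)                                 # hours
    n = logistic(t, 1e6, 0.15, 5e3) * rng.normal(1.0, 0.05, t.size)

    (K, r, n0), _ = curve_fit(logistic, t, n, p0=[n.max(), 0.1, n[0]])
    print(f"carrying capacity K = {K:.3g}, growth rate r = {r:.3f}/h")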
Psychoacoustic entropy theory and its implications for performance practice
NASA Astrophysics Data System (ADS)
Strohman, Gregory J.
This dissertation attempts to motivate, derive, and suggest potential uses for a generalized perceptual theory of musical harmony called psychoacoustic entropy theory. This theory treats the human auditory system as a physical system which takes acoustic measurements. As a result, the human auditory system is subject to all the appropriate uncertainties and limitations of other physical measurement systems. This is the theoretical basis for defining psychoacoustic entropy. Psychoacoustic entropy is a numerical quantity which indexes the degree to which the human auditory system perceives instantaneous disorder within a sound pressure wave. Chapter one explains the importance of harmonic analysis as a tool for performance practice. It also outlines the critical limitations of many of the most influential historical approaches to modeling harmonic stability, particularly when compared to available scientific research in psychoacoustics. Rather than analyze a musical excerpt, psychoacoustic entropy is calculated directly from sound pressure waves themselves. This frames psychoacoustic entropy theory in the most general possible terms as a theory of musical harmony, enabling it to be invoked for any perceivable sound. Chapter two provides and examines many widely accepted mathematical models of the acoustics and psychoacoustics of these sound pressure waves. Chapter three introduces entropy as a precise way of measuring perceived uncertainty in sound pressure waves. Entropy is used, in combination with the acoustic and psychoacoustic models introduced in chapter two, to motivate the mathematical formulation of psychoacoustic entropy theory. Chapter four shows how to use psychoacoustic entropy theory to analyze certain types of musical harmonies, while chapter five applies the analytical tools developed in chapter four to two short musical excerpts to influence their interpretation. Almost every form of harmonic analysis invokes some degree of mathematical reasoning. However, the limited scope of most harmonic systems used for Western common practice music greatly simplifies the necessary level of mathematical detail. Psychoacoustic entropy theory requires a greater deal of mathematical complexity due to its sheer scope as a generalized theory of musical harmony. Fortunately, under specific assumptions the theory can take on vastly simpler forms. Psychoacoustic entropy theory appears to be highly compatible with the latest scientific research in psychoacoustics. However, the theory itself should be regarded as a hypothesis and this dissertation an experiment in progress. The evaluation of psychoacoustic entropy theory as a scientific theory of human sonic perception must wait for more rigorous future research.
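As a loose illustration of treating the auditory system as a measurement device (a stand-in spectral entropy, not the dissertation's exact definition of psychoacoustic entropy): the Shannon entropy of a normalized power spectrum separates an ordered tone from noise:

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1.0 / fs)
    tone = np.sin(2 * np.pi * 440 * t)                         # highly ordered
    noise = np.random.default_rng(5).standard_normal(t.size)   # maximally disordered

    def spectral_entropy(x):
        p = np.abs(np.fft.rfft(x)) ** 2
        p = p / p.sum()                    # normalize to a probability mass
        p = p[p > 0]
        return -np.sum(p * np.log2(p))     # Shannon entropy in bits

    print(spectral_entropy(tone), spectral_entropy(noise))   # low vs. high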
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
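A baseline for the problem this algorithm accelerates (problem sizes are arbitrary): naive column-by-column non-negativity-constrained least squares, together with a count of the distinct passive sets that the combinatorial grouping would exploit by factoring each distinct subproblem only once:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(6)
    A = rng.random((100, 8))
    B = rng.random((100, 500))            # 500 observation vectors

    # Naive loop: min ||A x_j - b_j|| s.t. x_j >= 0 for every column b_j,
    # redoing the factorization work each time.
    X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

    # Passive sets (supports) that grouping would share across columns.
    supports = {tuple(X[:, j] > 0) for j in range(X.shape[1])}
    print(f"{len(supports)} distinct passive sets among {X.shape[1]} columns")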
Endobiogeny: a global approach to systems biology (part 1 of 2).
Lapraz, Jean-Claude; Hedayat, Kamyar M
2013-01-01
Endobiogeny is a global systems approach to human biology that may offer an advancement in clinical medicine, based on the scientific principles of rigor and experimentation and on the humanistic principles of individualization of care and alleviation of suffering with minimization of harm. Endobiogeny is neither a movement away from modern science nor an uncritical embracing of pre-rational methods of inquiry, but a synthesis of quantitative and qualitative relationships reflected in a systems approach to life and based on new mathematical paradigms of pattern recognition.
Solving the multi-frequency electromagnetic inverse source problem by the Fourier method
NASA Astrophysics Data System (ADS)
Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi
2018-07-01
This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
Understanding the Lomb–Scargle Periodogram
NASA Astrophysics Data System (ADS)
VanderPlas, Jacob T.
2018-05-01
The Lomb–Scargle periodogram is a well-known algorithm for detecting and characterizing periodic signals in unevenly sampled data. This paper presents a conceptual introduction to the Lomb–Scargle periodogram and important practical considerations for its use. Rather than a rigorous mathematical treatment, the goal of this paper is to build intuition about what assumptions are implicit in the use of the Lomb–Scargle periodogram and related estimators of periodicity, so as to motivate important practical considerations required in its proper application and interpretation.
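For readers who want to experiment, one widely used open-source implementation is the LombScargle class in astropy; a minimal sketch on synthetic unevenly sampled data, using the default automatic frequency grid, might look like:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
t = np.sort(100 * rng.random(200))                 # unevenly sampled times
y = np.sin(2 * np.pi * 0.31 * t) + 0.3 * rng.standard_normal(len(t))

# autopower() chooses a frequency grid from the data's span and spacing
frequency, power = LombScargle(t, y).autopower()
print(frequency[np.argmax(power)])                 # peak near the true 0.31
```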
Selection theory of free dendritic growth in a potential flow.
von Kurnatowski, Martin; Grillenbeck, Thomas; Kassner, Klaus
2013-04-01
The Kruskal-Segur approach to selection theory in diffusion-limited or Laplacian growth is extended via combination with the Zauderer decomposition scheme. This way nonlinear bulk equations become tractable. To demonstrate the method, we apply it to two-dimensional crystal growth in a potential flow. We omit the simplifying approximations used in a preliminary calculation for the same system [Fischaleck, Kassner, Europhys. Lett. 81, 54004 (2008)], thus exhibiting the capability of the method to extend mathematical rigor to more complex problems than hitherto accessible.
A Formal Methods Approach to the Analysis of Mode Confusion
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.
2004-01-01
The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper will explore how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase our confidence in the safety of the implementation. The paper is based upon interim results from a new project involving NASA Langley and Rockwell Collins in applying formal methods to a realistic business jet Flight Guidance System (FGS).
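A toy illustration of the formal-methods idea (invented mode names and transition rules, not the Rockwell Collins FGS model analyzed in the paper): exhaustively enumerate short event sequences of a small mode logic and flag every transition that occurs without pilot input, one simple formalization of a mode confusion candidate:

```python
from itertools import product

# Toy flight-guidance lateral mode logic, for illustration only.
EVENTS = ("HDG_PRESSED", "NAV_CAPTURED", "AP_DISENGAGED", "NONE")
PILOT_EVENTS = {"HDG_PRESSED", "AP_DISENGAGED"}

def step(mode, event):
    """Next lateral mode after an event."""
    if event == "HDG_PRESSED":
        return "HDG"
    if event == "NAV_CAPTURED":    # fires autonomously on beam capture
        return "NAV"
    if event == "AP_DISENGAGED":
        return "ROLL"
    return mode

# A tiny exhaustive model check over all length-3 event sequences: flag
# any mode change that happens with no pilot action.
flagged = set()
for seq in product(EVENTS, repeat=3):
    mode = "ROLL"
    for event in seq:
        new_mode = step(mode, event)
        if new_mode != mode and event not in PILOT_EVENTS:
            flagged.add((mode, event, new_mode))
        mode = new_mode

print(flagged)   # uncommanded ROLL->NAV and HDG->NAV transitions surface
```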
A finite element-boundary integral method for cavities in a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. However, due to a lack of rigorous mathematical models for conformal antenna arrays, antenna designers resort to measurement and planar antenna concepts for designing non-planar conformal antennas. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We extend this formulation to conformal arrays on large metallic cylinders. In this report, we develop the mathematical formulation. In particular, we discuss the shape functions, the resulting finite elements and the boundary integral equations, and the solution of the conformal finite element-boundary integral system. Some validation results are presented and we further show how this formulation can be applied with minimal computational and memory resources.
NASA Technical Reports Server (NTRS)
Glytsis, Elias N.; Brundrett, David L.; Gaylord, Thomas K.
1993-01-01
A review of the rigorous coupled-wave analysis as applied to the diffraction of electromagnetic waves by gratings is presented. The analysis is valid for any polarization, angle of incidence, and conical diffraction. Cascaded and/or multiplexed gratings as well as material anisotropy can be incorporated under the same formalism. Small-period rectangular-groove gratings can also be modeled using approximately equivalent uniaxial homogeneous layers (effective media). The ordinary and extraordinary refractive indices of these layers depend on the grating's filling factor, the refractive indices of the substrate and superstrate, and the ratio of the free-space wavelength to the grating period. Comparisons of the homogeneous effective medium approximations with the rigorous coupled-wave analysis are presented. Antireflection designs (single-layer or multilayer) using the effective medium models are presented and compared. These ultra-short-period antireflection gratings can also be used to produce soft x-rays. Comparisons of the rigorous coupled-wave analysis with experimental results on soft x-ray generation by gratings are also included.
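The zeroth-order effective-medium formulas referred to are compact enough to sketch: a subwavelength binary grating behaves like a uniaxial film with one index for fields along the grooves and another for fields across them (higher-order corrections in the wavelength-to-period ratio, which the paper compares against full RCWA, are omitted here):

```python
import numpy as np

def effective_indices(n_ridge, n_groove, fill):
    """Zeroth-order effective-medium indices of a subwavelength binary
    grating (valid when the period is much smaller than the wavelength).

    Returns (n_ordinary, n_extraordinary): the grating acts as a
    negative uniaxial layer with these two refractive indices.
    """
    eps1, eps2 = n_ridge**2, n_groove**2
    n_o = np.sqrt(fill * eps1 + (1 - fill) * eps2)        # E along grooves
    n_e = 1 / np.sqrt(fill / eps1 + (1 - fill) / eps2)    # E across grooves
    return n_o, n_e

print(effective_indices(n_ridge=1.5, n_groove=1.0, fill=0.5))
```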
Radical-Driven Silicon Surface Passivation for Organic-Inorganic Hybrid Photovoltaics
NASA Astrophysics Data System (ADS)
Chandra, Nitish
The advent of metamaterials has increased the complexity of possible light-matter interactions, creating gaps in knowledge, violating various commonly used approximations, and rendering some common mathematical frameworks incomplete. Our forward scattering experiments on metallic shells and cavities have created a need for a rigorous geometry-based analysis of scattering problems and for more rigorous descriptions of the current distribution in the volume of the scattering object. In order to build an accurate understanding of these interactions, we have revisited the fundamentals of Maxwell's equations, electromagnetic potentials, and boundary conditions to build a bottom-up, geometry-based analysis of scattering. Individual structures, or meta-atoms, can be designed to localize the incident electromagnetic radiation in order to create a change in local constitutive parameters and possible nonlinear responses. Hence, in next-generation engineered materials, an accurate determination of the current distribution on the surface and in the structure's volume plays an important role in describing and designing desired properties. Multipole expansions of the exact current distribution, determined using principles of differential geometry, provide an elegant way to study these local interactions of meta-atoms. The dynamics of the interactions can be studied using the behavior of the polarization and magnetization densities generated by localized current densities interacting with the electromagnetic potentials associated with the incident waves. The multipole method combined with propagation of electromagnetic potentials can be used to predict a large variety of linear and nonlinear physical phenomena. This has been demonstrated in experiments that enable the analog detection of sources placed at subwavelength separation by using time reversal of observed signals. Time reversal is accomplished by reversing the direction of the magnetic dipole in bianisotropic metasurfaces while simultaneously providing a method to reduce the losses often observed when light interacts with meta-structures.
A porous media theory for characterization of membrane blood oxygenation devices
NASA Astrophysics Data System (ADS)
Sano, Yoshihiko; Adachi, Jun; Nakayama, Akira
2013-07-01
A porous media theory has been proposed to characterize oxygen transport processes associated with membrane blood oxygenation devices. For the first time, a rigorous mathematical procedure based on volume averaging has been presented to derive a complete set of governing equations for the blood flow field and oxygen concentration field. As a first step towards a complete three-dimensional numerical analysis, a one-dimensional steady case is considered to model typical membrane blood oxygenator scenarios and to validate the derived equations. The relative magnitudes of the oxygen transport terms are made clear by introducing a dimensionless parameter which measures the distance the oxygen gas travels to dissolve in the blood as compared with the blood dispersion length. This dimensionless number is found to be so large that the oxygen diffusion term can be neglected in most cases. A simple linear relationship between the blood flow rate and total oxygen transfer rate is found for oxygenators with sufficiently large membrane surface areas. Comparison of the one-dimensional analytic results and available experimental data reveals the soundness of the present analysis.
Predictability Experiments With the Navy Operational Global Atmospheric Prediction System
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.
2003-12-01
There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
A Renormalisation Group Method. V. A Single Renormalisation Group Step
NASA Astrophysics Data System (ADS)
Brydges, David C.; Slade, Gordon
2015-05-01
This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|^4 model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.
NASA Technical Reports Server (NTRS)
Chen, Wei; Tsui, Kwok-Leung; Allen, Janet K.; Mistree, Farrokh
1994-01-01
In this paper we introduce a comprehensive and rigorous robust design procedure to overcome some limitations of the current approaches. A comprehensive approach is general enough to model the two major types of robust design applications, namely, robust design associated with the minimization of the deviation of performance caused by the deviation of noise factors (uncontrollable parameters), and robust design due to the minimization of the deviation of performance caused by the deviation of control factors (design variables). We achieve mathematical rigor by using, as a foundation, principles from the design of experiments and optimization. Specifically, we integrate the Response Surface Method (RSM) with the compromise Decision Support Problem (DSP). Our approach is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate. The design of a solar powered irrigation system is used as an example. Our focus in this paper is on illustrating our approach rather than on the results per se.
Using GIS to generate spatially balanced random survey designs for natural resource applications.
Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B
2007-07-01
Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. The development of long-term monitoring programs demands survey designs that achieve statistical rigor and are efficient but remain flexible to the inevitable logistical or practical constraints of field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
Shear-induced opening of the coronal magnetic field
NASA Technical Reports Server (NTRS)
Wolfson, Richard
1995-01-01
This work describes the evolution of a model solar corona in response to motions of the footpoints of its magnetic field. The mathematics involved is semianalytic, with the only numerical solution being that of an ordinary differential equation. This approach, while lacking the flexibility and physical details of full MHD simulations, allows for very rapid computation along with complete and rigorous exploration of the model's implications. We find that the model coronal field bulges upward, at first slowly and then more dramatically, in response to footpoint displacements. The energy in the field rises monotonically from that of the initial potential state, and the field configuration and energy approach asymptotically those of a fully open field. Concurrently, electric currents develop and concentrate into a current sheet as the limiting case of the open field is approached. Examination of the equations shows rigorously that in the asymptotic limit of the fully open field, the current layer becomes a true ideal MHD singularity.
Burnecki, Krzysztof; Kepten, Eldad; Janczura, Joanna; Bronshtein, Irena; Garini, Yuval; Weron, Aleksander
2012-11-07
We present a systematic statistical analysis of the recently measured individual trajectories of fluorescently labeled telomeres in the nucleus of living human cells. The experiments were performed in the U2OS cancer cell line. We propose an algorithm for identification of the telomere motion. By expanding the previously published data set, we are able to explore the dynamics over six orders of magnitude in time, a task not possible earlier. As a result, we establish a rigorous mathematical characterization of the stochastic process and identify the basic mathematical mechanisms behind the telomere motion. We find that the increments of the motion are stationary, Gaussian, ergodic, and even mixing, a stronger, more chaotic property. Moreover, the obtained memory parameter estimates, as well as the ensemble average mean square displacement, reveal subdiffusive behavior at all time spans. All these findings statistically prove a fractional Brownian motion for the telomere trajectories, which is confirmed by a generalized p-variation test. Taking into account the biophysical nature of telomeres as monomers in the chromatin chain, we suggest polymer dynamics as a sufficient framework for their motion with no influence of other models. In addition, these results shed light on other studies of telomere motion and the alternative telomere lengthening mechanism. We hope that identification of these mechanisms will allow the development of a proper physical and biological model for telomere subdynamics. This array of tests can be easily applied to other data sets to enable quick and accurate analysis of their statistical characteristics. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Multiscale Simulation of Microbe Structure and Dynamics
Joshi, Harshad; Singharoy, Abhishek; Sereda, Yuriy V.; Cheluvaraja, Srinath C.; Ortoleva, Peter J.
2012-01-01
A multiscale mathematical and computational approach is developed that captures the hierarchical organization of a microbe. It is found that a natural perspective for understanding a microbe is in terms of a hierarchy of variables at various levels of resolution. This hierarchy starts with the N-atom description and terminates with order parameters characterizing a whole microbe. This conceptual framework is used to guide the analysis of the Liouville equation for the probability density of the positions and momenta of the N atoms constituting the microbe and its environment. Using multiscale mathematical techniques, we derive equations for the co-evolution of the order parameters and the probability density of the N-atom state. This approach yields a rigorous way to transfer information between variables on different space-time scales. It elucidates the interplay between equilibrium and far-from-equilibrium processes underlying microbial behavior. It also provides a framework for using coarse-grained nanocharacterization data to guide microbial simulation. It enables a methodical search for free-energy minimizing structures, many of which are typically supported by the set of macromolecules and membranes constituting a given microbe. This suite of capabilities provides a natural framework for arriving at a fundamental understanding of microbial behavior, the analysis of nanocharacterization data, and the computer-aided design of nanostructures for biotechnical and medical purposes. Selected features of the methodology are demonstrated using our multiscale bionanosystem simulator, the Deductive Multiscale Simulator. Systems used to demonstrate the approach are structural transitions in the cowpea chlorotic mottle virus, RNA of satellite tobacco mosaic virus, virus-like particles related to human papillomavirus, and the iron-binding protein lactoferrin. PMID:21802438
MAESTRO: Mathematics and Earth Science Teachers' Resource Organization
NASA Astrophysics Data System (ADS)
Courtier, A. M.; Pyle, E. J.; Fichter, L.; Lucas, S.; Jackson, A.
2013-12-01
The Mathematics and Earth Science Teachers' Resource Organization (MAESTRO) is a partnership between James Madison University and the Harrisonburg City and Page County Public Schools, funded through NSF-GEO. The partnership aims to transform mathematics and Earth science instruction in middle and high schools by developing an integrated mathematics and Earth systems science approach to instruction. This curricular integration is intended to enhance the mathematical skills and confidence of students through concrete, Earth systems-based examples, while increasing the relevance and rigor of Earth science instruction via quantification and mathematical modeling of Earth system phenomena. MAESTRO draws heavily from the Earth Science Literacy Initiative (2009) and is informed by criterion-level standardized test performance data in both mathematics and Earth science. The project has involved two summer professional development workshops, academic year Lesson Study (structured teacher observation and reflection), and will incorporate site-based case studies with direct student involvement. Participating teachers include Grade 6 Science and Mathematics teachers, and Grade 9 Earth Science and Algebra teachers. It is anticipated that the proposed integration across grade bands will first strengthen students' interests in mathematics and science (a problem in middle school) and subsequently reinforce the relevance of mathematics and other sciences (a problem in high school), both in support of Earth systems literacy. MAESTRO's approach to the integration of math and science focuses on using box models to emphasize the interconnections among the geo-, atmo-, bio-, and hydrospheres, and demonstrates the positive and negative feedback processes that connect their mutual evolution. Within this framework we explore specific relationships that can be described both qualitatively and mathematically, using mathematical operations appropriate for each grade level. Site-based case studies, developed in collaboration between teachers and JMU faculty members, provide a tangible, relevant setting in which students can apply and understand mathematical applications and scientific processes related to evolving Earth systems. Initial results from student questionnaires and teacher focus groups suggest that the anticipated impacts of MAESTRO on students are being realized, including increased valuing of mathematics and Earth science in society and transfer between mathematics and science courses. As a high percentage of students in the MAESTRO schools are of low socio-economic status, they also face the prospect of becoming first-generation college students, hopefully considering STEM academic pathways. MAESTRO will drive the development of challenging and engaging instruction designed to draw a larger pool of students into STEM career pathways.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J.D.
1994-08-04
This report is divided into two parts. The second part is divided into the following sections: experimental protocol; modeling the hollow fiber extractor using film theory; Graetz model of the hollow fiber membrane process; fundamental diffusive-kinetic model; and diffusive liquid membrane device-a rigorous model. The first part is divided into: membrane and membrane process-a concept; metal extraction; kinetics of metal extraction; modeling the membrane contactor; and interfacial phenomenon-boundary conditions-applied to membrane transport.
Curved fronts in the Belousov-Zhabotinskii reaction-diffusion systems in R2
NASA Astrophysics Data System (ADS)
Niu, Hong-Tao; Wang, Zhi-Cheng; Bu, Zhen-Hui
2018-05-01
In this paper we consider a diffusion system with the Belousov-Zhabotinskii (BZ for short) chemical reaction. Following Brazhnik and Tyson [4] and Pérez-Muñuzuri et al. [45], who predicted V-shaped fronts theoretically and observed V-shaped fronts experimentally, respectively, we give a rigorous mathematical proof of their results. We establish the existence of V-shaped traveling fronts in R2 by constructing a proper supersolution and a subsolution. Furthermore, we establish the stability of the V-shaped front in R2.
Scaling Limit for a Generalization of the Nelson Model and its Application to Nuclear Physics
NASA Astrophysics Data System (ADS)
Suzuki, Akito
We study a mathematically rigorous derivation of a quantum mechanical Hamiltonian in a general framework. We derive such a Hamiltonian by taking a scaling limit for a generalization of the Nelson model, which is an abstract interaction model between particles and a Bose field with some internal degrees of freedom. Applying it to a model for the field of the nuclear force with isospins, we obtain a Schrödinger Hamiltonian with a matrix-valued potential, the one pion exchange potential, describing an effective interaction between nucleons.
Ontology-Driven Information Integration
NASA Technical Reports Server (NTRS)
Tissot, Florence; Menzel, Chris
2005-01-01
Ontology-driven information integration (ODII) is a method of computerized, automated sharing of information among specialists who have expertise in different domains and who are members of subdivisions of a large, complex enterprise (e.g., an engineering project, a government agency, or a business). In ODII, one uses rigorous mathematical techniques to develop computational models of engineering and/or business information and processes. These models are then used to develop software tools that support the reliable processing and exchange of information among the subdivisions of this enterprise or between this enterprise and other enterprises.
A review of the meteorological parameters which affect aerial application
NASA Technical Reports Server (NTRS)
Christensen, L. S.; Frost, W.
1979-01-01
The ambient wind field and temperature gradient were found to be the most important parameters. Investigation results indicated that the majority of meteorological parameters affecting dispersion were interdependent and that the exact mechanism by which these factors influence particle dispersion was largely unknown. The types and approximate ranges of instrumentation capabilities needed for a systematic study of the significant meteorological parameters influencing aerial applications were defined. Current mathematical dispersion models were also briefly reviewed. Unfortunately, a rigorous dispersion model which could be applied to aerial application was not available.
ERIC Educational Resources Information Center
Council of Chief State School Officers, 2012
2012-01-01
In the advent of the development and mass adoption of the common core state standards for English language arts and mathematics, state and local agencies have now expressed a need to the Council of Chief State School Officers (CCSSO or the Council) for assistance as they upgrade existing social studies standards to meet the practical goal of…
Computational fluid dynamics: Transition to design applications
NASA Technical Reports Server (NTRS)
Bradley, R. G.; Bhateley, I. C.; Howell, G. A.
1987-01-01
The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.
The Torsion of Members Having Sections Common in Aircraft Construction
NASA Technical Reports Server (NTRS)
Trayer, George W; March, H W
1930-01-01
Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine by experiment and theoretical investigation how accurate the more common of these formulas are and on what assumptions they are founded and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods with formulas both rigorous and approximate.
Inflammation and immune system activation in aging: a mathematical approach.
Nikas, Jason B
2013-11-19
Memory and learning declines are consequences of normal aging. Since those functions are associated with the hippocampus, I analyzed the global gene expression data from post-mortem hippocampal tissue of 25 old (age ≥ 60 yrs) and 15 young (age ≤ 45 yrs) cognitively intact human subjects. By employing a rigorous, multi-method bioinformatic approach, I identified 36 genes that were the most significant in terms of differential expression; and by employing mathematical modeling, I demonstrated that 7 of the 36 genes were able to discriminate between the old and young subjects with high accuracy. Remarkably, 90% of the known genes from those 36 most significant genes are associated with either inflammation or immune system activation. This suggests that chronic inflammation and immune system over-activity may underlie the aging process of the human brain, and that potential anti-inflammatory treatments targeting those genes may slow down this process and alleviate its symptoms.
A finite element-boundary integral method for conformal antenna arrays on a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.; Woo, Alex C.; Yu, C. Long
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This is due to the lack of rigorous mathematical models for conformal antenna arrays, and as a result the design of conformal arrays is primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. Herewith we shall extend this formulation to conformal arrays on large metallic cylinders. In this report we develop the mathematical formulation. In particular we discuss the finite element equations, the shape elements, and the boundary integral evaluation, and it is shown how this formulation can be applied with minimal computation and memory requirements. The implementation shall be discussed in a later report.
A finite element-boundary integral method for conformal antenna arrays on a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This was due to the lack of rigorous mathematical models for conformal antenna arrays. As a result, the design of conformal arrays was primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We are extending this formulation to conformal arrays on large metallic cylinders. In doing so, we will develop a mathematical formulation. In particular, we discuss the finite element equations, the shape elements, and the boundary integral evaluation. It is shown how this formulation can be applied with minimal computation and memory requirements.
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridging between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
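A minimal computational sketch of the time-homogeneous baseline that the authors extend: take the principal matrix logarithm of an empirical transition matrix and check the defining generator properties. This is only the simplest candidate test, not the paper's full time-inhomogeneous algorithm, and the embedding problem can have no solution or non-principal solutions:

```python
import numpy as np
from scipy.linalg import expm, logm

def generator_candidate(P, tol=1e-9):
    """Try to find a continuous-time Markov generator Q with expm(Q) = P.

    A valid generator needs nonnegative off-diagonal entries and zero
    row sums; the principal matrix logarithm gives only a candidate.
    """
    Q = logm(P).real
    n = len(Q)
    off_diag_ok = all(Q[i, j] >= -tol for i in range(n) for j in range(n) if i != j)
    rows_ok = np.allclose(Q.sum(axis=1), 0, atol=1e-8)
    return Q if (off_diag_ok and rows_ok) else None

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # a 2x2 stochastic matrix
Q = generator_candidate(P)
print(Q)                                  # a valid generator exists here
print(np.allclose(expm(Q), P))            # True
```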
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singha, Sanat K.; Das, Prasanta K., E-mail: pkd@mech.iitkgp.ernet.in; Maiti, Biswajit
2015-03-14
A rigorous thermodynamic formulation of the geometric model for heterogeneous nucleation including the line tension effect has been missing to date owing to the associated mathematical hurdles. In this work, we develop a novel thermodynamic formulation based on Classical Nucleation Theory (CNT), intended to provide a systematic and more plausible analysis of heterogeneous nucleation on a planar surface including the line tension effect. The admissible range of the critical microscopic contact angle θ_c, obtained from the generalized Young's equation and the stability analysis, is θ_∞ < θ_c < θ′ for positive line tension and θ_M < θ_c < θ_∞ for negative line tension. Here θ_∞ is the macroscopic contact angle, θ′ is the contact angle at which the Helmholtz free energy attains its minimum for positive line tension, and θ_M is the local minimum of the nondimensional line tension effect for negative line tension. The shape factor f, which is essentially the dimensionless critical free energy barrier, becomes higher for lower values of θ_∞ and higher values of θ_c for positive line tension. The combined contribution of the triple line and the interfacial areas (f^L + f^S) to the shape factor always lies within (0, 3.2), so that f lies in the range (0, 1.7) for positive line tension. A formerly presumed admissible range for θ_c (0 < θ_c < θ_∞) is found not to hold when the effect of negative line tension is considered within CNT. Estimates based on the property values of some real fluids confirm the relevance of the present analysis.
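For orientation, the classical CNT shape factor without line tension is simple to compute; the paper's line tension correction extends its range beyond the classical maximum of 1 (up to about 1.7 for positive line tension):

```python
import numpy as np

def shape_factor(theta_deg):
    """Classical CNT shape factor for a spherical cap on a planar surface,
    without line tension: f = (2 + cos t)(1 - cos t)^2 / 4.

    f scales the homogeneous nucleation barrier and runs from 0
    (complete wetting) to 1 (no benefit from the surface).
    """
    c = np.cos(np.radians(theta_deg))
    return (2 + c) * (1 - c) ** 2 / 4

for theta in (30, 90, 150):
    print(theta, shape_factor(theta))   # about 0.013, 0.5, 0.99
```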
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
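The POD step of such a pipeline is compact enough to sketch. The snapshot data below are synthetic stand-ins for a transient thermal field, and the DEIM/TPWL treatment of the nonlinear radiative terms is not shown:

```python
import numpy as np

# Minimal POD sketch: snapshots of a (hypothetical) transient field are
# stacked as columns; the SVD's leading left singular vectors form the
# reduced basis onto which the full-scale equations are projected.
rng = np.random.default_rng(0)
n_nodes, n_snapshots = 2000, 60
patterns = rng.standard_normal((n_nodes, 3))        # 3 dominant patterns
amplitudes = rng.standard_normal((3, n_snapshots))
snapshots = patterns @ amplitudes + 1e-3 * rng.standard_normal((n_nodes, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1        # modes kept
basis = U[:, :r]                                    # n_nodes x r, with r << n_nodes
print(r, basis.shape)                               # 3 modes capture the field
```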
Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J
2017-12-01
qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (C_q) and reaction efficiencies (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·C_q, which we call the efficiency-weighted C_q value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted C_q values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that traditional analysis methods require in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
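A sketch of the core quantity: since the starting template amount scales as E^(-C_q), the efficiency-weighted value log10(E)·C_q is, up to a constant, -log10 of the starting amount, so differences stay in the log scale until the end. The numbers below are hypothetical and for illustration only, not from the paper:

```python
import numpy as np

def weighted_cq(E, Cq):
    """Efficiency-weighted Cq: log10(E) * Cq, proportional (up to a
    constant) to -log10 of the starting template amount."""
    return np.log10(E) * np.asarray(Cq, dtype=float)

# Hypothetical target and reference gene Cq values in control and treated
# samples, with well-specific efficiencies near the ideal E = 2.
ctrl = weighted_cq(1.95, [24.1, 24.3]) - weighted_cq(1.98, [18.2, 18.4])
trt  = weighted_cq(1.96, [22.0, 22.2]) - weighted_cq(1.97, [18.1, 18.3])

# Work in the log scale until the very end, as the method prescribes:
log10_ratio = ctrl.mean() - trt.mean()
print(10 ** log10_ratio)   # reference-normalized fold change of the target
```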
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L
Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool, and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
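As an illustration of the metamorphic idea (a toy example of ours, not the paper's workflow), consider an SIR model with the relation "a higher transmission rate must not reduce the final epidemic size". No exact oracle is needed, only the relation between two runs, which is the essence of metamorphic testing:

```python
def sir_final_size(beta, gamma, s0=0.99, i0=0.01, days=200, dt=0.1):
    """Forward-Euler SIR model; returns the final epidemic size (1 - S)."""
    s, i = s0, i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        s, i = s - new_inf, i + new_inf - gamma * i * dt
    return 1 - s

# Metamorphic relation: final size is nondecreasing in the transmission
# rate beta. A violation would flag an implementation error.
sizes = [sir_final_size(beta, gamma=0.1) for beta in (0.15, 0.25, 0.35)]
assert all(a <= b + 1e-12 for a, b in zip(sizes, sizes[1:])), sizes
print(sizes)
```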
Double Dutch: A Tool for Designing Combinatorial Libraries of Biological Systems.
Roehner, Nicholas; Young, Eric M; Voigt, Christopher A; Gordon, D Benjamin; Densmore, Douglas
2016-06-17
Recently, semirational approaches that rely on combinatorial assembly of characterized DNA components have been used to engineer biosynthetic pathways. In practice, however, it is not practical to assemble and test millions of pathway variants in order to elucidate how different DNA components affect the behavior of a pathway. To address this challenge, we apply a rigorous mathematical approach known as design of experiments (DOE) that can be used to construct empirical models of system behavior without testing all variants. To support this approach, we have developed a tool named Double Dutch, which uses a formal grammar and heuristic algorithms to automate the process of DOE library design. Compared to designing by hand, Double Dutch enables users to more efficiently and scalably design libraries of pathway variants that can be used in a DOE framework and uniquely provides a means to flexibly balance design considerations of statistical analysis, construction cost, and risk of homologous recombination, thereby demonstrating the utility of automating decision making when faced with complex design trade-offs.
Finite machines, mental procedures, and modern physics.
Lupacchini, Rossella
2007-01-01
A Turing machine provides a mathematical definition of the natural process of calculating. It rests on trust that a procedure of reason can be reproduced mechanically. Turing's analysis of the concept of mechanical procedure in terms of a finite machine convinced Gödel of the validity of the Church thesis. And yet, Gödel's later concern was that, insofar as Turing's work shows that "mental procedure cannot go beyond mechanical procedures", it would imply the same kind of limitation on human mind. He therefore deems Turing's argument to be inconclusive. The question then arises as to which extent a computing machine operating by finite means could provide an adequate model of human intelligence. It is argued that a rigorous answer to this question can be given by developing Turing's considerations on the nature of mental processes. For Turing such processes are the consequence of physical processes and he seems to be led to the conclusion that quantum mechanics could help to find a more comprehensive explanation of them.
Liao, Xing; Xie, Yan-ming
2014-10-01
The impact of evidence-based medicine and clinical epidemiology on clinical research has contributed to the development of Chinese medicine in modern times over the past two decades. Many concepts and methods of modern science and technology are emerging in Chinese medicine research, resulting in constant progress. Systematic reviews, randomized controlled trials, and other advanced mathematical and statistical analysis methods have brought reform to Chinese medicine. In this new era, Chinese medicine researchers have many opportunities and challenges. On the one hand, Chinese medicine researchers need to dedicate themselves to providing enough evidence to the world through rigorous studies, whilst on the other hand, they also need to keep up with the pace of modern medicine research. For example, real-world studies, comparative effectiveness research, propensity score techniques, and registry studies have recently emerged. This article aims to inspire Chinese medicine researchers to explore new areas by introducing these new ideas and techniques.
An Iterative Time Windowed Signature Algorithm for Time Dependent Transcription Module Discovery
Meng, Jia; Gao, Shou-Jiang; Huang, Yufei
2010-01-01
An algorithm for the discovery of time-varying modules using genome-wide expression data is presented here. When applied to large-scale time series data, our method is designed to discover not only the transcription modules but also their timing information, which is rarely annotated by existing approaches. Rather than assuming the commonly defined time-constant transcription modules, a module is depicted as a set of genes that are co-regulated during a specific period of time, i.e., a time dependent transcription module (TDTM). A rigorous mathematical definition of TDTM is provided, which serves as an objective function for retrieving modules. Based on the definition, an effective signature algorithm is proposed that iteratively searches for transcription modules in the time series data. The proposed method was tested on simulated systems and applied to human time series microarray data during Kaposi's sarcoma-associated herpesvirus (KSHV) infection. The result has been verified by Expression Analysis Systematic Explorer. PMID:21552463
Chizhik, Stanislav; Sidelnikov, Anatoly; Zakharov, Boris; Naumov, Panče; Boldyreva, Elena
2018-02-28
Photomechanically reconfigurable elastic single crystals are the key elements for contactless, temporally controllable, and spatially resolved transduction of light into work from the nanoscale to the macroscale. The deformation observed in such single-crystal actuators is usually attributed to stimulus-induced anisotropy in their structure. Yet the actual intrinsic and external factors that affect the mechanical response remain poorly understood, and the lack of rigorous models stands as the main impediment to benchmarking these materials against each other and against the much better developed soft actuators based on polymers, liquid crystals, and elastomers. Here, experimental approaches for the precise measurement of macroscopic strain in a single crystal bent by a light-induced solid-state transformation are developed and used to extract the related temperature-dependent kinetic parameters. The experimental results are compared against an overarching mathematical model based on the combined consideration of light transport, chemical transformation, and elastic deformation that does not require fitting of any empirical information. It is demonstrated that for a thermally reversible photoreactive bending crystal, the kinetic constants of the forward (photochemical) reaction and the reverse (thermal) reaction, as well as their temperature dependence, can be extracted with high accuracy. The improved kinematic model of crystal bending takes into account the feedback effect, which is often neglected but becomes increasingly important at the late stages of the photochemical reaction in a single crystal. The results provide the most rigorous and exact mathematical description of photoinduced bending of a single crystal to date.
Rigorous coupled wave analysis of acousto-optics with relativistic considerations.
Xia, Guoqiang; Zheng, Weijian; Lei, Zhenggang; Zhang, Ruolan
2015-09-01
A relativistic analysis of acousto-optics is presented, and a rigorous coupled wave analysis is generalized to the diffraction of the acousto-optical effect. An acoustic wave generates a grating with temporally and spatially modulated permittivity, which hinders direct application of the rigorous coupled wave analysis to the acousto-optical effect. In a reference frame which moves with the acoustic wave, the grating is static, the medium moves, and the coupled wave equations for the static grating may be derived. Floquet's theorem is then applied to cast these equations into an eigenproblem. Using a Lorentz transformation, the electromagnetic fields in the grating region are transformed to the lab frame, where the medium is at rest, and relativistic Doppler frequency shifts are introduced into the various diffraction orders. In the lab frame, the boundary conditions are considered and the diffraction efficiencies of the various orders are determined. This method is rigorous and general, and the plane waves in the resulting expansion satisfy the dispersion relation of the medium and are propagation modes. Properties of the various Bragg diffractions are results, rather than preconditions, of this method. Simulations of an acousto-optical tunable filter made of paratellurite (TeO2) are given as examples.
Sukumaran, Anuraj T; Holtcamp, Alexander J; Campbell, Yan L; Burnett, Derris; Schilling, Mark W; Dinh, Thu T N
2018-06-07
The objective of this study was to determine the effects of deboning time (pre- and post-rigor), processing steps (grinding - GB; salting - SB; batter formulation - BB), and storage time on the quality of raw beef mixtures and vacuum-packaged cooked sausage, produced using a commercial formulation with 0.25% phosphate. The pH was greater in pre-rigor GB and SB than in post-rigor GB and SB (P < .001). However, deboning time had no effect on metmyoglobin reducing activity, cooking loss, and color of raw beef mixtures. Protein solubility of pre-rigor beef mixtures (124.26 mg/kg) was greater than that of post-rigor beef (113.93 mg/kg; P = .071). TBARS were increased in BB but decreased during vacuum storage of cooked sausage (P ≤ .018). Except for chewiness and saltiness being 52.9 N-mm and 0.3 points greater in post-rigor sausage (P = .040 and 0.054, respectively), texture profile analysis and trained panelists detected no difference in texture between pre- and post-rigor sausage. Published by Elsevier Ltd.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
Using Computational and Mechanical Models to Study Animal Locomotion
Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas
2012-01-01
Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026
Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang
2016-10-03
Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
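For a rough sense of the arithmetic, a paired t-test power analysis at the two ends of the reported range reproduces the "hundreds of teachers" conclusion. This one-group pre/post approximation ignores program-level clustering, which would raise the required sample sizes further:

```python
from statsmodels.stats.power import TTestPower

# Teachers needed for a one-group pre/post design (paired t-test,
# alpha = 0.05, power = 0.80) at the low and high change estimates.
analysis = TTestPower()
for d in (0.16, 0.26):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"effect size {d}: about {n:.0f} teachers")
# roughly 309 and 118 teachers, respectively
```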
2016-01-01
Information is a precise concept that can be defined mathematically, but its relationship to what we call ‘knowledge’ is not always made clear. Furthermore, the concepts ‘entropy’ and ‘information’, while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology. PMID:26857663
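The mathematical definition alluded to is Shannon's; a minimal worked example of entropy as average uncertainty:

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p): the average uncertainty of a
    discrete random variable, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally unpredictable; a biased one carries less
# uncertainty, so each observed flip conveys less information.
print(entropy_bits([0.5, 0.5]))   # 1.0 bit
print(entropy_bits([0.9, 0.1]))   # about 0.469 bits
```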
Quantum probability and quantum decision-making.
Yukalov, V I; Sornette, D
2016-01-13
A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and also for non-commuting observables in addition to commutative observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Kwon, Young-Sam; Lin, Ying-Chieh; Su, Cheng-Fang
2018-04-01
In this paper, we consider compressible models of magnetohydrodynamic flows, which give rise to a variety of mathematical problems in many areas. We derive a rigorous quasi-geostrophic equation governed by the magnetic field from rotational compressible magnetohydrodynamic flows with well-prepared initial data. This is the first derivation of a quasi-geostrophic equation governed by the magnetic field, and the tool is the relative entropy method. The paper covers two results: the existence of a unique local strong solution of the quasi-geostrophic equation with good regularity, and the derivation of the quasi-geostrophic equation itself.
Gravitation [Book on general relativity]
NASA Technical Reports Server (NTRS)
Misner, C. W.; Thorne, K. S.; Wheeler, J. A.
1973-01-01
This textbook on gravitation physics (Einstein's general relativity or geometrodynamics) is designed for a rigorous full-year course at the graduate level. The material is presented in two parallel tracks in an attempt to separate key physical ideas from more complex enrichment material to be selected at the discretion of the reader or teacher. The full book is intended to provide competence in the laws of physics in flat space-time, Einstein's geometric framework for physics, applications to pulsars and neutron stars, cosmology, the Schwarzschild geometry and gravitational collapse, gravitational waves, experimental tests of Einstein's theory, and the mathematical concepts of differential geometry.
Lectures on General Relativity, Cosmology and Quantum Black Holes
NASA Astrophysics Data System (ADS)
Ydri, Badis
2017-07-01
This book is a rigorous text for students in physics and mathematics requiring an introduction to the implications and interpretation of general relativity in areas of cosmology. Readers of this text will be well prepared to follow the theoretical developments in the field and undertake research projects as part of an MSc or PhD programme. This ebook contains interactive Q&A technology, allowing the reader to interact with the text and reveal answers to selected exercises posed by the author within the book. This feature may not function in all formats and on reading devices.
Experimental Demonstration of Observability and Operability of Robustness of Coherence
NASA Astrophysics Data System (ADS)
Zheng, Wenqiang; Ma, Zhihao; Wang, Hengyan; Fei, Shao-Ming; Peng, Xinhua
2018-06-01
Quantum coherence is an invaluable physical resource for various quantum technologies. As a bona fide measure for quantifying coherence, the robustness of coherence (ROC) is not only mathematically rigorous but also physically meaningful. We experimentally demonstrate the witness-based observability and operational character of the ROC in a multiqubit nuclear magnetic resonance system. We realize witness measurements by detecting the populations of quantum systems in one trial. The approach may also apply to physical systems compatible with ensemble or nondemolition measurements. Moreover, we experimentally show that the ROC quantifies the advantage enabled by a quantum state in a phase discrimination task.
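For reference, the ROC is commonly defined in the literature (Napoli et al., Phys. Rev. Lett. 116, 150502 (2016)) as the minimal mixing weight s needed to wash out all coherence of a state ρ:

\[ C_R(\rho) = \min_{\tau \in \mathcal{D}} \left\{ s \geq 0 \;:\; \frac{\rho + s\,\tau}{1+s} \in \mathcal{I} \right\}, \]

where $\mathcal{D}$ is the set of density matrices and $\mathcal{I}$ the set of incoherent states.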
Mathematics of pulsed vocalizations with application to killer whale biphonation.
Brown, Judith C
2008-05-01
Formulas for the spectra of pulsed vocalizations for both the continuous and discrete cases are rigorously derived from basic formulas for Fourier analysis, a topic discussed qualitatively in Watkins' classic paper on "the harmonic interval" ["The harmonic interval: Fact or artifact in spectral analysis of pulse trains," in Marine Bioacoustics 2, edited by W. N. Tavolga (Pergamon, New York, 1967), pp. 15-43]. These formulas are summarized in a table for easy reference, along with most of the corresponding graphs. The case of a "pulse tone" is shown to involve multiplication of two temporal wave forms, corresponding to convolution in the frequency domain. This operation is discussed in detail and shown to be equivalent to a simpler approach using a trigonometric formula giving sum and difference frequencies. The presence of a dc component in the temporal wave form, which implies physically that there is a net positive pressure at the source, is discussed, and examples of the corresponding spectra are calculated and shown graphically. These have application to biphonation (two source signals) observed for some killer whale calls and implications for a source mechanism. A MATLAB program for synthesis of a similar signal is discussed and made available online.
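The sum-and-difference-frequency behavior of a pulse tone is easy to reproduce numerically. The short Python sketch below (our analogue of the MATLAB synthesis mentioned above; the two frequencies are illustrative) multiplies two sinusoids and confirms that the spectrum contains only the sum and difference frequencies:

import numpy as np

fs = 44100                                 # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0/fs)              # 1 s of signal
f1, f2 = 2000.0, 150.0                     # illustrative carrier and pulse rate

signal = np.sin(2*np.pi*f1*t) * np.sin(2*np.pi*f2*t)   # product of wave forms

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0/fs)
print(np.sort(freqs[np.argsort(spectrum)[-2:]]))        # [1850., 2150.] = f1 -/+ f2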
A Theoretical Approach to Understanding Population Dynamics with Seasonal Developmental Durations
NASA Astrophysics Data System (ADS)
Lou, Yijun; Zhao, Xiao-Qiang
2017-04-01
There is a growing body of biological investigations aimed at understanding the impacts of seasonally changing environmental conditions on population dynamics in research fields such as single-population growth and disease transmission. On the other hand, understanding population dynamics subject to seasonally changing weather conditions plays a fundamental role in predicting trends in population patterns and disease transmission risks under climate-change scenarios. With the host-macroparasite interaction as a motivating example, we propose a synthesized approach for investigating population dynamics subject to seasonal environmental variations from a theoretical point of view, involving model development, basic reproduction ratio formulation and computation, and rigorous mathematical analysis. The resulting model with periodic delay presents a novel term related to the rate of change of the developmental duration, bringing new challenges to the dynamics analysis. By investigating a periodic semiflow on a suitably chosen phase space, global dynamics of threshold type are established: all solutions either go to zero when the basic reproduction ratio is less than one, or stabilize at a positive periodic state when the ratio is greater than one. The synthesized approach developed here is applicable to broader contexts of investigating biological systems with seasonal developmental durations.
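Schematically (our compressed paraphrase of the stated theorem), the threshold dichotomy for the population state u(t) reads

\[ \mathcal{R}_0 < 1 \;\Rightarrow\; u(t) \to 0, \qquad \mathcal{R}_0 > 1 \;\Rightarrow\; u(t) - u^{*}(t) \to 0 \quad (t \to \infty), \]

where $u^{*}(t)$ is the globally attractive positive periodic solution.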
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method is developed for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission. A scenario is analyzed in which multiple core samples would be acquired using a rotary percussive coring tool deployed from an arm on a MER-class rover. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and makes the analysis tractable by breaking the process down into small analyzable steps.
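A minimal numerical sketch of this bookkeeping is given below (Python; the three components, release probabilities, and transport matrix are invented for illustration and are not mission values):

import numpy as np

# Illustrative components: coring tool, robotic arm, sample container.
v = np.array([100.0, 10.0, 0.0])          # expected VEMs at the start

release = np.array([0.01, 0.005, 0.0])    # assumed per-step release probabilities

# T[i, j]: probability that a VEM released from component i lands on j
# (rows need not sum to 1; the remainder is lost to the environment).
T = np.array([[0.0, 0.20, 0.05],
              [0.10, 0.0, 0.10],
              [0.0, 0.0, 1.0]])

for step in range(5):                      # five discrete sampling steps
    moved = v * release
    v = v - moved + moved @ T              # Markov-chain update of expectations
print(v[2])                                # expected VEMs in the sample chain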
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) with more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well-established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains, which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that SDEs are especially suitable for the analysis of fixed-budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new, more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics, namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
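To illustrate the flavor of the SDE approach (a generic sketch, not the paper's drift theorems), an EA whose distance to the optimum decays multiplicatively can be modeled by dX = -δX dt + σX dW and simulated with the Euler-Maruyama scheme:

import numpy as np

rng = np.random.default_rng(1)
delta, sigma = 0.1, 0.05        # assumed drift and noise coefficients
dt, T = 0.01, 100.0
n = int(T/dt)

X = np.empty(n)
X[0] = 100.0                    # initial distance to the optimum
for k in range(n - 1):
    dW = rng.normal(0.0, np.sqrt(dt))
    X[k+1] = X[k] - delta*X[k]*dt + sigma*X[k]*dW   # Euler-Maruyama step

print(X[-1], 100.0*np.exp(-delta*T))  # simulated endpoint vs ODE prediction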
Gibiansky, Leonid; Gibiansky, Ekaterina
2018-02-01
The emerging discipline of mathematical pharmacology occupies the space between advanced pharmacometrics and systems biology. A characteristic feature of the approach is the application of advanced mathematical methods to study the behavior of biological systems as described by mathematical (most often differential) equations. One of the early applications of mathematical pharmacology (not yet called by that name at the time) was the formulation and investigation of the target-mediated drug disposition (TMDD) model and its approximations. The model proved remarkably successful, not only in describing the observed data for drug-target interactions, but also in advancing the qualitative and quantitative understanding of those interactions and their role in the pharmacokinetic and pharmacodynamic properties of biologics. The TMDD model in its original formulation describes the interaction of a drug that has one binding site with a target that also has only one binding site. Following the framework developed earlier for drugs with one-to-one binding, this work aims to describe a rigorous approach for working with similar systems and to apply it to drugs that bind to targets with two binding sites. The quasi-steady-state, quasi-equilibrium, irreversible binding, and Michaelis-Menten approximations of the model are also derived. These equations can be used, in particular, to predict concentrations of the partially bound target (RC). This could be clinically important if RC remains active and has a slow internalization rate. In this case, introduction of a drug intended to suppress target activity may lead to the opposite effect due to RC accumulation.
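For context, the original one-to-one TMDD system (Mager and Jusko's formulation) that the present two-binding-site work generalizes can be written as follows (standard textbook form; In(t) denotes drug input, C, R, and RC are free drug, free target, and complex):

\begin{align*}
\frac{dC}{dt}  &= \mathrm{In}(t) - k_{el}\,C - k_{on}\,C\,R + k_{off}\,RC,\\
\frac{dR}{dt}  &= k_{syn} - k_{deg}\,R - k_{on}\,C\,R + k_{off}\,RC,\\
\frac{dRC}{dt} &= k_{on}\,C\,R - (k_{off} + k_{int})\,RC.
\end{align*}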
Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.
Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P
2018-03-03
Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.
Monitoring muscle optical scattering properties during rigor mortis
NASA Astrophysics Data System (ADS)
Xia, J.; Ranasinghesagara, J.; Ku, C. W.; Yao, G.
2007-09-01
Sarcomere is the fundamental functional unit in skeletal muscle for force generation. In addition, sarcomere structure is also an important factor that affects the eating quality of muscle food, the meat. The sarcomere structure is altered significantly during rigor mortis, which is the critical stage involved in transforming muscle to meat. In this paper, we investigated optical scattering changes during the rigor process in Sternomandibularis muscles. The measured optical scattering parameters were analyzed along with the simultaneously measured passive tension, pH value, and histology analysis. We found that the temporal changes of optical scattering, passive tension, pH value and fiber microstructures were closely correlated during the rigor process. These results suggested that sarcomere structure changes during rigor mortis can be monitored and characterized by optical scattering, which may find practical applications in predicting meat quality.
Multi-Disciplinary Knowledge Synthesis for Human Health Assessment on Earth and in Space
NASA Astrophysics Data System (ADS)
Christakos, G.
We discuss methodological developments in multi-disciplinary knowledge synthesis (KS) of human health assessment. A theoretical KS framework can provide the rational means for the assimilation of various information bases (general, site-specific etc.) that are relevant to the life system of interest. KS-based techniques produce a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, and generate informative health state predictions across space-time. The underlying epistemic cognition methodology is based on teleologic criteria and stochastic logic principles. The mathematics of KS involves a powerful and versatile spatiotemporal random field model that accounts rigorously for the uncertainty features of the life system and imposes no restriction on the shape of the probability distributions or the form of the predictors. KS theory is instrumental in understanding natural heterogeneities, assessing crucial human exposure correlations and laws of physical change, and explaining toxicokinetic mechanisms and dependencies in a spatiotemporal life system domain. It is hoped that a better understanding of KS fundamentals would generate multi-disciplinary models that are useful for the maintenance of human health on Earth and in Space.
Bayly, Philip V.; Wilson, Kate S.
2014-01-01
The motion of flagella and cilia arises from the coordinated activity of dynein motor protein molecules arrayed along microtubule doublets that span the length of the axoneme (the flagellar cytoskeleton). Dynein activity causes relative sliding between the doublets, which generates propulsive bending of the flagellum. The mechanism of dynein coordination remains incompletely understood, although it has been the focus of many studies, both theoretical and experimental. In one leading hypothesis, known as the geometric clutch (GC) model, local dynein activity is thought to be controlled by interdoublet separation. The GC model has been implemented as a numerical simulation in which the behavior of a discrete set of rigid links in viscous fluid, driven by active elements, was approximated using a simplified time-marching scheme. A continuum mechanical model and associated partial differential equations of the GC model have remained lacking. Such equations would provide insight into the underlying biophysics, enable mathematical analysis of the behavior, and facilitate rigorous comparison to other models. In this article, the equations of motion for the flagellum and its doublets are derived from mechanical equilibrium principles and simple constitutive models. These equations are analyzed to reveal mechanisms of wave propagation and instability in the GC model. With parameter values in the range expected for Chlamydomonas flagella, solutions to the fully nonlinear equations closely resemble observed waveforms. These results support the ability of the GC hypothesis to explain dynein coordination in flagella and provide a mathematical foundation for comparison to other leading models. PMID:25296329
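Although the paper's full equations couple doublet sliding to interdoublet separation, their backbone is the standard small-amplitude elastohydrodynamic balance for a slender filament in viscous fluid (schematic form in our notation, not the authors' exact system):

\[ \xi_N \,\frac{\partial y}{\partial t} = -\,EI\,\frac{\partial^{4} y}{\partial x^{4}} + \frac{\partial^{2} M_a}{\partial x^{2}}, \]

where y(x, t) is the transverse displacement, ξ_N the normal drag coefficient, EI the bending stiffness, and M_a the active moment exerted by dynein.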
Bondarenko, Vladimir E; Cymbalyuk, Gennady S; Patel, Girish; Deweerth, Stephen P; Calabrese, Ronald L
2004-12-01
Oscillatory activity in the central nervous system is associated with various functions, like motor control, memory formation, binding, and attention. Quasiperiodic oscillations are rarely discussed in the neurophysiological literature, yet they may play a role in the nervous system both during normal function and in disease. Here we use a physical system and a model to explore scenarios for how quasiperiodic oscillations might arise in neuronal networks. An oscillatory system of two mutually inhibitory neuronal units is a ubiquitous network module found in nervous systems and is called a half-center oscillator. Previously, we created a half-center oscillator of two identical oscillatory silicon (analog Very Large Scale Integration) neurons and developed a mathematical model describing its dynamics. In the mathematical model, we have shown that an in-phase limit cycle becomes unstable through a subcritical torus bifurcation. However, the existence of this torus bifurcation in the experimental silicon two-neuron system was not rigorously demonstrated or investigated. Here we demonstrate the torus predicted by the model for the silicon implementation of a half-center oscillator using complex time-series analysis, including bifurcation diagrams, mapping techniques, correlation functions, amplitude spectra, and correlation dimensions, and we investigate how the properties of the quasiperiodic oscillations depend on the strengths of coupling between the silicon neurons. The potential advantages and disadvantages of quasiperiodic oscillations (torus) for biological neural systems and artificial neural networks are discussed.
A case of instantaneous rigor?
Pirch, J; Schulz, Y; Klintschar, M
2013-09-01
The question of whether instantaneous rigor mortis (IR), the hypothetical sudden occurrence of stiffening of the muscles upon death, actually exists has been controversially debated over the last 150 years. While modern German forensic literature rejects this concept, the contemporary British literature is more willing to embrace it. We present the case of a young woman who suffered from diabetes and who was found dead in an upright standing position with back and shoulders leaning against a punchbag and a cupboard. Rigor mortis was fully established; livor mortis was strong and consistent with the position in which the body was found. After autopsy and toxicological analysis, it was concluded that death most probably occurred due to a ketoacidotic coma, with markedly increased values of glucose and lactate in the cerebrospinal fluid as well as acetone in blood and urine. Whereas the position of the body is most unusual, a detailed analysis revealed that it is a stable position even without rigor mortis. Therefore, this case does not further support the controversial concept of IR.
Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank
2018-02-12
Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while the power-laws of critical phenomena are derived asymptotically under the condition of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's-law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates, bringing about statistical woes. Surprisingly, a straightforward power-law correction that restores the distorted distribution to a single exponent value can dramatically reduce data heteroskedasticity, invoking an instant increase in signal-to-noise ratio of 50% and in statistical/detection sensitivity of as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (a 4-fold difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth versus 13% for the lower) and demonstrates that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line against AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
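As a toy illustration of the diagnostic at issue (not the authors' correction procedure), the log-log rank-frequency slope of count data can be fitted directly; residual curvature away from a single straight line is the finite-size signature discussed above:

import numpy as np

rng = np.random.default_rng(0)
counts = np.sort(rng.zipf(1.8, 5000))[::-1].astype(float)  # synthetic counts

ranks = np.arange(1, counts.size + 1, dtype=float)
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print(slope)    # single fitted exponent; curvature in residuals = finite size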
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical issue targeted at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
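The solution-verification half of the procedure is compact enough to sketch. Given a scalar output computed on three uniformly refined grids (the values below are illustrative; refinement ratio r = 2), the observed order of accuracy and the Richardson-extrapolated value follow directly:

import numpy as np

f1, f2, f3 = 1.1050, 1.0270, 1.0068   # coarse, medium, fine grid results (toy)
r = 2.0                                # grid refinement ratio

p = np.log((f1 - f2) / (f2 - f3)) / np.log(r)  # observed order of accuracy
f_star = f3 + (f3 - f2) / (r**p - 1.0)         # Richardson-extrapolated value
print(p, f_star)                                # ~2 and ~1.0 for these data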
Bipotential continuum models for granular mechanics
NASA Astrophysics Data System (ADS)
Goddard, Joe
2014-03-01
Most currently popular continuum models for granular media are special cases of a generalized Maxwell fluid model, which describes the evolution of stress and internal variables, such as granular particle fraction and fabric, in terms of imposed strain rate. It is shown how such models can be obtained from two scalar potentials: a standard elastic free energy and a "dissipation potential" given rigorously by the mathematical theory of Edelen. This allows for a relatively easy derivation of properly invariant continuum models for granular media and fluid-particle suspensions within a thermodynamically consistent framework. The resulting continuum models encompass all the prominent regimes of granular flow, ranging from the quasi-static to the rapidly sheared, and are readily extended to include higher-gradient or Cosserat effects. Models involving stress diffusion, such as that proposed recently by Kamrin and Koval (PRL 108, 178301), provide an alternative approach that is mentioned in passing. This paper provides a brief overview of forthcoming review articles by the speaker (The Princeton Companion to Applied Mathematics and Appl. Mech. Rev., in press, 2013).
Crisis in science: in search for new theoretical foundations.
Schroeder, Marcin J
2013-09-01
Recognition of the need for theoretical biology more than half a century ago did not bring substantial progress in this direction. Recently, the need for new methods in science, including physics, has become clear. The breakthrough should be sought in answering the question "What is life?", which can help to explain the mechanisms of consciousness and consequently give insight into the way we comprehend reality. This could help in the search for new methods in the study of both physical and biological phenomena. However, to achieve this, a new theoretical discipline will have to be developed, with a very general conceptual framework and the rigor of mathematical reasoning, allowing it to assume a leading role in science. Since its foundations lie in recognition of the role of life and consciousness in the epistemic process, it could be called biomathics. The prime candidates proposed here as the fundamental concepts of biomathics are 'information' and 'information integration', with an appropriately general mathematical formalism. Copyright © 2013 Elsevier Ltd. All rights reserved.
Unique geologic insights from "non-unique" gravity and magnetic interpretation
Saltus, R.W.; Blakely, R.J.
2011-01-01
Interpretation of gravity and magnetic anomalies is mathematically non-unique because multiple theoretical solutions are always possible. The rigorous mathematical label of "nonuniqueness" can lead to the erroneous impression that no single interpretation is better in a geologic sense than any other. The purpose of this article is to present a practical perspective on the theoretical non-uniqueness of potential-field interpretation in geology. There are multiple ways to approach and constrain potential-field studies to produce significant, robust, and definitive results. The "non-uniqueness" of potential-field studies is closely related to the more general topic of scientific uncertainty in the Earth sciences and beyond. Nearly all results in the Earth sciences are subject to significant uncertainty because problems are generally addressed with incomplete and imprecise data. The increasing need to combine results from multiple disciplines into integrated solutions in order to address complex global issues requires special attention to the appreciation and communication of uncertainty in geologic interpretation.
A new mathematical formulation of the line-by-line method in case of weak line overlapping
NASA Technical Reports Server (NTRS)
Ishov, Alexander G.; Krymova, Natalie V.
1994-01-01
A rigorous mathematical proof is presented for the multiline representation of the equivalent width of a molecular band consisting, in the general case, of n overlapping spectral lines. The multiline representation includes a principal term and terms of minor significance. The principal term is the equivalent width of the molecular band consisting of the same n nonoverlapping spectral lines. The terms of minor significance take into consideration the overlapping of two, three, and more spectral lines. They are small in the case of weak overlapping of spectral lines in the molecular band. The multiline representation can easily be generalized to optically inhomogeneous gas media and holds true for combinations of molecular bands. If the band lines overlap weakly, the standard formulation of the line-by-line method becomes too labor-consuming. In this case the multiline representation permits line-by-line calculations to be performed more effectively. Other useful properties of the multiline representation are pointed out.
Quantum correlations and dynamics from classical random fields valued in complex Hilbert spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrennikov, Andrei
2010-08-15
One of the crucial differences between mathematical models of classical and quantum mechanics (QM) is the use of the tensor product of the state spaces of subsystems as the state space of the corresponding composite system. (To describe an ensemble of classical composite systems, one uses random variables taking values in the Cartesian product of the state spaces of subsystems.) We show that, nevertheless, it is possible to establish a natural correspondence between the classical and the quantum probabilistic descriptions of composite systems. Quantum averages for composite systems (including entangled ones) can be represented as averages with respect to classical random fields. It is essentially what Albert Einstein dreamed of. QM is represented as classical statistical mechanics with infinite-dimensional phase space. While the mathematical construction is completely rigorous, its physical interpretation is a complicated problem. We present the basic physical interpretation of prequantum classical statistical field theory in Sec. II. However, this is only the first step toward a real physical theory.
On decentralized design: Rationale, dynamics, and effects on decision-making
NASA Astrophysics Data System (ADS)
Chanron, Vincent
The focus of this dissertation is the design of complex systems, including engineering systems such as cars, airplanes, and satellites. Companies who design these systems are under constant pressure to design better products that meet customer expectations, and competition forces them to develop them faster. One of the responses of the industry to these conflicting challenges has been the decentralization of design responsibilities. The current lack of understanding of the dynamics of decentralized design processes is the main motivation for this research and places value on its descriptive base. The dissertation identifies the main reasons for, and the true benefits of, decentralizing the design of products. It also demonstrates the limitations of this approach by listing the relevant issues and problems created by the decentralization of decisions. Based on these observations, a game-theoretic approach to decentralized design is proposed to model the decisions made during the design process. The dynamics are modeled using mathematical formulations inspired by control theory. Building upon this formalism, the issue of convergence in decentralized design is analyzed: the equilibrium points of the design space are identified, and convergent and divergent patterns are recognized. This rigorous investigation of the design process provides motivation and support for proposing new approaches to decentralized design problems. Two methods are developed, which aim at improving the design process in two ways: decreasing product development time and increasing the optimality of the final design. The frameworks of these methods are inspired by eigenstructure decomposition and set-based design, respectively. The value of the research detailed in this dissertation lies in the proposed methods, which are built upon the sound mathematical formalism developed. The contribution of this work is twofold: a rigorous investigation of the design process, and practical support for decision-making in decentralized environments.
Methodological Developments in Geophysical Assimilation Modeling
NASA Astrophysics Data System (ADS)
Christakos, George
2005-06-01
This work presents recent methodological developments in geophysical assimilation research. We revisit the meaning of the term "solution" of a mathematical model representing a geophysical system, and we examine its operational formulations. We argue that an assimilation solution based on epistemic cognition (which assumes that the model describes incomplete knowledge about nature and focuses on conceptual mechanisms of scientific thinking) could lead to more realistic representations of the geophysical situation than a conventional ontologic assimilation solution (which assumes that the model describes nature as is and focuses on form manipulations). Conceptually, the two approaches are fundamentally different. Unlike the reasoning structure of conventional assimilation modeling that is based mainly on ad hoc technical schemes, the epistemic cognition approach is based on teleologic criteria and stochastic adaptation principles. In this way some key ideas are introduced that could open new areas of geophysical assimilation to detailed understanding in an integrated manner. A knowledge synthesis framework can provide the rational means for assimilating a variety of knowledge bases (general and site specific) that are relevant to the geophysical system of interest. Epistemic cognition-based assimilation techniques can produce a realistic representation of the geophysical system, provide a rigorous assessment of the uncertainty sources, and generate informative predictions across space-time. The mathematics of epistemic assimilation involves a powerful and versatile spatiotemporal random field theory that imposes no restriction on the shape of the probability distributions or the form of the predictors (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated) and accounts rigorously for the uncertainty features of the geophysical system. In the epistemic cognition context the assimilation concept may be used to investigate critical issues related to knowledge reliability, such as uncertainty due to model structure error (conceptual uncertainty).
Stochastic Geometry and Quantum Gravity: Some Rigorous Results
NASA Astrophysics Data System (ADS)
Zessin, H.
The aim of these lectures is a short introduction to some recent developments in stochastic geometry which have one of their origins in simplicial gravity theory (see Regge, Nuovo Cimento 19: 558-571, 1961). The aim is to define and rigorously construct point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest is then concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent presentation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008. German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry, Cambridge University Press, Cambridge, 1997) you will find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807, Springer, Heidelberg, 2010). After an informal axiomatic introduction to the conceptual foundations of Regge's approach, the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We conclude with the formulation of basic open problems. Proofs are given in detail only in a few cases; in general the main ideas are developed.
Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R
2017-10-23
The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged in the last decade between different fields of study, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results obtained and the quantification of its statistical reliability are lost. Despite its relevance, there is not a standard form for the in vitro endpoint tumor cell lines' assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. The analysis of all the specific problems associated with the diverse nature of the available TCLA used is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as the convenient approximation to test the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of all the different parametric criteria derived from the model, which enable the quantification of the dose-response drug-effects, are extensively discussed. Therefore, model and standard criteria for easily performing the comparisons between different compounds are established. The advantages include a simple application, provision of parametric estimations that characterize the response as standard criteria, economization of experimental effort and enabling rigorous comparisons among the effects of different compounds and experimental approaches. In all experimental data fitted, the calculated parameters were always statistically significant, the equations proved to be consistent and the correlation coefficient of determination was, in most of the cases, higher than 0.98.
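A minimal sketch of such a fit is shown below (Python/SciPy; the parameterization and the data points are illustrative, not the authors' exact model or measurements):

import numpy as np
from scipy.optimize import curve_fit

def weibull_response(dose, K, m, a):
    # K: maximal response; m: dose of half-maximal response; a: shape.
    return K * (1.0 - np.exp(-np.log(2.0) * (dose / m)**a))

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])       # toy doses
resp = np.array([0.05, 0.12, 0.26, 0.45, 0.68, 0.82, 0.90])  # toy responses

(K, m, a), cov = curve_fit(weibull_response, dose, resp, p0=[1.0, 5.0, 1.0])
print(K, m, a)   # parametric criteria for comparing compounds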
Response to Ridgeway, Dunston, and Qian: On Methodological Rigor: Has Rigor Mortis Set In?
ERIC Educational Resources Information Center
Baldwin, R. Scott; Vaughn, Sharon
1993-01-01
Responds to an article in the same issue of the journal presenting a meta-analysis of reading research. Expresses concern that the authors' conclusions will promote a slavish adherence to a methodology and a rigidity of thought that reading researchers can ill afford. (RS)
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2017-01-01
This review brings rigorous life cycle cost (LCC) analysis into discussions about COTS program costs. We gather publicly available cost data, review the data for credibility, check for consistency among sources, and rigorously define and analyze specific cost metrics.
Systemic Planning: An Annotated Bibliography and Literature Guide. Exchange Bibliography No. 91.
ERIC Educational Resources Information Center
Catanese, Anthony James
Systemic planning is an operational approach to using scientific rigor and qualitative judgment in a complementary manner. It integrates rigorous techniques and methods from systems analysis, cybernetics, decision theory, and work programming. The annotated reference sources in this bibliography include those works that have been most influential…
Mathematical models and photogrammetric exploitation of image sensing
NASA Astrophysics Data System (ADS)
Puatanachokchai, Chokchai
Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and on using such models for its implementation. The main thrust of this research is in replacement sensor models which have three important characteristics: (1) Highly accurate ground-to-image functions; (2) Rigorous error propagation that is essentially of the same accuracy as the physical model; and, (3) Adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered as True Replacement Models or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several writings about replacement sensor models, and except for the so called RSM (Replacement Sensor Model as a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is because, it is suspected, the few physical sensor parameters are usually replaced by many more parameters, thus presenting a potential error estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding. It provides an equivalent flexibility to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.
Validation of Fatigue Modeling Predictions in Aviation Operations
NASA Technical Reports Server (NTRS)
Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin
2017-01-01
Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet most models have not been rigorously evaluated and independently validated for the operations to which they are being applied, and many users are not fully aware of the limitations within which model results should be interpreted and applied.
Boundary-layer effects in composite laminates: Free-edge stress singularities, part 6
NASA Technical Reports Server (NTRS)
Wang, S. S.; Choi, I.
1981-01-01
A rigorous mathematical model was obtained for the boundary-layer free-edge stress singularity in angleplied and crossplied fiber composite laminates. The solution was obtained using a method consisting of complex-variable stress function potentials and eigenfunction expansions. The required order of the boundary-layer stress singularity is determined by solving the transcendental characteristic equation obtained from the homogeneous solution of the partial differential equations. Numerical results obtained show that the boundary-layer stress singularity depends only upon material elastic constants and fiber orientation of the adjacent plies. For angleplied and crossplied laminates the order of the singularity is weak in general.
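Schematically, the boundary-layer stress field near the free edge then has the separable singular form (our notation)

\[ \sigma_{ij}(r, \theta) \sim K_{ij}(\theta)\, r^{-\delta}, \qquad r \to 0, \]

where r is the distance from the free edge and the order δ is the root of the transcendental characteristic equation mentioned above.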
Higher order temporal finite element methods through mixed formalisms.
Kim, Jinkyu
2014-01-01
The extended framework of Hamilton's principle and the mixed convolved action principle provide a new, rigorous, weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher-order approximations is investigated. Classical single-degree-of-freedom dynamical systems are primarily considered to validate and investigate the performance of the numerical algorithms developed from both formulations. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.
The new camera calibration system at the US Geological Survey
Light, D.L.
1992-01-01
Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters, which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. The Geological Survey's calibration facility and the additional calibration parameters now provided in the USGS calibration certificate are reviewed.
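For reference, Brown's model expresses the radial (K_i) and decentering (P_1, P_2) distortion corrections of image coordinates (x, y), with r² = x² + y², in the standard form

\begin{align*}
\Delta x &= x\,(K_1 r^{2} + K_2 r^{4} + K_3 r^{6}) + P_1\,(r^{2} + 2x^{2}) + 2 P_2\, x y,\\
\Delta y &= y\,(K_1 r^{2} + K_2 r^{4} + K_3 r^{6}) + P_2\,(r^{2} + 2y^{2}) + 2 P_1\, x y.
\end{align*}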
NASA Astrophysics Data System (ADS)
Antsiferov, SV; Sammal, AS; Deev, PV
2018-03-01
To determine the stress-strain state of the multilayer support of vertical shafts, including cross-sectional deformation of the tubing rings relative to the design shape, the authors propose an analytical method based on the provisions of the mechanics of underground structures, treating the support and the surrounding rock mass as elements of an integrated deformable system. The method involves a rigorous solution of the corresponding problem of elasticity, obtained using the mathematical apparatus of the theory of analytic functions of a complex variable. The design method is implemented as a software program allowing multivariate applied computation. Examples of the calculation are given.
Best packing of identical helices
NASA Astrophysics Data System (ADS)
Huh, Youngsik; Hong, Kyungpyo; Kim, Hyoungjun; No, Sungjong; Oh, Seungsang
2016-10-01
In this paper we prove the unique existence of a ropelength-minimizing conformation of the θ-spun double helix in a mathematically rigorous way, and find the minimal ropelength $\mathrm{Rop}^{*}(\theta) = -\tfrac{8\pi}{t}$, where $t$ is the unique solution in $[-\theta, 0]$ of the equation $2 - 2\cos(t + \theta) = t^{2}$. Using this result, the pitch angles of the standard, triple, and quadruple helices are around $39.3771^{\circ}$, $42.8354^{\circ}$, and $43.8351^{\circ}$, respectively, which are almost identical to the approximate pitch angles of the zero-twist structures previously found by Olsen and Bohr. We also find the ropelength of the standard $N$-helix.
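The closed-form answer is straightforward to evaluate numerically; a short Python sketch (our illustration) solves the transcendental equation for a given θ:

import numpy as np
from scipy.optimize import brentq

def min_ropelength(theta):
    # Solve 2 - 2*cos(t + theta) = t**2 for t in [-theta, 0), then Rop = -8*pi/t.
    f = lambda t: 2.0 - 2.0*np.cos(t + theta) - t*t
    t = brentq(f, -theta, -1e-12)
    return -8.0*np.pi/t

print(min_ropelength(np.deg2rad(45.0)))   # illustrative evaluation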
Interferometric millimeter wave and THz wave doppler radar
Liao, Shaolin; Gopalsami, Nachappa; Bakhtiari, Sasan; Raptis, Apostolos C.; Elmer, Thomas
2015-08-11
A mixerless high-frequency interferometric Doppler radar system and associated methods have been invented, numerically validated, and experimentally tested. A continuous wave source, a phase modulator (e.g., a continuously oscillating reference mirror), and an intensity detector are utilized. The intensity detector measures the intensity of the combined reflected Doppler signal and the modulated reference beam. Rigorous mathematical formulas have been developed to extract both amplitude and phase from the measured intensity signal. MATLAB software has been developed and used to extract such amplitude and phase information from the experimental data. Both amplitude and phase are calculated, and the Doppler frequency signature of the object is determined.
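The underlying relation is the standard two-beam interference law (schematic, in our notation): the detected intensity of the combined signal and reference fields is

\[ I(t) = \big| A_s e^{i\phi_s(t)} + A_r e^{i\phi_r(t)} \big|^{2} = A_s^{2} + A_r^{2} + 2 A_s A_r \cos\!\big(\phi_s(t) - \phi_r(t)\big), \]

so sweeping the known reference phase φ_r(t) allows both the amplitude A_s and the Doppler phase φ_s(t) to be recovered from intensity measurements alone.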
High and low rigor temperature effects on sheep meat tenderness and ageing.
Devine, Carrick E; Payne, Steven R; Peachey, Bridget M; Lowe, Timothy E; Ingram, John R; Cook, Christian J
2002-02-01
Immediately after electrical stimulation, the paired m. longissimus thoracis et lumborum (LT) of 40 sheep were boned out and wrapped tightly with a polyethylene cling film. One of the paired LTs was chilled in 15°C air to reach a rigor mortis (rigor) temperature of 18°C, and the other side was placed in a water bath at 35°C and achieved rigor at this temperature. Wrapping reduced rigor shortening and mimicked meat left on the carcass. After rigor, the meat was aged at 15°C for 0, 8, 26 and 72 h and then frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values were obtained from a 1×1 cm cross-section. The shear force values of meat for 18 and 35°C rigor were similar at zero ageing, but as ageing progressed, the 18°C rigor meat aged faster and became more tender than meat that went into rigor at 35°C (P<0.001). The mean sarcomere length values of meat samples for 18 and 35°C rigor at each ageing time were significantly different (P<0.001), the samples at 35°C being shorter. When the short sarcomere length values and corresponding shear force values were removed for further data analysis, the shear force values for the 35°C rigor were still significantly greater. Thus the toughness of 35°C meat was not a consequence of muscle shortening and appears to be due to both a faster rate of tenderisation and the meat tenderising to a greater extent at the lower temperature. The cook loss at 35°C rigor (30.5%) was greater than that at 18°C rigor (28.4%) (P<0.01), and the colour Hunter L values were higher at 35°C (P<0.01) compared with 18°C, but there were no significant differences in a or b values.
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds, and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated with the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of the initial conditions, which explains the infinite number of periodic orbits exhibited by these models, is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
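A minimal sketch of the set-sizing idea (our toy example: a single requirement g(p) ≤ 0, a hyper-spherical uncertainty model, and SciPy's SLSQP for the inner worst case) is:

import numpy as np
from scipy.optimize import minimize

g = lambda p: p[0]**2 + 2.0*p[1] - 3.0      # toy design requirement g(p) <= 0
p_nom = np.array([0.5, 0.5])                # nominal parameter value

def worst_case(radius):
    # Maximize g over the hypersphere ||p - p_nom|| <= radius.
    con = {'type': 'ineq',
           'fun': lambda p: radius - np.linalg.norm(p - p_nom)}
    res = minimize(lambda p: -g(p), p_nom, constraints=[con])
    return -res.fun

lo, hi = 0.0, 5.0
for _ in range(40):                          # bisect on the critical radius
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if worst_case(mid) <= 0.0 else (lo, mid)
print(lo)                                    # largest 'safe' hypersphere radius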
The Lévy flight paradigm: random search patterns and mechanisms.
Reynolds, A M; Rhodes, C J
2009-04-01
Over recent years there has been an accumulation of evidence from a variety of experimental, theoretical, and field studies that many organisms use a movement strategy approximated by Lévy flights when they are searching for resources. Lévy flights are random movements that can maximize the efficiency of resource searches in uncertain environments. This is a highly significant finding because it suggests that Lévy flights provide a rigorous mathematical basis for separating out evolved, innate behaviors from environmental influences. We discuss recent developments in random-search theory, as well as the many different experimental and data collection initiatives that have investigated search strategies. Methods for trajectory construction and robust data analysis procedures are presented. The key to prediction and understanding does, however, lie in the elucidation of mechanisms underlying the observed patterns. We discuss candidate neurological, olfactory, and learning mechanisms for the emergence of Lévy flight patterns in some organisms, and note that convergence of behaviors along such different evolutionary pathways is not surprising given the energetic efficiencies that Lévy flight movement patterns confer.
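A Lévy flight is simple to generate. The sketch below (Python; the tail exponent and unit minimum step are assumed for illustration) draws power-law step lengths by inverse-transform sampling and accumulates a 2-D trajectory:

import numpy as np

rng = np.random.default_rng(0)
mu, n = 2.0, 10000            # assumed tail exponent (1 < mu <= 3), step count

u = rng.random(n)
lengths = (1.0 - u)**(-1.0/(mu - 1.0))   # P(l) ~ l**(-mu) for l >= 1
angles = rng.uniform(0.0, 2.0*np.pi, n)

path = np.cumsum(np.column_stack((lengths*np.cos(angles),
                                  lengths*np.sin(angles))), axis=0)
print(lengths.max(), np.linalg.norm(path[-1]))  # rare huge steps dominate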
Characterization of the geometry and topology of DNA pictured as a discrete collection of atoms
Olson, Wilma K.
2014-01-01
The structural and physical properties of DNA are closely related to its geometry and topology. The classical mathematical treatment of DNA geometry and topology in terms of ideal smooth space curves was not designed to characterize the spatial arrangements of atoms found in high-resolution and simulated double-helical structures. We present here new and rigorous numerical methods for the rapid and accurate assessment of the geometry and topology of double-helical DNA structures in terms of the constituent atoms. These methods are well designed for large DNA datasets obtained in detailed numerical simulations or determined experimentally at high-resolution. We illustrate the usefulness of our methodology by applying it to the analysis of three canonical double-helical DNA chains, a 65-bp minicircle obtained in recent molecular dynamics simulations, and a crystallographic array of protein-bound DNA duplexes. Although we focus on fully base-paired DNA structures, our methods can be extended to treat the geometry and topology of melted DNA structures as well as to characterize the folding of arbitrary molecules such as RNA and cyclic peptides. PMID:24791158
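Central to such analyses is White's formula relating the linking number of the two strands to twist and writhe (classical smooth-curve form, which the paper adapts to discrete atomic representations):

\[ Lk = Tw + Wr, \qquad Wr = \frac{1}{4\pi} \oint\!\!\oint \frac{\big(\dot{\mathbf r}_1 \times \dot{\mathbf r}_2\big) \cdot \big(\mathbf r_1 - \mathbf r_2\big)}{\left| \mathbf r_1 - \mathbf r_2 \right|^{3}} \, ds_1\, ds_2, \]

where the double integral runs over the duplex axis curve.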
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for a popular brain imaging technique, HARDI, based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and it delivers the same information as probabilistic tractography methods. Our method is computationally cheap and it provides a well-founded mathematical and statistical framework where diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way.
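Abstracting away the estimation details, fiber tracing reduces to integrating an ODE dx/ds = v(x) through the estimated direction field. A minimal sketch follows, with a fixed analytic field standing in for the HARDI-derived principal directions (the function and field are hypothetical, not the paper's estimator):

```python
import numpy as np

def principal_direction(x):
    """Hypothetical stand-in for the leading fiber direction estimated from
    HARDI data at position x (here a fixed analytic 2-D vector field)."""
    v = np.array([1.0, 0.2*np.sin(x[0])])
    return v / np.linalg.norm(v)

def trace_fiber(x0, step=0.05, n=400):
    """Trace an integral curve dx/ds = v(x) with 4th-order Runge-Kutta."""
    x = np.array(x0, float)
    curve = [x.copy()]
    for _ in range(n):
        k1 = principal_direction(x)
        k2 = principal_direction(x + 0.5*step*k1)
        k3 = principal_direction(x + 0.5*step*k2)
        k4 = principal_direction(x + step*k3)
        x = x + (step/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        curve.append(x.copy())
    return np.array(curve)

fiber = trace_fiber([0.0, 0.0])
```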
Modeling Pilot State in Next Generation Aircraft Alert Systems
NASA Technical Reports Server (NTRS)
Carlin, Alan S.; Alexander, Amy L.; Schurr, Nathan
2011-01-01
The Next Generation Air Transportation System will introduce new, advanced sensor technologies into the cockpit that must convey a large number of potentially complex alerts. Our work focuses on the challenges associated with prioritizing aircraft sensor alerts in a quick and efficient manner, essentially determining when and how to alert the pilot. This "alert decision" becomes very difficult in NextGen due to the following challenges: 1) the increasing number of potential hazards, 2) the uncertainty associated with the state of potential hazards as well as pilot state, and 3) the limited time to make safety-critical decisions. In this paper, we focus on pilot state and present a model for anticipating duration and quality of pilot behavior, for use in a larger system which issues aircraft alerts. We estimate pilot workload, which we model as being dependent on factors including mental effort, task demands, and task performance. We perform a mathematically rigorous analysis of the model and resulting alerting plans. We simulate the model in software and present simulated results with respect to manipulation of the pilot measures.
Space radiator simulation system analysis
NASA Technical Reports Server (NTRS)
Black, W. Z.; Wulff, W.
1972-01-01
A transient heat transfer analysis was carried out on a space radiator heat rejection system exposed to an arbitrarily prescribed combination of aerodynamic heating, solar, albedo, and planetary radiation. A rigorous analysis was carried out for the radiation panel and tubes lying in one plane and an approximate analysis was used to extend the rigorous analysis to the case of a curved panel. The analysis permits the consideration of both gaseous and liquid coolant fluids, including liquid metals, under prescribed, time dependent inlet conditions. The analysis provided a method for predicting: (1) transient and steady-state, two dimensional temperature profiles, (2) local and total heat rejection rates, (3) coolant flow pressure in the flow channel, and (4) total system weight and protection layer thickness.
Geometrically Nonlinear Static Analysis of 3D Trusses Using the Arc-Length Method
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2006-01-01
Rigorous analysis of geometrically nonlinear structures demands creating mathematical models that accurately include loading and support conditions and, more importantly, model the stiffness and response of the structure. Nonlinear geometric structures often contain critical points with snap-through behavior during the response to large loads. Studying the post-buckling behavior during a portion of a structure's unstable load history may be necessary. Primary structures made from ductile materials will stretch enough prior to failure for loads to redistribute, producing sudden and often catastrophic collapses that are difficult to predict. The responses and redistribution of the internal loads during collapses and possible sharp snap-back of structures have frequently caused numerical difficulties in analysis procedures. The presence of critical stability points and unstable equilibrium paths presents major difficulties that numerical solutions must overcome to fully capture the nonlinear response. Some hurdles still exist in finding nonlinear responses of structures under large geometric changes. Predicting snap-through and snap-back of certain structures has been difficult and time consuming. Also difficult is finding how much load a structure may still carry safely. Highly geometrically nonlinear responses of structures exhibiting complex snap-back behavior are presented and analyzed with a finite element approach. The arc-length method is reviewed and shown to predict the proper response and follow the nonlinear equilibrium path through limit points.
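A minimal sketch of the spherical (Riks-type) arc-length method on a one-degree-of-freedom snap-through model may clarify how the method traverses limit points where load control fails; the internal-force law and all constants below are invented for illustration and are not the paper's truss model:

```python
import numpy as np

def f_int(u):                  # internal force of a 1-DOF snap-through model
    return u**3 - 3.0*u**2 + 2.5*u

def k_t(u):                    # tangent stiffness d(f_int)/du
    return 3.0*u**2 - 6.0*u + 2.5

F = 1.0                        # reference external load
psi, ds = 1.0, 0.05            # load-scaling factor and arc-length increment
u = lam = 0.0
du_prev, dl_prev = 0.0, 1.0    # previous increment (seed direction: loading)
path = [(u, lam)]

for _ in range(150):
    # predictor: tangent (du, dl) ~ (F, k_t), scaled so du^2 + psi^2 dl^2 = ds^2
    den = np.hypot(F, psi*k_t(u))
    du, dl = F*ds/den, k_t(u)*ds/den
    if du_prev*du + psi**2*dl_prev*dl < 0.0:   # keep marching forward
        du, dl = -du, -dl
    un, ln = u + du, lam + dl
    # corrector: Newton on [equilibrium residual; spherical constraint]
    for _ in range(30):
        R = f_int(un) - ln*F
        A = (un - u)**2 + psi**2*(ln - lam)**2 - ds**2
        if max(abs(R), abs(A)) < 1e-11:
            break
        J = np.array([[k_t(un), -F],
                      [2.0*(un - u), 2.0*psi**2*(ln - lam)]])
        step = np.linalg.solve(J, [-R, -A])
        un += step[0]; ln += step[1]
    du_prev, dl_prev = un - u, ln - lam
    u, lam = un, ln
    path.append((u, lam))      # traces lambda(u) through both limit points
```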
The comet moment as a measure of DNA damage in the comet assay.
Kent, C R; Eady, J J; Ross, G M; Steel, G G
1995-06-01
The development of rapid assays of radiation-induced DNA damage requires the definition of reliable parameters for the evaluation of dose-response relationships to compare with cellular endpoints. We have used the single-cell gel electrophoresis (SCGE) or 'comet' assay to measure DNA damage in individual cells after irradiation. Both the alkaline and neutral protocols were used. In both cases, DNA was stained with ethidium bromide and viewed using a fluorescence microscope at 516-560 nm. Images of comets were stored as 512 x 512 pixel images using OPTIMAS, an image analysis software package. Using this software we tested various parameters for measuring DNA damage. We have developed a method of analysis that rigorously conforms to the mathematical definition of the moment of inertia of a plane figure. This parameter does not require the identification of separate head and tail regions, but rather calculates a moment of the whole comet image. We have termed this parameter 'comet moment'. This method is simple to calculate and can be performed using most image analysis software packages that support macro facilities. In experiments on CHO-K1 cells, tail length was found to increase linearly with dose, but plateaued at higher doses. Comet moment also increased linearly with dose, but over a larger dose range than tail length and had no tendency to plateau.
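A minimal numpy sketch of the idea, assuming the comet moment is the polar second moment of the full intensity image about its intensity centroid (the paper's definition may use a closely related axis convention):

```python
import numpy as np

def comet_moment(img):
    """Second moment of the whole comet image about its intensity centroid;
    no separate identification of head and tail regions is required."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    xc, yc = (xs*img).sum()/total, (ys*img).sum()/total
    return (((xs - xc)**2 + (ys - yc)**2)*img).sum()/total
```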
On the definition of absorbed dose
NASA Astrophysics Data System (ADS)
Grusell, Erik
2015-02-01
Purpose: The quantity absorbed dose is used extensively in all areas concerning the interaction of ionizing radiation with biological organisms, as well as with matter in general. The most recent and authoritative definition of absorbed dose is given by the International Commission on Radiation Units and Measurements (ICRU) in ICRU Report 85. However, that definition is incomplete. The purpose of the present work is to give a rigorous definition of absorbed dose. Methods: Absorbed dose is defined in terms of the random variable specific energy imparted. A random variable is a mathematical function, and it cannot be defined without specifying its domain of definition which is a probability space. This is not done in report 85 by the ICRU, mentioned above. Results: In the present work a definition of a suitable probability space is given, so that a rigorous definition of absorbed dose is possible. This necessarily includes the specification of the experiment which the probability space describes. In this case this is an irradiation, which is specified by the initial particles released and by the material objects which can interact with the radiation. Some consequences are discussed. Specific energy imparted is defined for a volume, and the definition of absorbed dose as a point function involves the specific energy imparted for a small mass contained in a volume surrounding the point. A possible more precise definition of this volume is suggested and discussed. Conclusions: The importance of absorbed dose motivates a proper definition, and one is given in the present work. No rigorous definition has been presented before.
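A hedged paraphrase of the definitional chain in LaTeX (notation ours, not the paper's): the specific energy imparted z is a random variable on a probability space describing one irradiation, and absorbed dose is the small-mass limit of its expectation:

```latex
% z: specific energy imparted to a mass m contained in a volume around a
% point p, defined on a probability space (\Omega, \mathcal{F}, P) that
% describes a single irradiation experiment
z = \frac{\varepsilon}{m}, \qquad
\bar{z}(m, p) = \mathbb{E}[z], \qquad
D(p) = \lim_{m \to 0} \bar{z}(m, p).
```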
Properties of field functionals and characterization of local functionals
NASA Astrophysics Data System (ADS)
Brouder, Christian; Dang, Nguyen Viet; Laurent-Gengoux, Camille; Rejzner, Kasia
2018-02-01
Functionals (i.e., functions of functions) are widely used in quantum field theory and solid-state physics. In this paper, functionals are given a rigorous mathematical framework and their main properties are described. The choice of the proper space of test functions (smooth functions) and of the relevant concept of differential (Bastiani differential) are discussed. The relation between the multiple derivatives of a functional and the corresponding distributions is described in detail. It is proved that, in a neighborhood of every test function, the support of a smooth functional is uniformly compactly supported and the order of the corresponding distribution is uniformly bounded. Relying on a recent work by Dabrowski, several spaces of functionals are furnished with a complete and nuclear topology. In view of physical applications, it is shown that most formal manipulations can be given a rigorous meaning. A new concept of local functionals is proposed and two characterizations of them are given: the first one uses the additivity (or Hammerstein) property, the second one is a variant of Peetre's theorem. Finally, the first step of a cohomological approach to quantum field theory is carried out by proving a global Poincaré lemma and defining multi-vector fields and graded functionals within our framework.
NASA Astrophysics Data System (ADS)
Bagayoko, Diola
In 2014, 50 years following the introduction of density functional theory (DFT), a rigorous understanding of it was published [AIP Advances, 4, 127104 (2014)]. This understanding included necessary steps ab initio electronic structure calculations have to take if their results are to possess the full physical content of DFT. These steps guarantee the fulfillment of conditions of validity of DFT; not surprisingly, they have led to accurate descriptions of several dozens of semiconductors, from first principle, without invoking derivative discontinuity or self-interaction correction. This presentation shows the mathematically and physically rigorous understanding of the relativistic extension of DFT by Rajagopal and Callaway [Phys. Rev. B 7, 1912 (1973)]. As in the non-relativistic case, the attainment of the absolute minima of the occupied energies is a necessary condition for the corresponding current density to be that of the ground state of the system and for computational results to agree with corresponding, experimental ones. Acknowledgments: This work was funded in part by the US National Science Foundation [NSF, Award Nos. EPS-1003897, NSF (2010-2015)-RII-SUBR, and HRD-1002541], the US Department of Energy, National Nuclear Security Administration (NNSA, Award No. DE-NA0002630), LaSPACE, and LONI-SUBR.
Improving the ideal and human observer consistency: a demonstration of principles
NASA Astrophysics Data System (ADS)
He, Xin
2017-03-01
In addition to being rigorous and realistic, the usefulness of the ideal observer computational tools may also depend on whether they serve the empirical purpose for which they are created, e.g. to identify desirable imaging systems to be used by human observers. In SPIE 10136-35, I have shown that the ideal and the human observers do not necessarily prefer the same system as the optimal or better one, due to their different objectives in both hardware and software optimization. In this work, I attempt to identify a necessary but insufficient condition under which the human and the ideal observer may rank systems consistently. If corroborated, such a condition allows a numerical test of the ideal/human consistency without routine human observer studies. I reproduced data from Abbey et al., JOSA 2001, to verify the proposed condition (this is not a rigorous falsification study, due to the lack of specificity in the proposed conjecture; a roadmap for more falsifiable conditions is proposed). Via this work, I would like to emphasize the reality of practical decision making in addition to the realism in mathematical modeling. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA.)
Nonperturbative Time Dependent Solution of a Simple Ionization Model
NASA Astrophysics Data System (ADS)
Costin, Ovidiu; Costin, Rodica D.; Lebowitz, Joel L.
2018-02-01
We present a non-perturbative solution of the Schrödinger equation $i\psi_t(t,x) = -\psi_{xx}(t,x) - 2(1+\alpha\sin\omega t)\,\delta(x)\,\psi(t,x)$, written in units in which $\hbar = 2m = 1$, describing the ionization of a model atom by a parametric oscillating potential. This model has been studied extensively by many authors, including us. It has surprisingly many features in common with those observed in the ionization of real atoms and emission by solids subjected to microwave or laser radiation. Here we use new mathematical methods to go beyond previous investigations and to provide a complete and rigorous analysis of this system. We obtain the Borel-resummed transseries (multi-instanton expansion), valid for all values of α, ω, t, for the wave function, ionization probability, and energy distribution of the emitted electrons, the latter not studied previously for this model. We show that for large t and small α the energy distribution has sharp peaks at energies which are multiples of ω, corresponding to photon capture. We obtain small-α expansions that converge for all t, unlike those of standard perturbation theory. We expect that our analysis will serve as a basis for treating more realistic systems, revealing a form of universality in different emission processes.
Dynamics of tissue topology during cancer invasion and metastasis
NASA Astrophysics Data System (ADS)
Munn, Lance L.
2013-12-01
During tumor progression, cancer cells mix with other cell populations including epithelial and endothelial cells. Although potentially important clinically as well as for our understanding of basic tumor biology, the process of mixing is largely a mystery. Furthermore, there is no rigorous, analytical measure available for quantifying the mixing of compartments within a tumor. I present here a mathematical model of tissue repair and tumor growth based on collective cell migration that simulates a wide range of observed tumor behaviors with correct tissue compartmentalization and connectivity. The resulting dynamics are analyzed in light of the Euler characteristic number (χ), which describes key topological features such as fragmentation, looping and cavities. The analysis predicts a number of regimes in which the cancer cells can encapsulate normal tissue, form a co-interdigitating mass, or become fragmented and encapsulated by endothelial or epithelial structures. Key processes that affect the topological changes are the production of provisional matrix in the tumor, and the migration of endothelial or epithelial cells on this matrix. Furthermore, the simulations predict that topological changes during tumor invasion into blood vessels may contribute to metastasis. The topological analysis outlined here could be useful for tumor diagnosis or monitoring response to therapy and would only require high resolution, 3D image data to resolve and track the various cell compartments.
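For a 2-D binary mask of a tissue compartment, χ can be computed as the number of connected components minus the number of holes. A minimal sketch with scipy follows (connectivity conventions matter for rigorous results and would need care in real use; the example shape is invented):

```python
import numpy as np
from scipy import ndimage

def euler_characteristic_2d(mask):
    """chi = (# connected components) - (# holes) for a 2-D binary mask.
    Note: rigorous use requires complementary connectivities for foreground
    and background; the scipy defaults here are adequate for a sketch."""
    mask = np.asarray(mask, bool)
    _, n_components = ndimage.label(mask)
    padded = np.pad(~mask, 1, constant_values=True)  # make 'outside' one region
    _, n_background = ndimage.label(padded)
    return n_components - (n_background - 1)

yy, xx = np.mgrid[-20:21, -20:21]
annulus = (xx**2 + yy**2 <= 15**2) & (xx**2 + yy**2 >= 8**2)
print(euler_characteristic_2d(annulus))   # 1 component - 1 hole -> chi = 0
```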
NASA Astrophysics Data System (ADS)
Cocco, Alex P.; Nakajo, Arata; Chiu, Wilson K. S.
2017-12-01
We present a fully analytical, heuristic model - the "Analytical Transport Network Model" - for steady-state, diffusive, potential flow through a 3-D network. Employing a combination of graph theory, linear algebra, and geometry, the model explicitly relates a microstructural network's topology and the morphology of its channels to an effective material transport coefficient (a general term meant to encompass, e.g., conductivity or diffusion coefficient). The model's transport coefficient predictions agree well with those from electrochemical fin (ECF) theory and finite element analysis (FEA), but are computed 0.5-1.5 and 5-6 orders of magnitude faster, respectively. In addition, the theory explicitly relates a number of morphological and topological parameters directly to the transport coefficient, whereby the distributions that characterize the structure are readily available for further analysis. Furthermore, ATN's explicit development provides insight into the nature of the tortuosity factor and offers the potential to apply theory from network science and to consider the optimization of a network's effective resistance in a mathematically rigorous manner. The ATN model's speed and relative ease-of-use offer the potential to aid in accelerating the design (with respect to transport), and thus reducing the cost, of energy materials.
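The core computation such a network model rests on can be sketched with a weighted graph Laplacian: fix a unit potential drop across inlet and outlet nodes, solve for the interior potentials, and read off the net flux. The 4-node network below is a hypothetical example, not a structure from the paper:

```python
import numpy as np

# Hypothetical network: edges as (node_i, node_j, channel conductance).
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5), (2, 3, 1.5), (1, 3, 1.0)]
n = 4
L = np.zeros((n, n))                       # weighted graph Laplacian
for i, j, g in edges:
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

source, sink = 0, 3                        # unit potential drop across the net
free = [k for k in range(n) if k not in (source, sink)]
phi = np.zeros(n)
phi[source] = 1.0
b = -L[np.ix_(free, [source])].ravel()*phi[source]  # move Dirichlet terms right
phi[free] = np.linalg.solve(L[np.ix_(free, free)], b)

G_eff = L[source] @ phi          # net flux out of the source for unit drop
print("effective transport coefficient (conductance):", G_eff)
```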
Towards rigorous analysis of the Levitov-Mirlin-Evers recursion
NASA Astrophysics Data System (ADS)
Fyodorov, Y. V.; Kupiainen, A.; Webb, C.
2016-12-01
This paper aims to develop a rigorous asymptotic analysis of an approximate renormalization group recursion for the inverse participation ratios P_q of critical power-law random band matrices. The recursion goes back to the work by Mirlin and Evers (2000 Phys. Rev. B 62 7920) and earlier works by Levitov (1990 Phys. Rev. Lett. 64 547; 1999 Ann. Phys. 8 697-706) and aims to describe the ensuing multifractality of the eigenvectors of such matrices. We point out both similarities and dissimilarities between the LME recursion and those appearing in the theory of multiplicative cascades and branching random walks, and show that the methods developed in those fields can be adapted to the present case. In particular, the LME recursion is shown to exhibit a phase transition, which we expect is a freezing transition, where the role of temperature is played by the exponent q. However, the LME recursion has features that make its rigorous analysis considerably harder, and we point out several open problems for further study.
Indirect Lightning Safety Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, M M; Perkins, M P; Brown, C G
2009-04-24
Lightning is a safety hazard for high-explosives (HE) and their detonators. In the last couple of decades, DOE facilities where HE is manufactured, assembled, stored or disassembled have been turned into Faraday-cage structures to protect against lightning. However, the current flowing from the strike point through the rebar of the building to the earth will create electromagnetic (EM) fields in the facility. Like an antenna in a radio receiver, the metal cable of a detonator can extract energy from the EM fields. This coupling of radio frequency (RF) energy to explosive components is an indirect effect of lightning currents [1]. The most sensitive component is typically a detonator, and the safety concern is initiation of the HE. If HE is adequately separated from the walls of the facility that is struck by lightning, electrons discharged from the clouds should not reach the HE components. The methodology for estimating the risk from indirect lightning effects will be presented. It has two parts: a method to determine the likelihood of a detonation given a lightning strike, and an approach for estimating the likelihood of a strike. The results of these two parts produce an overall probability of a detonation. The probability calculations are complex for five reasons: (1) lightning strikes are stochastic and relatively rare, (2) the quality of the Faraday cage varies from one facility to the next, (3) RF coupling is inherently a complex subject, (4) performance data for abnormally stressed detonators is scarce, and (5) the arc plasma physics is not well understood. Therefore, a rigorous mathematical analysis would be too complex. Instead, our methodology takes a more practical approach, combining rigorous mathematical calculations where possible with empirical data when necessary. Where there is uncertainty, we compensate with conservative approximations. The goal is to determine a conservative estimate of the odds of a detonation. In Section 2, the methodology will be explained. This report will discuss topics at a high level. The reasons for selecting an approach will be justified. For those interested in technical details, references will be provided. In Section 3, a simple hypothetical example will be given to reinforce the concepts. While the methodology will touch on all the items shown in Figure 1, the focus of this report is the indirect effect, i.e., determining the odds of a detonation from given EM fields. Professor Martin Uman from the University of Florida has been characterizing and defining extreme lightning strikes. Using Professor Uman's research, Dr. Kimball Merewether at Sandia National Laboratory in Albuquerque calculated the EM fields inside a Faraday-cage type facility when the facility is struck by lightning. In the following examples we will use Dr. Merewether's calculations from a poor-quality Faraday cage as the input for the RF coupling analysis.
ERIC Educational Resources Information Center
Peng, Peng; Namkung, Jessica; Barnes, Marcia; Sun, Congying
2016-01-01
The purpose of this meta-analysis was to determine the relation between mathematics and working memory (WM) and to identify possible moderators of this relation including domains of WM, types of mathematics skills, and sample type. A meta-analysis of 110 studies with 829 effect sizes found a significant medium correlation of mathematics and WM, r…
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Bedner, Mary
2017-06-01
Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
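A minimal GUM-Supplement-1-style Monte Carlo sketch for a ratio-type measurement equation (the equation form, all numbers, and the uncertainty magnitudes are hypothetical, not the paper's LC-UV model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# hypothetical measurement equation for a calibrated quantitation:
#   c_sample = c_cal * (A_sample / A_cal) * f_prep
c_cal    = rng.normal(25.0, 0.20, n)    # calibrant concentration (ug/g)
A_sample = rng.normal(1.052, 0.006, n)  # sample peak area (repeatability)
A_cal    = rng.normal(1.000, 0.006, n)  # calibrant peak area
f_prep   = rng.normal(1.000, 0.003, n)  # gravimetric preparation factor

c_sample = c_cal*(A_sample/A_cal)*f_prep
print(np.mean(c_sample), np.std(c_sample, ddof=1))   # estimate and std. unc.
print(np.percentile(c_sample, [2.5, 97.5]))          # 95 % coverage interval
```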
Mathematical Creativity and Mathematical Aptitude: A Cross-Lagged Panel Analysis
ERIC Educational Resources Information Center
Tyagi, Tarun Kumar
2016-01-01
Cross-lagged panel correlation (CLPC) analysis has been used to identify causal relationships between mathematical creativity and mathematical aptitude. For this study, 480 8th standard students were selected through a random cluster technique from 9 intermediate and high schools of Varanasi, India. Mathematical creativity and mathematical…
Property-Based Software Engineering Measurement
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.
1997-01-01
Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It does not intend to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.
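A toy illustration of the property-based style (the measure and the properties checked here are simplified stand-ins for the paper's formal definitions):

```python
# "Size" modeled as a count over a module's elements; the framework's
# properties (null value, additivity over disjoint modules) become checks.
def size(module):                  # a module modeled as a set of elements
    return len(module)

m1, m2 = {"f", "g"}, {"h"}
assert size(set()) == 0                               # null value
assert size(m1 | m2) == size(m1) + size(m2)           # additivity (disjoint)
assert size(m1 | {"h", "k"}) <= size(m1) + size({"h", "k"})  # subadditivity
```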
Designing Studies to Test Causal Questions About Early Math: The Development of Making Pre-K Count.
Mattera, Shira K; Morris, Pamela A; Jacob, Robin; Maier, Michelle; Rojas, Natalia
2017-01-01
A growing literature has demonstrated that early math skills are associated with later outcomes for children. This research has generated interest in improving children's early math competencies as a pathway to improved outcomes for children in elementary school. The Making Pre-K Count study was designed to test the effects of an early math intervention for preschoolers. Its design was unique in that, in addition to causally testing the effects of early math skills, it also allowed for the examination of a number of additional questions about scale-up, the influence of contextual factors and the counterfactual environment, the mechanism of long-term fade-out, and the role of measurement in early childhood intervention findings. This chapter outlines some of the design considerations and decisions put in place to create a rigorous test of the causal effects of early math skills that is also able to answer these questions in early childhood mathematics and intervention. The study serves as a potential model for how to advance science in the fields of preschool intervention and early mathematics. © 2017 Elsevier Inc. All rights reserved.
Maximizing kinetic energy transfer in one-dimensional many-body collisions
NASA Astrophysics Data System (ADS)
Ricardo, Bernard; Lee, Paul
2015-03-01
The main problem discussed in this paper involves a simple one-dimensional two-body collision, in which the problem can be extended into a chain of one-dimensional many-body collisions. The result is quite interesting, as it provides us with a thorough mathematical understanding that will help in designing a chain system for maximum energy transfer for a range of collision types. In this paper, we will show that there is a way to improve the kinetic energy transfer between two masses, and the idea can be applied recursively. However, this method only works for a certain range of collision types, which is indicated by a range of coefficients of restitution. Although the concepts of momentum, elastic and inelastic collision, and Newton's laws are taught in junior college physics, especially in Singapore schools, students at this level are not expected to be able to do this problem quantitatively, as it requires rigorous mathematics, including calculus. Nevertheless, this paper provides nice analytical steps that address some common misconceptions in students' way of thinking about one-dimensional collisions.
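A minimal sketch of the chain idea for a target initially at rest, using standard restitution kinematics: in the elastic case, inserting an intermediate mass at the geometric mean of the outer masses increases the kinetic energy delivered to the last mass (the masses below are illustrative):

```python
import numpy as np

def collide(m_a, u_a, m_b, e):
    """1-D collision: m_a moving at u_a strikes m_b at rest; e = restitution."""
    v_a = (m_a - e*m_b)/(m_a + m_b) * u_a
    v_b = m_a*(1.0 + e)/(m_a + m_b) * u_a
    return v_a, v_b

m1, m3, u1, e = 9.0, 1.0, 1.0, 1.0        # elastic case (e = 1)
ke_in = 0.5*m1*u1**2

_, v3 = collide(m1, u1, m3, e)            # direct two-body hit
print("direct transfer:", 0.5*m3*v3**2/ke_in)   # 4*m1*m3/(m1+m3)**2 = 0.36

m2 = np.sqrt(m1*m3)                       # intermediate mass = geometric mean
_, v2 = collide(m1, u1, m2, e)
_, v3 = collide(m2, v2, m3, e)
print("chain transfer: ", 0.5*m3*v3**2/ke_in)   # 0.5625 > 0.36
```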
Overview of Aro Program on Network Science for Human Decision Making
NASA Astrophysics Data System (ADS)
West, Bruce J.
This program brings together researchers from disparate disciplines to work on a complex research problem that defies confinement within any single discipline. Consequently, not only are new and rewarding solutions sought and obtained for a problem of importance to society and the Army, that is, the human dimension of complex networks, but, in addition, collaborations are established that would not otherwise have formed given the traditional disciplinary compartmentalization of research. This program develops the basic research foundation of a science of networks supporting the linkage between the physical and human (cognitive and social) domains as they relate to human decision making. The strategy is to extend the recent methods of non-equilibrium statistical physics to non-stationary, renewal stochastic processes that appear to be characteristic of the interactions among nodes in complex networks. We also pursue understanding of the phenomenon of synchronization, whose mathematical formulation has recently provided insight into how complex networks reach accommodation and cooperation. The theoretical analyses of complex networks, although mathematically rigorous, often elude analytic solutions and require computer simulation and computation to analyze the underlying dynamic process.
NASA Astrophysics Data System (ADS)
Danon, Leon; Brooks-Pollock, Ellen
2016-09-01
In their review, Chowell et al. consider the ability of mathematical models to predict early epidemic growth [1]. In particular, they question the central prediction of classical differential equation models that the number of cases grows exponentially during the early stages of an epidemic. Using examples including HIV and Ebola, they argue that classical models fail to capture key qualitative features of early growth and describe a selection of models that do capture non-exponential epidemic growth. An implication of this failure is that predictions may be inaccurate and unusable, highlighting the need for care when embarking upon modelling using classical methodology. There remains a lack of understanding of the mechanisms driving many observed epidemic patterns; we argue that data science should form a fundamental component of epidemic modelling, providing a rigorous methodology for data-driven approaches, rather than trying to enforce established frameworks. The need for refinement of classical models provides a strong argument for the use of data science, to identify qualitative characteristics and pinpoint the mechanisms responsible for the observed epidemic patterns.
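One concrete alternative to early exponential growth discussed in this literature is the generalized-growth model dC/dt = r C^p with 0 < p < 1 giving sub-exponential growth; a minimal numerical comparison (parameters illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# generalized-growth model dC/dt = r * C**p; p = 1 recovers exponential
# growth, 0 < p < 1 gives sub-exponential early epidemic growth
r, C0 = 0.5, 1.0
t = np.linspace(0.0, 20.0, 200)
for p in (1.0, 0.8, 0.5):
    sol = solve_ivp(lambda t, C: r*C**p, (t[0], t[-1]), [C0], t_eval=t)
    print(f"p = {p}: final cumulative cases = {sol.y[0, -1]:.1f}")
```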
A contribution to calculation of the mathematical pendulum
NASA Astrophysics Data System (ADS)
Anakhaev, K. N.
2014-11-01
In this work, continuing rigorous solutions in mathematical pendulum theory, calculated dependences are obtained in elementary functions (with construction of plots) for a complete description of the oscillatory motion of the pendulum, including determination of its parameters such as the oscillation period, deviation angles, time of motion, angular velocity and acceleration, and strains in the pendulum rod (maximum, minimum, zero, and gravitational). The results of calculations according to the proposed dependences coincide closely (to within ≪1%) with the exact tabulated data at individual points. The conditions under which the angular velocity, angular acceleration, and strain in the pendulum rod reach their limiting values (up to 5m_1g for the rod strain) are shown. It is revealed that the angular acceleration does not depend on the pendulum oscillation amplitude; the time instant at which the rod strain equals the gravitational force of the pendulum, R_s = m_1g, is likewise independent of the amplitude. The dependences presented in this work can also be invoked for describing oscillations of a physical pendulum, a mass on a spring, an electric circuit, etc.
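For reference, the exact period of the mathematical pendulum is expressible through the complete elliptic integral of the first kind, T = 4·sqrt(L/g)·K(m) with parameter m = sin²(θ₀/2); a minimal check against the small-angle value (the length and amplitude are illustrative):

```python
import numpy as np
from scipy.special import ellipk

g, L = 9.81, 1.0
theta0 = np.deg2rad(120.0)            # release amplitude
m = np.sin(0.5*theta0)**2             # elliptic parameter m = k**2
T_exact = 4.0*np.sqrt(L/g)*ellipk(m)  # exact period via complete elliptic K
T_small = 2.0*np.pi*np.sqrt(L/g)      # small-angle approximation
print(T_exact, T_small, T_exact/T_small)   # large-amplitude period is longer
```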
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1997-01-01
Melt convection, along with species diffusion and segregation on the solidification interface, is a primary factor responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore, the influence of microgravity level, double diffusion, and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of a binary alloy is based on the general equations of heat and mass transfer in a two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numerical problem. A rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal and solutal convection and a double-diffusive source of mass transport.
NASA Technical Reports Server (NTRS)
Selcuk, M. K.
1977-01-01
The usefulness of vee-trough concentrators in improving the efficiency and reducing the cost of collectors assembled from evacuated tube receivers was studied in the vee-trough/vacuum tube collector (VTVTC) project. The VTVTC was analyzed rigorously, and various mathematical models were developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak, and aluminized FEP Teflon. Tests were run at temperatures ranging from 95 to 180 C. Vee-trough collector efficiencies of 35 to 40% were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Predicted daily useful heat collection and efficiency values are presented for a year's operation at temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with a complete economic evaluation.
On Mathematical Modeling Of Quantum Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achuthan, P.; Dept. of Mathematics, Indian Institute of Technology, Madras, 600 036; Narayanankutty, Karuppath
2009-07-02
The world of physical systems at the most fundamental levels is replete with efficient, interesting models possessing sufficient ability to represent reality to a considerable extent. So far, quantum mechanics (QM), forming the basis of almost all natural phenomena, has demonstrated beyond doubt its intrinsic ingenuity, capacity and robustness to stand rigorous tests of validity from and through appropriate calculations and experiments. No serious failures of quantum mechanical predictions have been reported yet. However, Albert Einstein, the greatest theoretical physicist of the twentieth century, and some other eminent men of science have stated firmly and categorically that QM, though successful by and large, is incomplete. There are classical and quantum reality models, including those based on consciousness. Relativistic quantum theoretical approaches to clearly understand the ultimate nature of matter as well as radiation still have much to accomplish in order to qualify for a final theory of everything (TOE). Mathematical models of better, suitable character as well as strength are needed to achieve satisfactory explanation of natural processes and phenomena. We, in this paper, discuss some of these matters with certain apt illustrations as well.
ERIC Educational Resources Information Center
Baki, Mujgan
2015-01-01
This study aims to explore the role of lesson analysis in the development of mathematical knowledge for teaching. For this purpose, a graduate course based on lesson analysis was designed for novice mathematics teachers. Throughout the course the teachers watched videos of group-mates and discussed the issues they identified in terms of…
Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry.
Morse, Janice M
2015-09-01
Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s when they replaced terminology for achieving rigor, reliability, validity, and generalizability with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement, persistent observation, and thick, rich description; inter-rater reliability, negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation. © The Author(s) 2015.
Threshold for extinction and survival in stochastic tumor immune system
NASA Astrophysics Data System (ADS)
Li, Dongxi; Cheng, Fangjuan
2017-10-01
This paper mainly investigates the stochastic character of tumor growth and extinction in the presence of the immune response of a host organism. First, the mathematical model describing the interaction and competition between the tumor cells and the immune system is established based on Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence, and stochastic persistence of tumor cells are derived by rigorous theoretical proofs. Finally, stochastic simulations are performed to substantiate and illustrate the conclusions we have derived. The modeling results will be beneficial for understanding the concept of immunoediting and developing cancer immunotherapy. Besides, our simple theoretical model can help to obtain new insight into the complexity of tumor growth.
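A minimal Euler-Maruyama sketch of a tumor-growth SDE with a Michaelis-Menten-style immune killing term (the functional form and all parameters are illustrative, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(2)
# illustrative tumor-immune SDE with Michaelis-Menten style killing:
#   dx = [r*x*(1 - x/K) - beta*x/(1 + x)] dt + sigma*x dW   (hypothetical form)
r, K, beta, sigma = 1.0, 10.0, 1.5, 0.4
dt, n, x = 1e-3, 50_000, 1.0
traj = np.empty(n)
for i in range(n):                         # Euler-Maruyama time stepping
    dW = np.sqrt(dt)*rng.standard_normal()
    x += (r*x*(1.0 - x/K) - beta*x/(1.0 + x))*dt + sigma*x*dW
    x = max(x, 0.0)                        # tumor burden stays non-negative
    traj[i] = x
print("final tumor burden:", x)            # persistence vs extinction depends
                                           # on the noise/kill thresholds
```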
Production of Entanglement Entropy by Decoherence
NASA Astrophysics Data System (ADS)
Merkli, M.; Berman, G. P.; Sayre, R. T.; Wang, X.; Nesterov, A. I.
We examine the dynamics of entanglement entropy of all parts in an open system consisting of a two-level dimer interacting with an environment of oscillators. The dimer-environment interaction is almost energy conserving. We find the precise link between decoherence and production of entanglement entropy. We show that not all environment oscillators carry significant entanglement entropy and we identify the oscillator frequency regions which contribute to the production of entanglement entropy. For energy conserving dimer-environment interactions the models are explicitly solvable and our results hold for all dimer-environment coupling strengths. We carry out a mathematically rigorous perturbation theory around the energy conserving situation in the presence of small non-energy conserving interactions.
On the regularization of impact without collision: the Painlevé paradox and compliance
NASA Astrophysics Data System (ADS)
Hogan, S. J.; Kristiansen, K. Uldall
2017-06-01
We consider the problem of a rigid body, subject to a unilateral constraint, in the presence of Coulomb friction. We regularize the problem by assuming compliance (with both stiffness and damping) at the point of contact, for a general class of normal reaction forces. Using a rigorous mathematical approach, we recover impact without collision (IWC) in both the inconsistent and the indeterminate Painlevé paradoxes, in the latter case giving an exact formula for conditions that separate IWC and lift-off. We solve the problem for arbitrary values of the compliance damping and give explicit asymptotic expressions in the limiting cases of small and large damping, all for a large class of rigid bodies.
NASA Astrophysics Data System (ADS)
2014-04-01
The 2014 International Conference on Science & Engineering in Mathematics, Chemistry and Physics (ScieTech 2014) was held at the Media Hotel, Jakarta, Indonesia, on 13-14 January 2014. The ScieTech 2014 conference aimed to bring together researchers, engineers and scientists in the domains of interest from around the world. ScieTech 2014 placed emphasis on promoting interaction between the theoretical, experimental, and applied communities, so that a high-level exchange is achieved in new and emerging areas within mathematics, chemistry and physics. We would like to express our sincere gratitude to all in the Technical Program Committee who have reviewed the papers and developed a very interesting conference program, as well as to the invited and plenary speakers. This year, we received 187 papers, and after rigorous review, 50 papers were accepted. The participants come from 16 countries. There were five parallel sessions and four keynote speakers. It is an honour to present this volume of Journal of Physics: Conference Series (JPCS), and we deeply thank the authors for their enthusiastic and high-grade contributions. Finally, we would like to thank the conference chairmen, the members of the steering committee, the organizing committee, the organizing secretariat, and the conference sponsors whose financial support allowed the success of ScieTech 2014. The Editors of the ScieTech 2014 Proceedings: Dr. Ford Lumban Gaol, Dr. Benfano Soewito, Dr. P.N. Gajjar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimus, Paul William
This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.
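A minimal well-mixed-cell sketch of the competing rate processes named above (the rate constants are hypothetical, and the report's actual formulation is a full transport model, not this ODE system):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants: fast sorption onto colloids, slow desorption,
# slow filtration of mobile colloids and of the mass attached to them.
k_ads, k_des, k_filt = 2.0, 0.01, 0.005

def rhs(t, y):
    c_aq, c_col, s_col = y            # aqueous RN, colloid-bound RN, colloids
    sorb = k_ads*c_aq*s_col - k_des*c_col
    return [-sorb,                    # aqueous radionuclide
            sorb - k_filt*c_col,      # colloid-bound radionuclide
            -k_filt*s_col]            # mobile colloid population

sol = solve_ivp(rhs, (0.0, 1000.0), [1.0, 0.0, 1.0])
print(sol.y[:, -1])   # most mass becomes colloid-bound, then filters out slowly
```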
ERIC Educational Resources Information Center
Nunez, Rafael E.
This paper gives a brief introduction to a discipline called the cognitive science of mathematics. The theoretical background of the arguments is based on embodied cognition and findings in cognitive linguistics. It discusses Mathematical Idea Analysis, a set of techniques for studying implicit structures in mathematics. Particular attention is…
Mathematical modeling in realistic mathematics education
NASA Astrophysics Data System (ADS)
Riyanto, B.; Zulkardi; Putri, R. I. I.; Darmawijoyo
2017-12-01
The purpose of this paper is to produce mathematical modelling in Realistic Mathematics Education for junior high school. This study used development research consisting of three stages, namely analysis, design, and evaluation. The success criteria of this study were obtained in the form of a local instruction theory for school mathematical modelling learning which was valid and practical for students. The data were analyzed using the descriptive analysis method as follows: (1) walk-through analysis based on the expert comments in the expert review, to obtain a Hypothetical Learning Trajectory for valid mathematical modelling learning; (2) analyzing the results of the review in one-to-one and small group sessions to gain practicality. Based on the expert validation and students' opinions and answers, the obtained mathematical modelling problem in Realistic Mathematics Education was valid and practical.
Impact of Using History of Mathematics on Students' Mathematics Attitude: A Meta-Analysis Study
ERIC Educational Resources Information Center
Bütüner, Suphi Onder
2015-01-01
The main objective of the present study is to assemble the big picture from studies on the influence of using the history of mathematics on students' attitudes toward mathematics. Six studies yielding a total of 14 effect sizes that comply with the coding protocol and report the statistical values necessary for meta-analysis are combined via the meta-analysis method…
NASA Astrophysics Data System (ADS)
Warsito; Darhim; Herman, T.
2018-01-01
This study aims to determine the differences in the improvement of mathematical representation ability between progressive mathematization based on realistic mathematics education (PMR-MP) and a conventional learning approach (PB). The research method is a quasi-experiment with a non-equivalent control group design. The study population is all students of class VIII SMPN 2 Tangerang, consisting of 6 classes, from which two classes were sampled with a purposive sampling technique. The experimental class is treated with PMR-MP while the control class is treated with PB. The instrument used is a test of mathematical representation ability. Data analysis was done by t-test, ANOVA test, post hoc test, and descriptive analysis. From the analysis it can be concluded that: 1) there are differences in mathematical representation ability improvement between students treated with PMR-MP and those treated with PB; 2) there is no interaction between learning approach (PMR-MP, PB) and prior mathematics knowledge (PAM) on the improvement of students' mathematical representation; 3) the mathematical representation improvement of students with high PAM is better than that of students with medium and low PAM. Thus, based on the process of mathematization, it is very important that the learning direction of PMR-MP emphasizes the process of building mathematics through a mathematical model.
A Multifaceted Mathematical Approach for Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, F.; Anitescu, M.; Bell, J.
2012-03-07
Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significantmore » impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.« less
ERIC Educational Resources Information Center
Conn, Katharine
2014-01-01
In the last three decades, there has been a large increase in the number of rigorous experimental and quasi-experimental evaluations of education programs in developing countries. These impact evaluations have taken place all over the globe, including a large number in Sub-Saharan Africa (SSA). The fact that the developing world is socially and…
NASA Technical Reports Server (NTRS)
Cole, Bjorn; Chung, Seung H.
2012-01-01
One of the challenges of systems engineering is in working multidisciplinary problems in a cohesive manner. When planning analysis of these problems, systems engineers must trade off time and cost against analysis quality and quantity. The quality is associated with the fidelity of the multidisciplinary models and the quantity is associated with the design space that can be analyzed. The tradeoff is due to the resource-intensive process of creating a cohesive multidisciplinary system model and analysis. Furthermore, reuse or extension of the models used in one stage of a product life cycle for another is a major challenge. Recent developments have enabled a much less resource-intensive and more rigorous approach than handwritten translation scripts or codes for multidisciplinary models and their analyses. The key is to work from a core system model defined in a MOF-based language such as SysML and to leverage the emerging tool ecosystem, such as Query-View-Transform (QVT), from the OMG community. SysML was designed to model multidisciplinary systems and analyses. The QVT standard was designed to transform SysML models. The Europa Habitability Mission (EHM) team has begun to exploit these capabilities. In one case, a Matlab/Simulink model is generated on the fly from a system description for power analysis written in SysML. In a more general case, a symbolic mathematical framework (supported by Wolfram Mathematica) is coordinated by data objects transformed from the system model, enabling extremely flexible and powerful tradespace exploration and analytical investigations of expected system performance.
Space radiator simulation manual for computer code
NASA Technical Reports Server (NTRS)
Black, W. Z.; Wulff, W.
1972-01-01
A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis, which treats a symmetrical fin panel, and an approximate analysis that predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady-state performance, including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady-state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described. Several examples of program output are contained in this section. Sample output includes the radiator performance during ascent, reentry and orbit.
Optimal design and evaluation of a color separation grating using rigorous coupled wave analysis
NASA Astrophysics Data System (ADS)
Nagayoshi, Mayumi; Oka, Keiko; Klaus, Werner; Komai, Yuki; Kodate, Kashiko
2006-02-01
In recent years, technology that separates white light into the three primary colors of red (R), green (G) and blue (B), adjusts each optical intensity, and recombines R, G and B to display various colors has become essential to the development and spread of color visual equipment. Various color separation devices have been proposed and put to practical use in color visual equipment. We have focused on a small and light grating-type device, which offers the possibility of reduced cost and large-scale production and generates only the three primary colors of R, G and B, so that a high saturation level can be obtained. To perform a rigorous analysis and design of color separation gratings, our group has developed a program based on the Rigorous Coupled Wave Analysis (RCWA). We then calculated the parameters to obtain a diffraction efficiency higher than 70% and a color gamut of about 70%. We report on the design, fabrication and evaluation of color separation gratings that have been optimized for fabrication by laser drawing.
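While RCWA is required to compute diffraction efficiencies rigorously, the diffraction directions alone follow from the grating equation sin θ_m = mλ/d at normal incidence; a minimal sketch for the three primaries (the grating period is an assumed value, not that of the fabricated device):

```python
import numpy as np

d = 3.0e-6                                   # assumed grating period (m)
for name, lam in (("B", 450e-9), ("G", 550e-9), ("R", 650e-9)):
    m = 1                                    # first diffraction order
    theta = np.degrees(np.arcsin(m*lam/d))   # grating equation, normal incidence
    print(name, round(theta, 2), "deg")      # R, G, B leave at distinct angles
```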
Interacting partially directed self-avoiding walk: a probabilistic perspective
NASA Astrophysics Data System (ADS)
Carmona, Philippe; Nguyen, Gia Bao; Pétrélis, Nicolas; Torri, Niccolò
2018-04-01
We review some recent results obtained in the framework of the 2D interacting self-avoiding walk (ISAW). After a brief presentation of the rigorous results that have been obtained so far for ISAW we focus on the interacting partially directed self-avoiding walk (IPDSAW), a model introduced in Zwanzig and Lauritzen (1968 J. Chem. Phys. 48 3351) to decrease the mathematical complexity of ISAW. In the first part of the paper, we discuss how a new probabilistic approach based on a random walk representation (see Nguyen and Pétrélis (2013 J. Stat. Phys. 151 1099–120)) allowed for a sharp determination of the asymptotics of the free energy close to criticality (see Carmona et al (2016 Ann. Probab. 44 3234–90)). Some scaling limits of IPDSAW were conjectured in the physics literature (see e.g. Brak et al (1993 Phys. Rev. E 48 2386–96)). We discuss here the fact that all limits are now proven rigorously, i.e. for the extended regime in Carmona and Pétrélis (2016 Electron. J. Probab. 21 1–52), for the collapsed regime in Carmona et al (2016 Ann. Probab. 44 3234–90) and at criticality in Carmona and Pétrélis (2017b arxiv:1709.06448). The second part of the paper starts with the description of four open questions related to physically relevant extensions of IPDSAW. Among such extensions is the interacting prudent self-avoiding walk (IPSAW) whose configurations are those of the 2D prudent walk. We discuss the main results obtained in Pétrélis and Torri (2016 Ann. Inst. Henri Poincaré D) about IPSAW and in particular the fact that its collapse transition is proven to exist rigorously.
Hazardous Asteroids: Cloaking STEM Skills Training within an Attention-Grabbing Science/Math Course
NASA Astrophysics Data System (ADS)
Ryan, Eileen V.; Ryan, William H.
2015-11-01
A graduate-level course was designed and taught during the summer months from 2009 - 2015 in order to contribute to the training and professional development of K-12 teachers residing in the Southwest. The teachers were seeking Master's degrees via the New Mexico Institute of Mining and Technology's (NMT's) Masters of Science Teaching (MST) program, and the course satisfied a science or math requirement. The MST program provides opportunities for in-service teachers to enhance their content backgrounds in science, mathematics, engineering, and technology (SMET). The ultimate goal is to assist teachers in gaining knowledge that has direct application in the classroom. The engaging topic area of near-Earth object (NEO) characterization studies was used to create a fun and exciting framework for mastering basic skills and concepts in physics and astronomy. The objective was to offer a class with the appropriate scientific rigor (with an emphasis on mathematics) in a non-threatening format. The course, entitled "Hazardous Asteroids", incorporates a basic planetary physics curriculum with challenging laboratories that place a heavy emphasis on math and technology. Since the authors run a NASA-funded NEO research and follow-up program, use of the Magdalena Ridge Observatory's 2.4-meter telescope is also folded into the course, so participants can take and reduce their own data on a near-Earth asteroid. In exit assessments, the participants have given the course excellent ratings for design and implementation, and the overall degree of satisfaction was high. This validates that a well-constructed (and rigorous) course can be effective in reaching teachers in need of basic skills refreshment. Many of the teachers taking the course were employed in school districts serving at-risk or under-prepared students, and the course helped provide them with the confidence vital to developing new strategies for successful teaching.
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
The design of schemes for the correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, frequently admits a multitude of solutions, usually appearing in disconnected regions of parameter space that cannot be characterized directly by analytical means. In such cases an abundance of optimization runs is typically carried out, each of which finds a local minimum depending on the chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain parameters and local searches over the others. In a formal sense, however, this is a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that achieve imaging; to find ahead of time all settings that achieve a particular tune; or to find all ways of adjusting nonlinear parameters to correct high-order aberrations. These tasks are easily phrased as global optimization problems; but while the mathematical formulation is often straightforward, it has long been believed to be of limited practical value, since the resulting optimization problem usually cannot be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective function with the rigorous, iterative elimination of regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the differential algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators over large regions. As a consequence, the progressive pruning of the allowed search space as the optimization proceeds is particularly effective. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of the optimization. The methods are particularly powerful when executed in interplay with genetic optimizers that generate their new populations within the currently active unpruned space; their current best guess provides rigorous upper bounds on the minima, which can then beneficially be used for better pruning. Examples of the method and its performance are presented, including the determination of all operating points of desired tunes or chromaticities in storage ring lattices.
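A toy sketch of the branch-and-bound principle described above, not the authors' differential-algebraic/Taylor-model method: here the rigorous lower bound on each box comes from an assumed valid Lipschitz constant, and boxes whose bound exceeds the best known upper bound are pruned.

```python
import math

# Toy 1D branch-and-bound illustration. A rigorous lower bound on f over
# [a, b] follows from a Lipschitz constant LIP (assumed valid):
#   f(x) >= f(mid) - LIP * (b - a) / 2   for all x in [a, b].
# Boxes whose lower bound exceeds the best verified upper bound are pruned;
# the survivors are split, as in the branch-and-bound approach above.

def f(x):
    return math.sin(3.0 * x) + 0.5 * x * x   # illustrative objective

LIP = 7.0   # |f'(x)| <= 3 + |x| <= 7 on [-4, 4] (assumed valid bound)

def branch_and_bound(a, b, tol=1e-6):
    upper = f(0.5 * (a + b))            # best verified upper bound so far
    boxes = [(a, b)]
    while boxes:
        a0, b0 = boxes.pop()
        mid = 0.5 * (a0 + b0)
        upper = min(upper, f(mid))      # midpoint sample tightens the bound
        lower = f(mid) - LIP * (b0 - a0) / 2.0
        if lower > upper or (b0 - a0) < tol:
            continue                    # prune: box cannot beat current best
        boxes += [(a0, mid), (mid, b0)]  # otherwise split the box
    return upper

print("verified global minimum ~", branch_and_bound(-4.0, 4.0))
```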
NASA Astrophysics Data System (ADS)
Wardono; Mariani, S.
2018-03-01
Indonesia, as a developing country, will be highly competitive in the future if its students attain strong mathematics literacy. In reality, however, Indonesian students' year-to-year rankings on the PISA mathematics literacy scale remain poor. This research is motivated by the importance, and the currently low level, of mathematics literacy. The purpose of this study is to: (1) analyze the effectiveness of PMRI learning with the media Schoology; (2) describe students' mathematics literacy under PMRI learning with Schoology in terms of the seven components of mathematics literacy, namely communication, mathematizing, representation, reasoning, devising strategies, using symbols, and using mathematical tools. The study used a sequential mixed-methods design. Data were collected through observation, interviews, tests, and documentation, and analyzed using a proportion test, a comparison test, and descriptive analysis. Based on the data analysis it can be concluded that: (1) PMRI learning with Schoology effectively improves mathematics literacy: classical completeness was achieved, students' mathematics literacy under PMRI learning with Schoology is higher than under expository learning, and mathematics literacy increased by 30% under PMRI learning with Schoology. (2) High-ability students attain excellent mathematics literacy skills and can work with broad thinking and appropriate resolution strategies; students of good ability can summarize information, present problem-solving processes, and interpret solutions; low-ability students reach a sufficient level of mathematics literacy and can solve problems in a simple way.
Mapping Mathematics in Classroom Discourse
ERIC Educational Resources Information Center
Herbel-Eisenmann, Beth A.; Otten, Samuel
2011-01-01
This article offers a particular analytic method from systemic functional linguistics, "thematic analysis," which reveals the mathematical meaning potentials construed in discourse. Addressing concerns that discourse analysis is too often content-free, thematic analysis provides a way to represent semantic structures of mathematical content,…
New Trends in Mathematics Teaching, Volume III.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Paris (France).
Each of the ten chapters in this volume is intended to present an objective analysis of the trends of some important subtopic in mathematics education and each includes a bibliography for fuller study. The chapters cover primary school mathematics, algebra, geometry, probability and statistics, analysis, logic, applications of mathematics, methods…
Financial Mathematical Tasks in a Middle School Mathematics Textbook Series: A Content Analysis
ERIC Educational Resources Information Center
Hamburg, Maryanna P.
2009-01-01
This content analysis examined the distribution of financial mathematical tasks (FMTs), mathematical tasks that contain financial terminology and require financially related solutions, across the National Standards in K-12 Personal Finance Education categories (JumpStart Coalition, 2007), the thinking skills as identified by "A Taxonomy for…
ERIC Educational Resources Information Center
Pierce, Robyn; Stacey, Kaye; Wander, Roger; Ball, Lynda
2011-01-01
Current technologies incorporating sophisticated mathematical analysis software (calculation, graphing, dynamic geometry, tables, and more) provide easy access to multiple representations of mathematical problems. Realising the affordances of such technology for students' learning requires carefully designed lessons. This paper reports on design…
Trend Analysis on Mathematics Achievements: A Comparative Study Using TIMSS Data
ERIC Educational Resources Information Center
Ker, H. W.
2013-01-01
Research has addressed the importance of mathematics education in preparing students to enter the scientific and technological workforce. This paper utilized Trends in International Mathematics and Science Study (TIMSS) 2011 data to conduct a global comparative analysis of mathematics performance at varied International Benchmark levels. The…
Applied Mathematics for agronomical engineers in Spain at UPM
NASA Astrophysics Data System (ADS)
Anton, J. M.; Grau, J. B.; Tarquis, A. M.; Fabregat, J.; Sanchez, M. E.
2009-04-01
Mathematics, whether created or discovered, is a global human conceptual endowment, containing large systems of knowledge and varied skills for using definite parts of them, whether in creation or discovery or in applications, e.g. in physics, and notably in engineering practice. As it reached higher intellectual levels in the 19th century, agronomical science and practice in Spain was organised mainly into agronomical engineering schools and institutes, together with technician schools and various lower-level centres; these have evolved with progress and are currently changing substantially to the EEES scheme (Bologna process). They work along different lines, each of which needs some basis or skills from mathematics. Entry into such careers, which have varied curricula, involves only some mathematics, and the number of credits for mathematics is constrained because time is needed for other foundational sciences such as applied chemistry, biology, ecology, and soil sciences; but some mathematical foundations and skills are needed, together with physics, at least for electricity, machines, construction, and economics at introductory levels, and also for statistics, here considered part of applied mathematics. The ways of teaching mathematical foundations and skills are particular, differing from the practical approaches needed, e.g., for soil sciences; they demand special effort from students and special controls or exams that guide much of the learning. Mathematics has a very large accepted content that uses a mostly standard logic, and it is remarkably stable and international, with rather similar notation and expressions across the main languages. For engineering, the logical basis itself is often not taught, but its use is transferred, especially for calculus, which requires both adapted, somewhat simplified schemas and the learning of a specific skill in using them, and likewise for linear algebra. The basic forms of differential calculus in several variables are an example, perhaps since Leibniz, of the difficulty of balancing rigor and usefulness within limited teaching hours. In part, engineers use mathematics through manuals and now through computer packages, either general (MAPLE, MATLAB, perhaps MATHCAD, etc.) or specific, e.g. for statistics, topography, structural design, hydraulics, or particular machines; the details of the algorithms are mostly hidden, but the engineer must keep in mind the basic mathematical schemas that justify what he is constructing with these tools, the PC being also used for organisation and drawing. Engineers must adapt to the evolution of these packages and computers, which change and improve greatly in five or ten years, quicker than the specific engineering environment, and a clear idea of the much more stable mathematical structures behind them provides a solid mental ground for that. An initiation to the use of computers, with the mathematical structures behind them in view, is therefore necessary, to be continued in professional life; a specific updating of mathematical knowledge is often necessary for new applications.
Ratio Analysis: Where Investments Meet Mathematics.
ERIC Educational Resources Information Center
Barton, Susan D.; Woodbury, Denise
2002-01-01
Discusses ratio analysis by which investments may be evaluated. Requires the use of fundamental mathematics, problem solving, and a comparison of the mathematical results within the framework of industry. (Author/NB)
Rigorous Electromagnetic Analysis of the Focusing Action of Refractive Cylindrical Microlens
NASA Astrophysics Data System (ADS)
Liu, Juan; Gu, Ben-Yuan; Dong, Bi-Zhen; Yang, Guo-Zhen
The focusing action of a refractive cylindrical microlens is investigated based on rigorous electromagnetic theory using the boundary element method. The focusing behaviors of refractive microlenses with continuous and multilevel surface envelopes are characterized in terms of total electric-field patterns, electric-field intensity distributions on the focal plane, and diffraction efficiencies at the focal spots. The results are also compared with those obtained from Kirchhoff's scalar diffraction theory. The numerical and graphical results may provide useful information for the analysis and design of refractive elements in micro-optics.
The Mathematics of the Return from Home Ownership.
ERIC Educational Resources Information Center
Vest, Floyd; Griffith, Reynolds
1991-01-01
A mathematical model or project analysis that calculates the financial return from home ownership is described. This analysis illustrates topics such as compound interest, annuities, amortization schedules, internal rate of return, and other elements of school and college mathematics up through numerical analysis. (KR)
Teaching where there are no schools.
Helwig, J F; Friend, J
1985-01-01
An experimental project designed to investigate the feasibility of using radio as a medium of instruction in teaching elementary school mathematics, the Radio Mathematics Project, located in Nicaragua from mid-1974 to early 1979, developed mathematics lessons for the first 4 years of elementary school. These lessons -- daily radio broadcasts plus postbroadcast activities conducted by the classroom teachers -- proved successful in improving the students' mathematics achievement. The cost of wide-scale implementation of the materials was estimated to be well within Nicaragua's budget. The success of the project can be attributed largely to the innovative style of the broadcast lessons, a style characterized as "interactive" in recognition of its mimicry of a conversation between students and teacher. The interactive lesson style is easily adapted to the teaching of many other subjects and has been used, with minor modifications, to teach English as a 2nd language and initial reading. In all these settings, the lessons provide daily instruction and are intended to replace rather than supplement existing instruction in the subject matter. Each lesson consists of a broadcast portion and postbroadcast activities; the broadcast portion carries the major burden of instruction. The interactive radio lessons are designed to provide direct instruction to the students. The radio teachers explain concepts, provide examples, and guide the students in the completion of exercises. The students listening to the radio lessons are expected to participate actively. After every student response, the radio gives the correct response so that the children can immediately compare their own responses with the correct one. A segmented structure is characteristic of the radio lessons used in all three projects mentioned. Radio Math's lessons are reinforced by rigorous research to validate their teaching effectiveness.
Quantitative Analysis of the Interdisciplinarity of Applied Mathematics.
Xie, Zheng; Duan, Xiaojun; Ouyang, Zhenzheng; Zhang, Pengyuan
2015-01-01
The increasing use of mathematical techniques in scientific research leads to the interdisciplinarity of applied mathematics. This viewpoint is validated quantitatively here by statistical and network analysis of the corpus PNAS 1999-2013. A network describing the interdisciplinary relationships between disciplines in a panoramic view is built based on the corpus. Specific network indicators show the hub role of applied mathematics in interdisciplinary research. The statistical analysis of the corpus content finds that algorithms, a primary topic of applied mathematics, positively correlates, increasingly co-occurs, and has an equilibrium relationship in the long run with certain typical research paradigms and methodologies. The finding can be understood as an intrinsic cause of the interdisciplinarity of applied mathematics.
Rigorous diffraction analysis using geometrical theory of diffraction for future mask technology
NASA Astrophysics Data System (ADS)
Chua, Gek S.; Tay, Cho J.; Quan, Chenggen; Lin, Qunying
2004-05-01
Advanced lithographic techniques such as phase shift masks (PSMs) and optical proximity correction (OPC) result in more complex mask designs and technology. In contrast to binary masks, which have only transparent and nontransparent regions, phase shift masks also incorporate transparent features with a different optical thickness and hence a modified phase of the transmitted light. PSMs are well known to show prominent diffraction effects, which cannot be described under the assumption of an infinitely thin mask (the Kirchhoff approach) used in many commercial photolithography simulators. Correct prediction of sidelobe printability, process windows, and linearity of OPC masks requires the application of rigorous diffraction theory. The problem of aerial image intensity imbalance through focus with alternating phase shift masks (altPSMs) is analyzed, and the results are compared between a finite-difference time-domain (FDTD) algorithm (TEMPEST) and the geometrical theory of diffraction (GTD). Using GTD, with the solutions to the canonical problems, we obtain a relationship between an edge on the mask and the disturbance in image space. The main interest is to develop useful formulations that can be readily applied to solve rigorous diffraction problems for future mask technology. The analysis of rigorous diffraction effects for altPSMs using the GTD approach is discussed.
ERIC Educational Resources Information Center
Jitendra, Asha K.; Lein, Amy E.; Im, Soo-hyun; Alghamdi, Ahmed A.; Hefte, Scott B.; Mouanoutoua, John
2018-01-01
This meta-analysis is the first to provide a quantitative synthesis of empirical evaluations of mathematical intervention programs implemented in secondary schools for students with learning disabilities and mathematics difficulties. Included studies used a treatment-control group design. A total of 19 experimental and quasi-experimental studies…
ERIC Educational Resources Information Center
Edge, D. Michael
2011-01-01
This non-experimental study attempted to determine how the different prescribed mathematics tracks offered at a comprehensive technical high school influenced the mathematics performance of low-achieving students on standardized assessments of mathematics achievement. The goal was to provide an analysis of any statistically significant differences…
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
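A minimal sketch of the PCA step of such spatiotemporal filtering, on synthetic data (this is not the authors' code; the common-mode signal and noise levels below are invented): the leading principal component of the mean-removed station matrix is taken as the common mode and its contribution is removed from every station.

```python
import numpy as np

# PCA-based common-mode filtering sketch: rows are daily epochs, columns
# are stations. The rank-1 (or rank-k) reconstruction from the leading
# singular vectors estimates the spatially coherent common mode error.

rng = np.random.default_rng(0)
days, stations = 500, 12
common = 0.003 * np.sin(2 * np.pi * np.arange(days) / 365.25)  # synthetic CME
X = common[:, None] * rng.uniform(0.8, 1.2, stations)          # spatial response
X += 0.001 * rng.standard_normal((days, stations))             # local noise

Xc = X - X.mean(axis=0)                  # remove per-station mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                    # number of common modes to remove
cme = (U[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k common-mode estimate
filtered = Xc - cme

print("RMS before: %.4f m, after: %.4f m" % (Xc.std(), filtered.std()))
```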
Asymptotic Time Decay in Quantum Physics: a Selective Review and Some New Results
NASA Astrophysics Data System (ADS)
Marchetti, Domingos H. U.; Wreszinski, Walter F.
2013-05-01
The decay in time of various quantities (return or survival probability, correlation functions) is the basis of a multitude of important and interesting phenomena in quantum physics, ranging from spectral properties, resonances, return and approach to equilibrium, to dynamical stability properties, irreversibility, and the "arrow of time", treated in [Asymptotic Time Decay in Quantum Physics (World Scientific, 2013)]. In this review, we study several types of decay — decay in the average, decay in the L^p sense, and pointwise decay — of the Fourier-Stieltjes transform of a measure, usually identified with the spectral measure, which appear naturally in different mathematical and physical settings. In particular, decay in the L^p sense is related both to pointwise decay and to decay in the average and, from a physical standpoint, relates to a rigorous form of the time-energy uncertainty relation. Both decay on the average and in the L^p sense are related to spectral properties, in particular absolute continuity of the spectral measure. The study of pointwise decay for singular continuous measures (Rajchman measures) provides a bridge between ergodic theory, number theory and analysis, including the method of stationary phase. The theory is illustrated by some new results in the theory of sparse models.
On the convergence of the coupled-wave approach for lamellar diffraction gratings
NASA Technical Reports Server (NTRS)
Li, Lifeng; Haggans, Charles W.
1992-01-01
Among the many existing rigorous methods for analyzing the diffraction of electromagnetic waves by diffraction gratings, the coupled-wave approach stands out for its versatility and simplicity. It can be applied to volume gratings and surface relief gratings, and its numerical implementation is much simpler than others. In addition, its predictions have been experimentally validated in several cases. These facts explain the popularity of the coupled-wave approach among many optical engineers in the field of diffractive optics. However, a comprehensive analysis of the convergence of the model predictions has never been presented, although several authors have recently reported convergence difficulties with the model when it is used for metallic gratings in TM polarization. Herein, three points are made: (1) in the TM case, the coupled-wave approach converges much more slowly than the modal approach of Botten et al.; (2) the slow convergence is caused by the use of Fourier expansions for the permittivity and the fields in the grating region; and (3) it is manifested in the slow convergence of the eigenvalues and the associated modal fields. The reader is assumed to be familiar with the mathematical formulations of the coupled-wave approach and the modal approach.
Optical methods in nano-biotechnology
NASA Astrophysics Data System (ADS)
Bruno, Luigi; Gentile, Francesco
2016-01-01
A scientific theory is not a mathematical paradigm. It is a framework that explains natural facts and may predict future observations. A scientific theory may be modified, improved, or rejected. Science is less a collection of theories and more the process that leads to rejecting some hypotheses, maintaining or accepting somewhat universal beliefs (or disbeliefs), and creating new models that may improve or replace preceding theories. This process cannot be entrusted to common sense, personal experience, or anecdote (many precepts in physics are indeed counterintuitive), but must rest on rigorous design, observation, and rational, statistical analysis of new experiments. Scientific results are always provisional: scientists rarely proclaim an absolute truth or absolute certainty. Uncertainty is inevitable at the frontiers of knowledge. Notably, this is the definition of the scientific method, and what we have written above echoes the opinion of Marcia McNutt, Editor-in-Chief of Science: 'Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not'. A new discovery, a new theory that explains that discovery, and the scientific method itself need observations and verification, and are susceptible to falsification.
Artificial grammar learning meets formal language theory: an overview
Fitch, W. Tecumseh; Friederici, Angela D.
2012-01-01
Formal language theory (FLT), part of the broader mathematical theory of computation, provides a systematic terminology and set of conventions for describing rules and the structures they generate, along with a rich body of discoveries and theorems concerning generative rule systems. Despite its name, FLT is not limited to human language, but is equally applicable to computer programs, music, visual patterns, animal vocalizations, RNA structure and even dance. In the last decade, this theory has been profitably used to frame hypotheses and to design brain imaging and animal-learning experiments, mostly using the ‘artificial grammar-learning’ paradigm. We offer a brief, non-technical introduction to FLT and then a more detailed analysis of empirical research based on this theory. We suggest that progress has been hampered by a pervasive conflation of distinct issues, including hierarchy, dependency, complexity and recursion. We offer clarifications of several relevant hypotheses and the experimental designs necessary to test them. We finally review the recent brain imaging literature, using formal languages, identifying areas of convergence and outstanding debates. We conclude that FLT has much to offer scientists who are interested in rigorous empirical investigations of human cognition from a neuroscientific and comparative perspective. PMID:22688631
Astashkin, Andrei V; Feng, Changjian
2015-11-12
The production of nitric oxide by the nitric oxide synthase (NOS) enzyme depends on the interdomain electron transfer (IET) between the flavin mononucleotide (FMN) and heme domains. Although the rate of this IET has been measured by laser flash photolysis (LFP) for various NOS proteins, no rigorous analysis of the relevant kinetic equations has been performed so far. In this work, we provide an analytical solution of the kinetic equations underlying the LFP approach. The derived expressions reveal that the bulk IET rate is significantly affected by the conformational dynamics that determine the formation and dissociation rates of the docking complex between the FMN and heme domains. We show that, in order to informatively study the electron transfer across the NOS enzyme, LFP should be used in combination with other spectroscopic methods that can directly probe the docking equilibrium and the conformational change rate constants. The implications of the obtained analytical expressions for the interpretation of LFP results from various native and modified NOS proteins are discussed. The mathematical formulas derived in this work should also be applicable to interpreting the IET kinetics in other modular redox enzymes.
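A hedged sketch of the kind of docking-gated kinetic scheme the abstract describes, with invented rate constants: a three-state linear model A (FMN reduced, undocked) ⇌ B (docked) → C (electron transferred), whose slow eigenvalue plays the role of the observed bulk IET rate and visibly depends on the conformational rates, not on the intrinsic transfer rate alone.

```python
import numpy as np

# Hedged three-state sketch (rate constants are illustrative, not from the
# paper): A <-> B -> C with docking rates kf, kr and intrinsic IET rate ket:
#   dA/dt = -kf*A + kr*B,   dB/dt = kf*A - (kr + ket)*B.
# The observed bulk rate in a flash-photolysis trace is the slow eigenvalue
# of the rate matrix, which is gated by the conformational dynamics.

def observed_rate(kf, kr, ket):
    M = np.array([[-kf,         kr        ],
                  [ kf, -(kr + ket)]])
    return min(abs(np.linalg.eigvals(M)))   # slowest decay mode dominates

ket = 1000.0   # intrinsic IET rate in the docked complex, s^-1 (assumed)
for kf, kr in [(50.0, 500.0), (500.0, 500.0), (5000.0, 500.0)]:
    r = observed_rate(kf, kr, ket)
    print(f"kf={kf:7.1f}  kr={kr:6.1f}  ->  observed rate {r:8.1f} s^-1")
```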
An application of Bayesian statistics to the extragalactic Cepheid distance scale
NASA Astrophysics Data System (ADS)
Barnes, Thomas G., III; Moffett, Thomas J.; Jefferys, W. H.; Forestell, Amy D.
2004-05-01
We have determined quasi-geometric distances to the Magellanic Clouds, M31, and M33. Our analysis uses a Bayesian statistical method to provide mathematically rigorous and objective solutions for individual Cepheids. We combine the individual distances with a hierarchical Bayesian model to determine the galactic distances. We obtain distance moduli of 18.87 ± 0.07 mag (LMC, 12 stars), 19.14 ± 0.10 mag (SMC, 8 stars), 23.83 ± 0.35 mag (M33, 1 star), and 25.2 ± 0.6 mag (M31, 1 star), all uncorrected for metallicity. The M31 and M33 distances are very preliminary. If the PL relations of the LMC, SMC, and Galaxy are identical, our results exclude the metallicity effect in the V, (V - R) surface brightness method predicted by Hindsley and Bell (1989) at the 5σ level. Alternately, if Hindsley and Bell's prediction is adopted as true, we find a metallicity effect intrinsic to the Cepheid PL relation requiring a correction Δ(V - M_V) = (0.36 ± 0.07)Δ[A/H] mag. The latter has the opposite sign to other observational estimates of the Cepheid metallicity effect.
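As a sketch of the combination step: in the normal-normal special case with a flat prior on the galaxy modulus and known per-star variances, the posterior reduces to a precision-weighted mean. The paper's hierarchical model is richer (it can, e.g., infer intrinsic star-to-star scatter); all numbers below are invented.

```python
import numpy as np

# Precision-weighted combination of per-star distance moduli mu_i +/- sigma_i
# under a flat prior and known variances: the posterior for the galaxy
# modulus is Gaussian with the weighted mean and combined precision.

mu = np.array([18.80, 18.95, 18.72, 18.91, 18.88])   # per-star moduli (made up)
sigma = np.array([0.15, 0.20, 0.18, 0.25, 0.12])     # per-star errors (made up)

w = 1.0 / sigma**2
mu_gal = np.sum(w * mu) / np.sum(w)        # posterior mean
sig_gal = np.sqrt(1.0 / np.sum(w))         # posterior standard deviation
print(f"galaxy distance modulus = {mu_gal:.2f} +/- {sig_gal:.2f} mag")
```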
Spatial evolutionary games with weak selection
Nanda, Mridu; Durrett, Richard
2017-01-01
Recently, a rigorous mathematical theory has been developed for spatial games with weak selection, i.e., when the payoff differences between strategies are small. The key to the analysis is that when space and time are suitably rescaled, the spatial model converges to the solution of a partial differential equation (PDE). This approach can be used to analyze all 2×2 games, but there are a number of 3×3 games for which the behavior of the limiting PDE is not known. In this paper, we give rules for determining the behavior of a large class of 3×3 games and check their validity using simulation. In words, the effect of space is equivalent to making changes in the payoff matrix, and once this is done, the behavior of the spatial game can be predicted from the behavior of the replicator equation for the modified game. We say predicted here because in some cases the behavior of the spatial game is different from that of the replicator equation for the modified game. For example, if a rock–paper–scissors game has a replicator equation that spirals out to the boundary, space stabilizes the system and produces an equilibrium. PMID:28533405
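A short sketch of the replicator equation referred to above, for an illustrative rock-paper-scissors payoff matrix (these are not the paper's modified spatial payoffs; the paper's point is that the spatial game follows the replicator dynamics of a *modified* matrix, sometimes with a different outcome).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Replicator dynamics x_i' = x_i * ((A x)_i - x . A x) for a generalized
# rock-paper-scissors payoff matrix (illustrative entries). Depending on
# the win/loss payoffs the trajectory spirals toward or away from the
# interior equilibrium (1/3, 1/3, 1/3).

A = np.array([[ 0.0, -1.0,  2.0],
              [ 2.0,  0.0, -1.0],
              [-1.0,  2.0,  0.0]])

def replicator(t, x):
    fitness = A @ x
    return x * (fitness - x @ fitness)

x0 = np.array([0.5, 0.3, 0.2])
sol = solve_ivp(replicator, (0.0, 60.0), x0, rtol=1e-9)
print("final strategy frequencies:", np.round(sol.y[:, -1], 4))
```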
Adaptive tracking control for active suspension systems with non-ideal actuators
NASA Astrophysics Data System (ADS)
Pan, Huihui; Sun, Weichao; Jing, Xingjian; Gao, Huijun; Yao, Jianyong
2017-07-01
As a critical component of transportation vehicles, active suspension systems are instrumental in the improvement of ride comfort and maneuverability. However, practical active suspensions commonly suffer from parameter uncertainties (e.g., the variations of payload mass and suspension component parameters), external disturbances and especially the unknown non-ideal actuators (i.e., dead-zone and hysteresis nonlinearities), which always significantly deteriorate the control performance in practice. To overcome these issues, this paper synthesizes an adaptive tracking control strategy for vehicle suspension systems to achieve suspension performance improvements. The proposed control algorithm is formulated by developing a unified framework of non-ideal actuators rather than a separate way, which is a simple yet effective approach to remove the unexpected nonlinear effects. From the perspective of practical implementation, the advantages of the presented controller for active suspensions include that the assumptions on the measurable actuator outputs, the prior knowledge of nonlinear actuator parameters and the uncertain parameters within a known compact set are not required. Furthermore, the stability of the closed-loop suspension system is theoretically guaranteed by rigorous mathematical analysis. Finally, the effectiveness of the presented adaptive control scheme is confirmed using comparative numerical simulation validations.
What can graph theory tell us about word learning and lexical retrieval?
Vitevitch, Michael S
2008-04-01
Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential than by a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing.
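A toy sketch of the graph construction described above, using letters in place of phonemes and a ten-word lexicon (the study used a large computerized lexicon of phonological forms): words are nodes, and an edge joins two words that differ by one substitution, addition, or deletion.

```python
import networkx as nx

# Build a toy "phonological" neighbor network and compute the small-world
# statistics mentioned above: clustering coefficient and average path
# length (on the largest connected component).

def one_edit_apart(a, b):
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):                                   # substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    s, l = (a, b) if len(a) < len(b) else (b, a)
    return any(l[:i] + l[i + 1:] == s for i in range(len(l)))  # deletion

lexicon = ["cat", "bat", "hat", "hot", "cot", "coat", "at", "rat", "mat", "dog"]
G = nx.Graph()
G.add_nodes_from(lexicon)
G.add_edges_from((a, b) for i, a in enumerate(lexicon)
                 for b in lexicon[i + 1:] if one_edit_apart(a, b))

giant = G.subgraph(max(nx.connected_components(G), key=len))
print("clustering coefficient:", round(nx.average_clustering(G), 3))
print("avg path length (giant component):",
      round(nx.average_shortest_path_length(giant), 3))
print("degrees:", dict(G.degree()))
```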
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm for identifying the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected in the so-called multistatic response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially established, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (transverse magnetic, TM, polarization), and so on. Hence, we perform further research to analyze the MUSIC-type imaging functional and to explain some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various numerical simulation results are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
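A hedged sketch of the MUSIC principle on synthetic far-field, full-view data for point scatterers under a Born-type model (the paper's setting, thin penetrable inhomogeneities with limited-view data, is substantially harder; all geometry below is invented): test vectors are projected onto the noise subspace of the multistatic response matrix, and the imaging functional peaks where the projection vanishes.

```python
import numpy as np

# MUSIC sketch: the multistatic response matrix for point scatterers is
# K = sum_j g(z_j) g(z_j)^T with g(z)_m = exp(i k d_m . z). The signal
# subspace spans {g(z_j)}; the functional 1/||P_noise g(z)|| peaks at the
# scatterer locations.

k = 2 * np.pi                                     # wavenumber (wavelength 1)
M = 24                                            # number of directions
t = np.linspace(0, 2 * np.pi, M, endpoint=False)
dirs = np.stack([np.cos(t), np.sin(t)], axis=1)
targets = np.array([[0.7, -0.3], [-0.5, 0.6]])    # "unknown" locations

def g(z):
    return np.exp(1j * k * dirs @ z)              # illumination/test vector

K = sum(np.outer(g(z), g(z)) for z in targets)    # Born-type MSR matrix
U, s, _ = np.linalg.svd(K)
noise = U[:, len(targets):]                       # noise subspace basis

grid = np.linspace(-1, 1, 81)
image = np.zeros((81, 81))
for i, x in enumerate(grid):
    for j, y in enumerate(grid):
        v = g(np.array([x, y])) / np.sqrt(M)
        image[j, i] = 1.0 / np.linalg.norm(noise.conj().T @ v)

peak = np.unravel_index(image.argmax(), image.shape)
print("strongest peak near:", (grid[peak[1]], grid[peak[0]]))
```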
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2017-11-01
Active cloaking in its basic form requires that the extinction cross-section (or energy efficiency) from a radiating body vanishes. In this analysis, this physical effect is demonstrated for an active cylindrically radiating acoustic source in a non-viscous fluid, undergoing periodic axisymmetric harmonic vibrations near a rigid corner (i.e., quarter-space). The rigorous multipole expansion method in cylindrical coordinates, the method of images, and the addition theorem of cylindrical wave functions are used to derive closed-form mathematical expressions for the radiating, amplification, and extinction cross-sections of the active source. Numerical computations are performed assuming monopole and dipole modal oscillations of the circular source. The results reveal some of the situations where the extinction energy efficiency factor of the active source vanishes depending on its size and location with respect to the rigid corner, thus, achieving total invisibility. Moreover, the extinction energy efficiency factor varies between positive or negative values. These effects also occur for higher-order modal oscillations of the active source. The results find potential applications in the development of acoustic cloaking devices and invisibility in underwater acoustics or other areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
This document examines the performance of a generic flat-mirror multimonochromatic imager (MMI), with special emphasis on existing instruments at NIF and Omega. We begin by deriving the standard equation for the mean number of photons detected per resolution element. The pinhole energy bandwidth is a contributing factor; this is dominated by the finite size of the source and may be considerable. The most common method for estimating the spatial resolution of such a system (quadrature addition) is, technically, mathematically invalid for this case. However, under the proper circumstances it may produce good estimates compared to a rigorous calculation based on the convolution of point-spread functions. Diffraction is an important contribution to the spatial resolution. Common approximations based on Fraunhofer (far-field) diffraction may be inappropriate and misleading, as the instrument may reside in multiple regimes depending upon its configuration or the energy of interest. It is crucial to identify the correct diffraction regime; Fraunhofer and Fresnel (near-field) diffraction profiles are substantially different, the latter being considerably wider. Finally, we combine the photonics and resolution analyses to derive an expression for the minimum signal level such that the resulting images are not dominated by photon statistics. This analysis is consistent with the observed performance of the NIF MMI.
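A sketch of the regime check the abstract insists on: the Fresnel number N_F = a²/(λz) separates near-field (Fresnel, N_F around 1 or larger) from far-field (Fraunhofer, N_F << 1) behavior for an aperture of radius a at distance z. The pinhole radius, photon energy, and distance below are illustrative assumptions, not NIF or Omega instrument values.

```python
# Fresnel-number regime check; all parameter values are illustrative.

def fresnel_number(a_m, wavelength_m, z_m):
    return a_m**2 / (wavelength_m * z_m)

E_eV = 8000.0                  # 8 keV x-rays (assumed)
lam = 1239.84e-9 / E_eV        # lambda(m) from h*c = 1239.84 eV nm
a, z = 5e-6, 0.2               # 5-um pinhole radius, 20 cm distance (assumed)

NF = fresnel_number(a, lam, z)
regime = "Fresnel (near-field)" if NF >= 1 else "Fraunhofer (far-field)"
print(f"N_F = {NF:.2f} -> {regime}")
```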
Constructing Rigorous and Broad Biosurveillance Networks for Detecting Emerging Zoonotic Outbreaks
Brown, Mac; Moore, Leslie; McMahon, Benjamin; Powell, Dennis; LaBute, Montiago; Hyman, James M.; Rivas, Ariel; Jankowski, Mark; Berendzen, Joel; Loeppky, Jason; Manore, Carrie; Fair, Jeanne
2015-01-01
Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of a pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited, and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios, and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk for infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results with a known outbreak, using spatial analysis of all the simulation runs to show that the progression matched closely the observed locations of the farms infected in the 2006-2007 epidemic. PMID:25946164
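A minimal chain-binomial stochastic SEIR sketch for a single well-mixed population, to show the basic building block; the paper's model is multi-host, multi-scale, and spatial, and all parameter values below are invented.

```python
import numpy as np

# Chain-binomial stochastic SEIR for one population with daily time steps:
# each susceptible is infected with probability 1 - exp(-beta * I / N),
# and E -> I and I -> R transitions occur with the analogous exponential
# probabilities for rates sigma and gamma.

rng = np.random.default_rng(1)

def seir_run(N=10_000, I0=5, beta=0.6, sigma=0.5, gamma=0.25, days=120):
    S, E, I, R = N - I0, 0, I0, 0
    history = []
    for _ in range(days):
        new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
        new_I = rng.binomial(E, 1.0 - np.exp(-sigma))
        new_R = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        history.append((S, E, I, R))
    return history

print("final S, E, I, R:", seir_run()[-1])
```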
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
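A simplified sketch of the "tightest prediction containing all observations" idea: here a fixed-width band around a polynomial mean, solved as a linear program (the actual RPM formulations prescribe parameter statistics and are richer; the data below are synthetic).

```python
import numpy as np
from scipy.optimize import linprog

# Tightest fixed-width band: minimize w subject to |y_i - p(x_i)| <= w,
# a linear program in the polynomial coefficients c and the half-width w.

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = 1.0 + 0.5 * x - 0.8 * x**2 + 0.1 * rng.standard_normal(40)

deg = 2
V = np.vander(x, deg + 1)                      # design matrix
n, m = V.shape
c_obj = np.r_[np.zeros(m), 1.0]                # objective: minimize w only
A_ub = np.block([[ V, -np.ones((n, 1))],       #  V c - w <=  y
                 [-V, -np.ones((n, 1))]])      # -V c - w <= -y
b_ub = np.r_[y, -y]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)])

coeffs, w = res.x[:m], res.x[m]
print("band half-width w =", round(w, 4), "coeffs =", np.round(coeffs, 3))
```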
An Ecological Analysis of Mathematics Teachers' Noticing
ERIC Educational Resources Information Center
Jazby, Dan
2016-01-01
Most studies which investigate mathematics teacher noticing cast perception into a passive role. This study develops an ecological analysis of mathematics teachers' noticing in order to investigate how teachers actively look for information in classroom environments. This method of analysis is applied to data collected as an experienced primary…
Closed-Loop Control of Complex Networks: A Trade-Off between Time and Energy
NASA Astrophysics Data System (ADS)
Sun, Yong-Zheng; Leng, Si-Yang; Lai, Ying-Cheng; Grebogi, Celso; Lin, Wei
2017-11-01
Controlling complex nonlinear networks is largely an unsolved problem at present. Existing works focus either on open-loop control strategies and their energy consumption or on closed-loop control schemes with an infinite-time duration. We articulate a finite-time, closed-loop controller with an eye toward the physical and mathematical underpinnings of the trade-off between the control time and energy as well as their dependence on the network parameters and structure. The closed-loop controller is tested on a large number of real systems including stem cell differentiation, food webs, random ecosystems, and spiking neuronal networks. Our results represent a step forward in developing a rigorous and general framework to control nonlinear dynamical networks with a complex topology.
Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz
NASA Astrophysics Data System (ADS)
Nikitenko, Ya.
2016-11-01
Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities for studying the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are presented. A rigorous mathematical approach for neutrino direction studies has been developed, and exact expressions for the precision of the simple mean estimator of the neutrino direction have been obtained for normal and exponential distributions, both for a finite sample and in the limiting case of many events.
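A sketch of the simple mean estimator mentioned above on synthetic events, with an invented isotropic smearing model rather than Double Chooz specifics, showing the roughly 1/√N improvement of the angular error with sample size.

```python
import numpy as np

# Each event yields a unit vector (e.g. a neutron-positron displacement)
# equal to the true source direction plus strong isotropic smearing; the
# normalized sample mean estimates the direction. The smearing magnitude
# is an illustrative assumption.

rng = np.random.default_rng(3)
true_dir = np.array([1.0, 0.0, 0.0])

def angular_error_deg(N, smear=2.0, trials=400):
    errs = []
    for _ in range(trials):
        v = true_dir + smear * rng.standard_normal((N, 3))  # smeared events
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        mean = v.mean(axis=0)
        mean /= np.linalg.norm(mean)
        errs.append(np.degrees(np.arccos(np.clip(mean @ true_dir, -1, 1))))
    return np.mean(errs)

for N in (100, 400, 1600):
    print(f"N={N:5d}: mean angular error ~ {angular_error_deg(N):.1f} deg")
```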
Scheeline, Alexander
2017-10-01
Designing a spectrometer requires knowledge of the problem to be solved, the molecules whose properties will contribute to a solution of that problem and skill in many subfields of science and engineering. A seemingly simple problem, design of an ultraviolet, visible, and near-infrared spectrometer, is used to show the reasoning behind the trade-offs in instrument design. Rather than reporting a fully optimized instrument, the Yin and Yang of design choices, leading to decisions about financial cost, materials choice, resolution, throughput, aperture, and layout are described. To limit scope, aspects such as grating blaze, electronics design, and light sources are not presented. The review illustrates the mixture of mathematical rigor, rule of thumb, esthetics, and availability of components that contribute to the art of spectrometer design.
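Two of the quantitative trade-offs alluded to above can be sketched in a few lines, assuming illustrative grating and optic parameters (this is not a recommended design): the theoretical resolving power R = mN of a grating and its reciprocal linear dispersion d·cos β/(m f) at the focal plane.

```python
import math

# Back-of-envelope spectrometer numbers; every parameter value below is
# an illustrative assumption, not a design recommendation.

m = 1                      # diffraction order
grooves_per_mm = 600.0
beam_width_mm = 25.0       # illuminated width of the grating (assumed)
f_mm = 300.0               # focal length of the focusing optic (assumed)
beta = math.radians(20.0)  # diffraction angle at the detector (assumed)

N = grooves_per_mm * beam_width_mm          # illuminated grooves
d_nm = 1e6 / grooves_per_mm                 # groove spacing in nm
R = m * N                                   # resolving power R = m * N
recip_disp = d_nm * math.cos(beta) / (m * f_mm)   # nm per mm at focal plane

print(f"resolving power R = {R:.0f} (dlambda = {500.0 / R:.4f} nm at 500 nm)")
print(f"reciprocal dispersion = {recip_disp:.3f} nm/mm")
```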
On the nonlinear stability of mKdV breathers
NASA Astrophysics Data System (ADS)
Alejo, Miguel A.; Muñoz, Claudio
2012-11-01
Breather modes of the mKdV equation on the real line are known to be elastic under collisions with other breathers and solitons. This fact indicates very strong stability properties of breathers. In this communication we describe a rigorous, mathematical proof of the stability of breathers under a class of small perturbations. Our proof involves the existence of a nonlinear equation satisfied by all breather profiles, and a new Lyapunov functional which controls the dynamics of small perturbations and instability modes. In order to construct such a functional, we work in a subspace of the energy one. However, our proof introduces new ideas in order to attack the corresponding stability problem in the energy space. Some remarks about the sine-Gordon case are also considered.
Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks
Besada, Juan A.
2017-01-01
In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157
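A hedged sketch of bias estimation against a reference trajectory with a drastically simplified measurement model (a constant plus range-proportional range bias and a constant azimuth bias; the paper's complete model is far richer, and all numbers below are invented).

```python
import numpy as np

# Least-squares estimation of parametrized biases from differences between
# measured and reference (e.g. ADS-B) range/azimuth values.

rng = np.random.default_rng(4)
rho = rng.uniform(20e3, 150e3, 200)             # true ranges, m
theta = rng.uniform(0, 2 * np.pi, 200)          # true azimuths, rad

# Simulated measurements: 120 m constant bias, 1e-4 range-proportional
# (clock-like) term, 0.2 deg azimuth bias, plus noise (all invented).
rho_meas = rho + 120.0 + 1e-4 * rho + 15.0 * rng.standard_normal(200)
theta_meas = theta + np.radians(0.2) + np.radians(0.05) * rng.standard_normal(200)

A = np.column_stack([np.ones_like(rho), rho])   # [constant, proportional]
(b0, b1), *_ = np.linalg.lstsq(A, rho_meas - rho, rcond=None)
az_bias = np.mean(theta_meas - theta)

print(f"range bias: {b0:.1f} m + {b1:.2e} * rho (truth: 120 m + 1e-4 * rho)")
print(f"azimuth bias: {np.degrees(az_bias):.3f} deg (truth: 0.200 deg)")
```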
Deformation and instability of underthrusting lithospheric plates
NASA Technical Reports Server (NTRS)
Liu, H.
1972-01-01
Models of the underthrusting lithosphere are constructed for the calculation of displacement and deflection. First, a mathematical theory is developed that rigorously demonstrates the elastic instability in the descending lithosphere. The theory states that lithospheric thrust beneath island arcs becomes unstable and suffers deflection as the compression increases. Thus, in the neighborhood of the edges where the lithospheric plate plunges into the asthenosphere and mesosphere, its shape will be contorted. Next, the lateral displacement is calculated, and it is shown that, before contortion, the plate will thicken and contract at different positions, with the variation in thickness following a parabolic profile. Finally, the depth distribution of the intermediate and deep focus earthquakes is explained in terms of plate buckling and contortion.
Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely
2015-12-01
We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.
Duality and the Knizhnik-Polyakov-Zamolodchikov relation in Liouville quantum gravity.
Duplantier, Bertrand; Sheffield, Scott
2009-04-17
We present a (mathematically rigorous) probabilistic and geometrical proof of the Knizhnik-Polyakov-Zamolodchikov relation between scaling exponents in a Euclidean planar domain D and in Liouville quantum gravity. It uses the properly regularized quantum area measure $d\mu_\gamma = \varepsilon^{\gamma^2/2} e^{\gamma h_\varepsilon(z)}\, dz$, where $dz$ is the Lebesgue measure on D and $\gamma$ is a real parameter, $0 \le \gamma < 2$.
Formal specification and mechanical verification of SIFT - A fault-tolerant flight control system
NASA Technical Reports Server (NTRS)
Melliar-Smith, P. M.; Schwartz, R. L.
1982-01-01
The paper describes the methodology being employed to demonstrate rigorously that the SIFT (software-implemented fault-tolerant) computer meets its requirements. The methodology uses a hierarchy of design specifications, expressed in the mathematical domain of multisorted first-order predicate calculus. The most abstract of these, from which almost all details of mechanization have been removed, represents the requirements on the system for reliability and intended functionality. Successive specifications in the hierarchy add design and implementation detail until the PASCAL programs implementing the SIFT executive are reached. A formal proof that a SIFT system in a 'safe' state operates correctly despite the presence of arbitrary faults has been completed all the way from the most abstract specifications to the PASCAL program.
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
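A sketch of the generic Chernoff-bound confidence limits that such rigorous fluctuation analyses invert (these are the textbook multiplicative bounds, not the paper's tightened ones): given an observation X of a sum of independent Bernoulli trials with unknown expectation μ, the bounds are the values of μ at which the corresponding tail probability drops to a chosen failure level ε.

```python
import math
from scipy.optimize import brentq

# Upper bound mu_U from the lower-tail bound P(X' <= X) <= exp(-d^2 mu / 2)
# (closed form), and lower bound mu_L from the upper-tail bound
# P(X' >= X) <= exp(-d^2 mu / (2 + d)) (solved numerically), with
# d = |X - mu| / mu. Outside [mu_L, mu_U] the observation X would occur
# with probability at most eps.

def chernoff_bounds(X, eps=1e-10):
    L = math.log(1.0 / eps)
    mu_U = X + L + math.sqrt(2.0 * X * L + L * L)   # closed-form inversion

    def upper_tail_exponent(mu):                     # root at the bound mu_L
        d = (X - mu) / mu
        return d * d * mu / (2.0 + d) - L

    mu_L = brentq(upper_tail_exponent, 1e-9, X) if X > 0 else 0.0
    return mu_L, mu_U

X = 10_000                      # e.g. observed counts in one decoy setting
lo, hi = chernoff_bounds(X)
print(f"observed {X}: mu in [{lo:.1f}, {hi:.1f}] up to failure prob ~2e-10")
```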
ERIC Educational Resources Information Center
Artzt, Alice F.; Armour-Thomas, Eleanor
The roles of cognition and metacognition were examined in the mathematical problem-solving behaviors of students as they worked in small groups. As an outcome, a framework that links the literature of cognitive science and mathematical problem solving was developed for protocol analysis of mathematical problem solving. Within this framework, each…
Mathematics Teaching Anxiety and Self-Efficacy Beliefs toward Mathematics Teaching: A Path Analysis
ERIC Educational Resources Information Center
Peker, Murat
2016-01-01
The purpose of this study was to investigate the relationship between pre-service primary school teachers' mathematics teaching anxiety and their self-efficacy beliefs toward mathematics teaching through path analysis. There were a total of 250 pre-service primary school teachers involved in this study. Of the total, 202 were female and 48 were…
How to begin a new topic in mathematics: does it matter to students' performance in mathematics?
Ma, Xin; Papanastasiou, Constantinos
2006-08-01
The authors use Canadian data from the Third International Mathematics and Science Study to examine the effects of six instructional methods that mathematics teachers use to introduce new topics on the performance of eighth-grade students in six mathematical areas (mathematics as a whole, algebra, data analysis, fraction, geometry, and measurement). Results of multilevel analysis with students nested within schools show that the instructional methods of having the teacher explain the rules and definitions and of looking at the textbook while the teacher talks about it had little instructional effect on student performance in any mathematical area. In contrast, the instructional method in which teachers try to solve an example related to the new topic was effective in promoting student performance across all mathematical areas.
Failure-Modes-And-Effects Analysis Of Software Logic
NASA Technical Reports Server (NTRS)
Garcia, Danny; Hartline, Thomas; Minor, Terry; Statum, David; Vice, David
1996-01-01
A rigorous analysis applied early in the design effort. A method of identifying potential inadequacies, and the modes and effects of the failures they cause (failure-modes-and-effects analysis, or "FMEA" for short), is devised for application to software logic.
Stories about Math: An Analysis of Students' Mathematical Autobiographies
ERIC Educational Resources Information Center
Latterell, Carmen M.; Wilson, Janelle L.
2016-01-01
This paper analyzes 16 preservice secondary mathematics education majors' mathematical autobiographies. Participants wrote about their previous experiences with mathematics. All participants discussed why they wanted to become mathematics teachers with the key factors being past experience with mathematics teachers, previous success in mathematics…
The DOZZ formula from the path integral
NASA Astrophysics Data System (ADS)
Kupiainen, Antti; Rhodes, Rémi; Vargas, Vincent
2018-05-01
We present a rigorous proof of the Dorn, Otto, Zamolodchikov, Zamolodchikov formula (the DOZZ formula) for the three-point structure constants of Liouville Conformal Field Theory (LCFT), starting from a rigorous probabilistic construction of the functional integral defining LCFT given earlier by the authors and David. A crucial ingredient in our argument is a probabilistic derivation of the reflection relation in LCFT based on a refined tail analysis of Gaussian multiplicative chaos measures.
Meta-Analysis of Mathematic Basic-Fact Fluency Interventions: A Component Analysis
ERIC Educational Resources Information Center
Codding, Robin S.; Burns, Matthew K.; Lukito, Gracia
2011-01-01
Mathematics fluency is a critical component of mathematics learning, yet few attempts have been made to synthesize this research base. Seventeen single-case design studies with 55 participants were reviewed using meta-analytic procedures. A component analysis of practice elements was conducted, and treatment intensity and feasibility were examined.…
An Analysis of Problem-Posing Tasks in Chinese and US Elementary Mathematics Textbooks
ERIC Educational Resources Information Center
Cai, Jinfa; Jiang, Chunlian
2017-01-01
This paper reports on 2 studies that examine how mathematical problem posing is integrated in Chinese and US elementary mathematics textbooks. Study 1 involved a historical analysis of the problem-posing (PP) tasks in 3 editions of the most widely used elementary mathematics textbook series published by People's Education Press in China over 3…
Assessment of Teaching Approaches in an Introductory Astronomy College Classroom
NASA Astrophysics Data System (ADS)
Alexander, William R.
In recent years, there have been calls from the astronomy education research community for the increased use of learner-centered approaches to teaching, and systematic assessments of various teaching approaches using such tools as the Astronomy Diagnostic Test 2.0 (ADT 2.0). The research presented is a response to both calls. The ADT 2.0 was used in a modified form to obtain baseline assessments of introductory college astronomy classes that were taught in a traditional, mostly didactic manner. The ADT 2.0 (modified) was administered both before and after the completion of the courses. The courses were then altered to make modest use of learner-centered lecture tutorials. The ADT 2.0 (modified) was again administered before and after completion of the modified courses. Overall, the modest learner-centered approach showed mixed statistical results, with an increase in effect size (from medium to large), but no change in normalized gain index (both were low). Additionally, a mathematically rigorous approach showed no statistically significant improvements in conceptual understanding compared with a mathematically nonrigorous approach. This study will interpret the results from a variety of perspectives. The overall implementation of the lecture tutorials and their implications for teaching will also be discussed.
NASA Astrophysics Data System (ADS)
Ipsen, Andreas; Ebbels, Timothy M. D.
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
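The classical starting point for such corrections is worth making concrete: a TDC registers at most one ion per extraction per time bin, so observed hit fractions are censored Poisson observations. The sketch below shows only that textbook baseline correction, not the detailed distributional model the authors develop; all names and numbers are illustrative.

```python
import numpy as np

def correct_tdc_saturation(hits, n_extractions):
    """Classical dead-time baseline: with at most one recorded ion per
    extraction per bin, the hit probability is p = 1 - exp(-lam), where
    lam is the true mean ion count per extraction. Invert for lam."""
    p = np.asarray(hits, dtype=float) / n_extractions
    p = np.clip(p, 0.0, 1.0 - 1e-12)   # guard against fully saturated bins
    lam = -np.log1p(-p)                # lam = -ln(1 - p)
    return lam * n_extractions         # estimated true counts per bin

# Example: 9,500 hits out of 10,000 extractions in one bin
print(correct_tdc_saturation([9500], 10000))  # ~29,957 true ions, not 9,500
```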
Fundamentals of Geophysical Fluid Dynamics
NASA Astrophysics Data System (ADS)
McWilliams, James C.
2006-07-01
Earth's atmosphere and oceans exhibit complex patterns of fluid motion over a vast range of space and time scales. These patterns combine to establish the climate in response to solar radiation that is inhomogeneously absorbed by the materials comprising air, water, and land. Spontaneous, energetic variability arises from instabilities in the planetary-scale circulations, appearing in many different forms such as waves, jets, vortices, boundary layers, and turbulence. Geophysical fluid dynamics (GFD) is the science of all these types of fluid motion. This textbook is a concise and accessible introduction to GFD for intermediate to advanced students of the physics, chemistry, and/or biology of Earth's fluid environment. The book was developed from the author's many years of teaching a first-year graduate course at the University of California, Los Angeles. Readers are expected to be familiar with physics and mathematics at the level of general dynamics (mechanics) and partial differential equations. It covers the essential GFD required for atmospheric science and oceanography courses, offering mathematically rigorous yet concise coverage of basic theory and applications to both oceans and atmospheres, written by a world expert in the field. Exercises are included, with solutions available to instructors from solutions@cambridge.org.
Mathematical Models for Controlled Drug Release Through pH-Responsive Polymeric Hydrogels.
Manga, Ramya D; Jha, Prateek K
2017-02-01
Hydrogels consisting of weakly charged acidic/basic groups are ideal candidates for carriers in oral delivery, as they swell in response to pH changes in the gastrointestinal tract, resulting in drug entrapment at low pH conditions of the stomach and drug release at high pH conditions of the intestine. We have developed 1-dimensional mathematical models to study the drug release behavior through pH-responsive hydrogels. Models are developed for 3 different cases that vary in the level of rigor, which together can be applied to predict both in vitro (drug release from carrier) and in vivo (drug concentration in the plasma) behavior of hydrogel-drug formulations. A detailed study of the effect of hydrogel and drug characteristics and physiological conditions is performed to gain a fundamental insight into the drug release behavior, which may be useful in the design of pH-responsive drug carriers. Finally, we describe a successful application of these models to predict both in vitro and in vivo behavior of docetaxel-loaded micelle in a pH-responsive hydrogel, as reported in a recent experimental study. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
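As a concrete anchor for the modeling discussion, the sketch below evaluates the classical Crank series for Fickian release from a slab, the transport backbone that pH-responsive models extend with swelling and ionization effects. It is not one of the paper's three models, and the parameter values are hypothetical.

```python
import numpy as np

def fractional_release(t, D, L, n_terms=50):
    """Fraction of drug released from a slab of thickness L (both faces
    exposed) by ordinary Fickian diffusion: the classical Crank series
    Mt/Minf = 1 - sum_n 8/((2n+1)^2 pi^2) exp(-(2n+1)^2 pi^2 D t / L^2)."""
    n = np.arange(n_terms)[:, None]
    k = (2 * n + 1) * np.pi
    series = (8.0 / k**2) * np.exp(-(k**2) * D * t / L**2)
    return 1.0 - series.sum(axis=0)

t = np.linspace(0, 3600, 5)                    # seconds
print(fractional_release(t, D=1e-10, L=1e-3))  # hypothetical D (m^2/s), L (m)
```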
Scientific approaches to science policy.
Berg, Jeremy M
2013-11-01
The development of robust science policy depends on use of the best available data, rigorous analysis, and inclusion of a wide range of input. While director of the National Institute of General Medical Sciences (NIGMS), I took advantage of available data and emerging tools to analyze training time distribution by new NIGMS grantees, the distribution of the number of publications as a function of total annual National Institutes of Health support per investigator, and the predictive value of peer-review scores on subsequent scientific productivity. Rigorous data analysis should be used to develop new reforms and initiatives that will help build a more sustainable American biomedical research enterprise.
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
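Items (1) through (3) are commonly attacked with software reliability growth models. As one hedged illustration (not necessarily a model RICIS itself employed), the sketch below fits the Goel-Okumoto model m(t) = a(1 - e^(-bt)) to a hypothetical cumulative failure log:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test log: cumulative failures observed at weekly intervals
t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
failures = np.array([12, 21, 27, 32, 35, 37, 38, 39], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, t, failures, p0=(40.0, 0.5))
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.2f}")
print(f"expected residual faults: {a - goel_okumoto(8, a, b):.1f}")
```

The estimated residual-fault count is exactly the quantity that feeds item (4), the continue-or-stop-testing decision.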
ERIC Educational Resources Information Center
Smith, Derrick W.; Smothers, Sinikka M.
2012-01-01
Introduction: The purpose of the study presented here was to determine how well tactile graphics (specifically data analysis graphs) in secondary mathematics and science braille textbooks correlated with the print graphics. Method: A content analysis was conducted on 598 separate data analysis graphics from 10 mathematics and science textbooks.…
An Evaluation of Grades 9 and 10 Mathematics Textbooks vis-a-vis Fostering Problem Solving Skills
ERIC Educational Resources Information Center
Buishaw, Alemayehu; Ayalew, Assaye
2013-01-01
This study sought to evaluate the adequacy of integration of problematic situations and general problem-solving strategies (heuristics) in grades 9 and 10 mathematics textbooks. Grade 9 and grade 10 mathematics textbooks were used for analysis. Document analysis and interview were used as data gathering instruments. Document analysis was carried…
Developing a Study Orientation Questionnaire in Mathematics for primary school students.
Maree, Jacobus G; Van der Walt, Martha S; Ellis, Suria M
2009-04-01
The Study Orientation Questionnaire in Mathematics (Primary) is being developed as a diagnostic measure for South African teachers and counsellors to help primary school students improve their orientation towards the study of mathematics. In this study, participants were primary school students in the North-West Province of South Africa. During the standardisation in 2007, 1,013 students (538 boys: M age = 12.61; SD = 1.53; 555 girls: M age = 11.98; SD = 1.35; 10 missing values) were assessed. Factor analysis yielded three factors. Analysis also showed satisfactory reliability coefficients and item-factor correlations. Stepwise linear regression indicated that three factors (Mathematics anxiety, Study attitude in mathematics, and Study habits in mathematics) contributed significantly (R² = .194) to predicting achievement in mathematics as measured by the Basic Mathematics Questionnaire (Primary).
ERIC Educational Resources Information Center
Council of Chief State School Officers, 2009
2009-01-01
In Fall 2008, the Council of Chief State School Officers (CCSSO) conducted an alignment content analysis of the 2007 TIMSS Mathematics and Science education assessments for students at grades 4 and 8 and the 2006 PISA Mathematics and Science Literacy assessments for students at age 15 (i.e., TIMSS--Trends in Mathematics and Science Study,…
Mechanical properties of frog skeletal muscles in iodoacetic acid rigor.
Mulvany, M J
1975-01-01
1. Methods have been developed for describing the length:tension characteristics of frog skeletal muscles which go into rigor at 4 degrees C following iodoacetic acid poisoning, either in the presence of Ca2+ (Ca-rigor) or in its absence (Ca-free-rigor). 2. Such rigor muscles showed less resistance to slow stretch (slow rigor resistance) than to fast stretch (fast rigor resistance). The slow and fast rigor resistances of Ca-free-rigor muscles were much lower than those of Ca-rigor muscles. 3. The slow rigor resistance of Ca-rigor muscles was proportional to the amount of overlap between the contractile filaments present when the muscles were put into rigor. 4. Withdrawing Ca2+ from Ca-rigor muscles (induced-Ca-free rigor) reduced their slow and fast rigor resistances. Readdition of Ca2+ (but not Mg2+, Mn2+ or Sr2+) reversed the effect. 5. The slow and fast rigor resistances of Ca-rigor muscles (but not of Ca-free-rigor muscles) decreased with time. 6. The sarcomere structure of Ca-rigor and induced-Ca-free rigor muscles stretched by 0.2 l0 was destroyed in proportion to the amount of stretch, but the lengths of the remaining intact sarcomeres were essentially unchanged. This suggests that there had been a successive yielding of the weakest sarcomeres. 7. The differences between the slow and fast rigor resistances, and the effect of calcium on these resistances, are discussed in relation to possible variations in the strength of crossbridges between the thick and thin filaments.
Computer analysis of lighting style in fine art: steps towards inter-artist studies
NASA Astrophysics Data System (ADS)
Stork, David G.
2011-03-01
Stylometry in visual art, the mathematical description of artists' styles, has been based on a number of properties of works, such as color, brush stroke shape, visual texture, and measures of contours' curvatures. We introduce the concept of quantitative measures of lighting, such as statistical descriptions of spatial coherence, diffuseness, and so forth, as properties of artistic style. Some artists of the high Renaissance, such as Leonardo, worked from nature and strove to render illumination "faithfully"; photorealists, such as Richard Estes, worked from photographs and duplicated the "physics-based" lighting accurately. As such, each had different motivations, methodologies, stagings, and "accuracies" in rendering lighting cues. Perceptual studies show that observers are poor judges of properties of lighting in photographs, such as consistency (and thus, by extension, in paintings as well); computer methods such as rigorous cast-shadow analysis, occluding-contour analysis, and spherical-harmonic-based estimation of light fields can be quite accurate. For these reasons, computer lighting analysis can provide new tools for art historical studies. We review lighting analysis in paintings such as Vermeer's Girl with a pearl earring, de la Tour's Christ in the carpenter's studio, and Caravaggio's Magdalen with the smoking flame and Calling of St. Matthew, and extend our corpus to works where lighting coherence is of interest to art historians, such as Caravaggio's Adoration of the Shepherds or Nativity (1609) in the Capuchin church of Santa Maria degli Angeli. Our measure of lighting coherence may help reveal the working methods of some artists and aid in diachronic studies of individual artists. We speculate on artists and art historical questions that may ultimately profit from future refinements to these new computational tools.
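To make the occluding-contour idea concrete: along an occluding contour the surface normal lies in the image plane, so a Lambertian model makes observed intensity linear in the light direction, which can then be recovered by least squares. The sketch below runs this on synthetic data; it is a simplified stand-in for the published methods, not the author's implementation.

```python
import numpy as np

# Synthetic (normal angle, intensity) samples along an occluding contour;
# in a real study these would be measured from the painting itself.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)          # contour normal angles
normals = np.column_stack([np.cos(theta), np.sin(theta)])

true_light = np.array([np.cos(0.4), np.sin(0.4)])
intensity = 0.3 + np.clip(normals @ true_light, 0, None)  # ambient + shading
intensity += rng.normal(0, 0.02, size=intensity.shape)    # paint/sensor noise

# Linear least squares for [ambient, Lx, Ly]; shadowed samples violate
# the linear model, so keep only the lit side of the contour.
lit = normals @ true_light > 0.1   # in practice: select visibly lit contour
A = np.column_stack([np.ones(lit.sum()), normals[lit]])
coeffs, *_ = np.linalg.lstsq(A, intensity[lit], rcond=None)
est = coeffs[1:] / np.linalg.norm(coeffs[1:])
print("estimated light angle (deg):", np.degrees(np.arctan2(est[1], est[0])))
```

Comparing such estimated light directions across figures in one canvas is one way of quantifying the "lighting coherence" the abstract discusses.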
Mathematics is always invisible, Professor Dowling
NASA Astrophysics Data System (ADS)
Cable, John
2015-09-01
This article provides a critical evaluation of a technique of analysis, the Social Activity Method, recently offered by Dowling (2013) as a 'gift' to mathematics education. The method is found to be inadequate, firstly, because it employs a dichotomy (between 'expression' and 'content') instead of a finer analysis (into symbols, concepts and setting or phenomena), and, secondly, because the distinction between 'public' and 'esoteric' mathematics, although interesting, is allowed to obscure the structure of the mathematics itself. There is also criticism of what Dowling calls the 'myth of participation', which denies the intimate links between mathematics and the rest of the universe that lie at the heart of mathematical pedagogy. Behind all this lies Dowling's 'essentially linguistic' conception of mathematics, which is criticised on the dual ground that it ignores the chastening experience of formalism in mathematical philosophy and that linguistics itself has taken a wrong turn and ignores lessons that might be learnt from mathematics education.
Teacher's Guide to Secondary Mathematics.
ERIC Educational Resources Information Center
Duval County Schools, Jacksonville, FL.
This is a teacher's guide to secondary school mathematics, developed for use in the Duval County Public Schools, Jacksonville, Florida. Areas of mathematics covered are algebra, analysis, calculus, computer literacy, computer science, geometry, analytic geometry, general mathematics, consumer mathematics, pre-algebra, probability and statistics,…
A structural equation modeling analysis of students' understanding in basic mathematics
NASA Astrophysics Data System (ADS)
Oktavia, Rini; Arif, Salmawaty; Ferdhiana, Ridha; Yuni, Syarifah Meurah; Ihsan, Mahyus
2017-11-01
This research, in general, aims to identify incoming students' understanding and misconceptions of several basic concepts in mathematics. The participants of this study are the 2015 incoming students of the Faculty of Mathematics and Natural Science of Syiah Kuala University, Indonesia. Using an instrument that was developed based on anecdotal and empirical evidence of students' misconceptions, a survey involving 325 participants was administered, and several quantitative and qualitative analyses of the survey data were conducted. In this article, we discuss the confirmatory factor analysis, using Structural Equation Modeling (SEM), of factors that determine the new students' overall understanding of basic mathematics. The results showed that students' understanding of algebra, arithmetic, and geometry were significant predictors of their overall understanding of basic mathematics. This result supports the view that arithmetic and algebra are not the only predictors of students' understanding of basic mathematics.
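A minimal sketch of how such a confirmatory factor/structural model can be specified in Python with the semopy package, which accepts lavaan-style model syntax; the data file, indicator names, and exact model structure here are hypothetical stand-ins for the survey design.

```python
import pandas as pd
from semopy import Model

# Hypothetical item scores per student (columns alg1..geo2) plus a
# measured overall basic-mathematics score.
df = pd.read_csv("basic_math_survey.csv")

desc = """
algebra    =~ alg1 + alg2 + alg3
arithmetic =~ ari1 + ari2 + ari3
geometry   =~ geo1 + geo2
overall ~ algebra + arithmetic + geometry
"""
model = Model(desc)
model.fit(df)
print(model.inspect())   # factor loadings and structural path coefficients
```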
ERIC Educational Resources Information Center
Lein, Amy E.
2016-01-01
This meta-analysis synthesized the findings from 23 published and five unpublished experimental or quasi-experimental group design studies on word problem-solving instruction for K-12 students with learning disabilities (LD) and mathematics difficulties (MD). A secondary purpose of this meta-analysis was to analyze the relation between treatment…
Measuring Developmental Students' Mathematics Anxiety
ERIC Educational Resources Information Center
Ding, Yanqing
2016-01-01
This study conducted an item-level analysis of mathematics anxiety and examined the dimensionality of mathematics anxiety in a sample of developmental mathematics students (N = 162) using the Multidimensional Random Coefficients Multinomial Logit Model (MRCMLM). The results indicate a moderately correlated factor structure of mathematics anxiety (r =…
Time-frequency analysis : mathematical analysis of the empirical mode decomposition.
DOT National Transportation Integrated Search
2009-01-01
Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a…
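The core of EMD is the sifting step: envelope the local maxima and minima with splines and subtract the mean envelope, repeating until an intrinsic mode function remains. A bare-bones sketch of one sifting iteration (omitting the stopping criteria and boundary handling a production EMD requires):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting iteration of EMD: spline-envelope the maxima and
    minima, then subtract the mean envelope from the signal."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
h = x.copy()
for _ in range(8):   # a few sifts toward the first (50 Hz) intrinsic mode
    h = sift_once(h, t)
```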
ERIC Educational Resources Information Center
Lee, Hye Jung; Kim, Jihyun
2016-01-01
The objective of this study is to examine the structural relationships among variables that predict the mathematical ability of young children, namely young children's mathematical attitude, exposure to private mathematical learning, mothers' view about their children's mathematical learning, and mothers' mathematical attitude. To this end, we…
ERIC Educational Resources Information Center
Okigbo, Ebele C.; Osuafor, Abigail M.
2008-01-01
The study investigated the effect of using a mathematics laboratory in teaching on students' achievement in Junior Secondary School Mathematics. A total of 100 JS 3 Mathematics students were involved in the study. The study is a quasi-experimental research. Results were analyzed using means, standard deviations, and analysis of covariance (ANCOVA).…
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫₀^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
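The workflow the authors describe, high-precision numerical evaluation followed by constant recognition, can be sketched in a few lines with mpmath. The moment below is a toy instance chosen for speed, and identify() is simply asked whether a closed form in π exists; this illustrates the method and does not reproduce the paper's computations.

```python
from mpmath import mp, besselk, quad, inf, identify

mp.dps = 50   # high-precision quadrature is the engine of this approach

# A Bessel moment of the kind described: c = integral_0^inf K_0(t)^2 dt
c = quad(lambda t: besselk(0, t) ** 2, [0, inf])
print(c)

# Constant recognition: search for a closed form built from pi.
print(identify(c, ["pi"]))   # prints a formula string if a match is found
```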
Tutorial on Reed-Solomon error correction coding
NASA Technical Reports Server (NTRS)
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
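In that spirit, here is a compact sketch of the systematic encoding half of a (15, 9) RS code over GF(16). The primitive polynomial x^4 + x + 1 and the generator roots α^1..α^6 are conventional choices that may differ from the tutorial's exact construction; decoding (syndromes, the error-locator polynomial) is omitted for brevity.

```python
# GF(16) log/antilog tables, primitive polynomial x^4 + x + 1 (0b10011)
EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x10:
        v ^= 0b10011
for i in range(15, 30):          # doubled table simplifies gf_mul
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):              # polynomial product over GF(16)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

# Generator polynomial g(x) with roots alpha^1 .. alpha^6 (6 parity symbols)
g = [1]
for i in range(1, 7):
    g = poly_mul(g, [1, EXP[i]])

def rs_encode(msg):
    """Systematic (15, 9) encoding: parity = (msg(x) * x^6) mod g(x)."""
    assert len(msg) == 9
    buf = list(msg) + [0] * 6
    for i in range(9):           # polynomial long division over GF(16)
        c = buf[i]
        if c:
            for j in range(1, 7):
                buf[i + j] ^= gf_mul(g[j], c)
    return list(msg) + buf[9:]   # 9 data symbols + 6 parity symbols

print(rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9]))
```

With 6 parity symbols this code corrects up to 3 symbol errors, or more erasures, which is the trade-off the tutorial's worked example walks through.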
Kowalski, Karol
2009-05-21
In this article we discuss the problem of properly balancing the noniterative corrections to the ground- and excited-state energies obtained with approximate coupled cluster (CC) and equation-of-motion CC (EOMCC) approaches. It is demonstrated that, for a class of excited states dominated by single excitations and for states with a medium doubly excited component, the newly introduced nested variant of the method of moments of CC equations provides a mathematically rigorous way of balancing the ground- and excited-state correlation effects. The resulting noniterative methodology, accounting for the effect of triples, is tested using its parallel implementation on systems for which iterative CC/EOMCC calculations with full inclusion of triply excited configurations, or their most important subset, are numerically feasible.
NASA Astrophysics Data System (ADS)
Lachieze-Rey, Marc
This book delivers a quantitative account of the science of cosmology, designed for a non-specialist audience. The basic principles are outlined using simple maths and physics, while still providing rigorous models of the Universe. It offers an ideal introduction to the key ideas in cosmology, without going into technical details. The approach used is based on the fundamental ideas of general relativity, such as the spacetime interval, comoving coordinates, and spacetime curvature. It provides an up-to-date and thoughtful discussion of the big bang and the crucial questions of structure and galaxy formation. Questions of method and philosophical approaches in cosmology are also briefly discussed. Advanced undergraduates in either physics or mathematics would benefit greatly from this book, whether as a course text or as a supplementary guide to cosmology courses.
An advanced kinetic theory for morphing continuum with inner structures
NASA Astrophysics Data System (ADS)
Chen, James
2017-12-01
Advanced kinetic theory based on the Boltzmann-Curtiss equation provides a promising tool for fluid flows containing inner structures, such as turbulence and polyatomic gas flows. Although a Hamiltonian-based distribution function was proposed for diatomic gas flow, a general distribution function for the generalized Boltzmann-Curtiss equations and polyatomic gas flow is still out of reach. With assistance from Boltzmann's entropy principle, a generalized Boltzmann-Curtiss distribution for polyatomic gas flow is introduced. The corresponding governing equations at equilibrium are derived and compared with Eringen's morphing (micropolar) continuum theory, which was derived under the framework of rational continuum thermomechanics. Although rational continuum thermomechanics has the advantages of mathematical rigor and simplicity, the statistical kinetic theory approach presented here provides a clear physical picture of what the governing equations represent.
A New Approach to Estimate the Age of the Earth and the Age of the Universe
NASA Astrophysics Data System (ADS)
Ben Salem, Kamel
2011-01-01
In a previous article, we proposed estimates of the age of the Universe and of the date of stabilization of its general structure, on the basis of a given age of the Earth equal to 4.6 billion years. In the present article, we propose a new approach to estimating, more accurately and at the same time, the age of the Earth and that of the Universe, starting from verse 4 of Sura 70 of the Qur'an. The procedure we followed, which is detailed in this article, should in our view contribute to enlightening the debate on the question. We must add that our approach can in no case be considered as based on "concordism" or conjecture. Indeed, it rests on rigorous mathematical computations.
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using a third-order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the simplicity of the most advanced charge-based compact MOSFET models, such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity of the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach, the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
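The interpolation idea itself is easy to demonstrate. The sketch below builds a third-order Hermite polynomial for the inversion charge as a function of surface potential from endpoint values and slopes; the numbers are illustrative only and are not taken from the paper or from a real device.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Approximate q(psi) between the source and drain ends of the channel by
# a cubic Hermite polynomial matching the charge and its derivative at
# both ends -- the same interpolation order the abstract describes.
psi = np.array([0.30, 0.80])          # surface potential at the two ends (V)
q = np.array([1.0e-3, 6.0e-3])        # normalized inversion charge (a.u.)
dq_dpsi = np.array([2.0e-3, 1.4e-2])  # charge derivative at each end

q_of_psi = CubicHermiteSpline(psi, q, dq_dpsi)
print(q_of_psi(np.linspace(0.3, 0.8, 5)))  # charge profile along the channel
```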
From Faddeev-Kulish to LSZ. Towards a non-perturbative description of colliding electrons
NASA Astrophysics Data System (ADS)
Dybalski, Wojciech
2017-12-01
In a low energy approximation of the massless Yukawa theory (Nelson model) we derive a Faddeev-Kulish type formula for the scattering matrix of N electrons and reformulate it in LSZ terms. To this end, we perform a decomposition of the infrared finite Dollard modifier into clouds of real and virtual photons, whose infrared divergencies mutually cancel. We point out that in the original work of Faddeev and Kulish the clouds of real photons are omitted, and consequently their wave-operators are ill-defined on the Fock space of free electrons. To support our observations, we compare our final LSZ expression for N = 1 with a rigorous non-perturbative construction due to Pizzo. While our discussion contains some heuristic steps, they can be formulated as clear-cut mathematical conjectures.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
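The mathematical equivalences discussed above ultimately rest on different ways of evaluating a matrix exponential. A small sketch contrasting the eigendecomposition route used in discrete-ordinate solvers with a Padé-type evaluation (scipy's expm), on a random stand-in for a layer matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) * 0.1    # stand-in for a layer transfer matrix

# Route 1: eigendecomposition, exp(A) = V diag(exp(lambda)) V^{-1}
lam, V = np.linalg.eig(A)
expA_eig = (V * np.exp(lam)) @ np.linalg.inv(V)

# Route 2: scipy's expm (a scaling-and-squaring Pade method)
expA_pade = expm(A)

# The two routes agree to near machine precision for well-conditioned V
print(np.max(np.abs(expA_eig.real - expA_pade)))
```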
The 1/ N Expansion of Tensor Models Beyond Perturbation Theory
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2014-09-01
We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion), we show that the cumulants can be written as explicit series in 1/N plus bounded rest terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants obey a uniform scaling bound. In particular, the quartically perturbed measures fall, in the N → ∞ limit, into the universality class of Gaussian tensor models.
Analytic theory of orbit contraction and ballistic entry into planetary atmospheres
NASA Technical Reports Server (NTRS)
Longuski, J. M.; Vinh, N. X.
1980-01-01
A space object traveling through an atmosphere is governed by two forces: aerodynamic and gravitational. On this premise, equations of motion are derived to provide a set of universal entry equations applicable to all regimes of atmospheric flight, from orbital motion under the dissipative force of drag, through the dynamic phase of reentry, and finally to the point of contact with the planetary surface. Rigorous mathematical techniques such as averaging, Poincaré's method of small parameters, and Lagrange's expansion are applied to obtain a highly accurate, purely analytic theory for orbit contraction and ballistic entry into planetary atmospheres. The theory has a wide range of applications to modern problems, including orbit decay of artificial satellites, atmospheric capture of planetary probes, atmospheric grazing, and ballistic reentry of manned and unmanned space vehicles.