The role of a posteriori mathematics in physics
NASA Astrophysics Data System (ADS)
MacKinnon, Edward
2018-05-01
The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists have traditionally ignored this rigorous mathematics, often relying instead on a posteriori math: the practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of the practice stems from a consideration of the role of phenomenological theories in classical physics and of effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.
On making cuts for magnetic scalar potentials in multiply connected regions
NASA Astrophysics Data System (ADS)
Kotiuga, P. R.
1987-04-01
The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1891), Chap. 1, Article 20], and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. The problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The techniques used in the proof expose the intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's heuristic interpretation of cuts and duality theorems via intersection matrices (Ph.D. thesis, McGill University, Montreal, 1984).
Oakland and San Francisco Create Course Pathways through Common Core Mathematics. White Paper
ERIC Educational Resources Information Center
Daro, Phil
2014-01-01
The Common Core State Standards for Mathematics (CCSS-M) set rigorous standards for each of grades 6, 7 and 8. Strategic Education Research Partnership (SERP) has been working with two school districts, Oakland Unified School District and San Francisco Unified School District, to evaluate extant policies and practices and formulate new policies…
Continuum mechanics and thermodynamics in the Hamilton and the Godunov-type formulations
NASA Astrophysics Data System (ADS)
Peshkov, Ilya; Pavelka, Michal; Romenski, Evgeniy; Grmela, Miroslav
2018-01-01
Continuum mechanics with dislocations, with Cattaneo-type heat conduction, with mass transfer, and with electromagnetic fields is put into the Hamiltonian form and into the form of a Godunov-type system of first-order symmetric hyperbolic thermodynamically compatible (SHTC) partial differential equations. The compatibility with thermodynamics of the time-reversible part of the governing equations is expressed mathematically in the former formulation as degeneracy of the Hamiltonian structure and in the latter formulation as the existence of a companion conservation law. In both formulations the time-irreversible part represents gradient dynamics. The Godunov-type formulation brings mathematical rigor (local well-posedness of the Cauchy initial value problem) and the possibility to discretize while keeping the physical content of the governing equations (the Godunov finite volume discretization).
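A minimal numerical sketch of the abstract's last point, that a finite-volume discretization preserves the conservation content of the governing equations. The example is an assumption of this note, not the SHTC system itself: a first-order Godunov (upwind) scheme for the linear advection equation u_t + a u_x = 0 with periodic boundaries, whose total "mass" is conserved to rounding error.

```python
def godunov_advection(u, a, dx, dt, steps):
    """Advance periodic cell averages with the first-order upwind (Godunov) flux, a > 0."""
    n = len(u)
    for _ in range(steps):
        # numerical flux through the left face of cell i: F_{i-1/2} = a * u_{i-1}
        flux = [a * u[(i - 1) % n] for i in range(n)]
        u = [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
    return u

n, a = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a                       # CFL number 0.5
u0 = [1.0 if 0.3 <= i * dx < 0.5 else 0.0 for i in range(n)]
u1 = godunov_advection(u0, a, dx, dt, steps=200)
mass0, mass1 = sum(u0) * dx, sum(u1) * dx   # total "mass" before and after
```

Because the update is a telescoping sum of face fluxes, conservation holds exactly in exact arithmetic; the scheme is also monotone at this CFL number, so the solution stays within its initial bounds.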
A finite element-boundary integral method for cavities in a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. However, due to a lack of rigorous mathematical models for conformal antenna arrays, antenna designers resort to measurement and planar antenna concepts for designing non-planar conformal antennas. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We extend this formulation to conformal arrays on large metallic cylinders. In this report, we develop the mathematical formulation. In particular, we discuss the shape functions, the resulting finite elements and the boundary integral equations, and the solution of the conformal finite element-boundary integral system. Some validation results are presented and we further show how this formulation can be applied with minimal computational and memory resources.
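The report's cylindrical finite element-boundary integral formulation is far beyond a snippet, but the finite element half of the method can be illustrated in one dimension. The following sketch is an assumption of this note (a 1-D toy, not the report's electromagnetic system): linear "hat" shape functions assembled for -u'' = 2 on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 - x). In 1-D the Galerkin solution with linear elements is exact at the nodes, which makes the assembly easy to check.

```python
def fem_poisson_1d(n_elems, f=2.0):
    """Linear-element FEM for -u'' = f on (0, 1), u(0) = u(1) = 0."""
    h = 1.0 / n_elems
    n = n_elems - 1                       # number of interior nodes
    # Stiffness matrix is (1/h) * tridiag(-1, 2, -1); load vector is f*h.
    sub = [-1.0 / h] * n                  # sub/super diagonal (symmetric)
    diag = [2.0 / h] * n                  # main diagonal
    rhs = [f * h] * n
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sub[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - sub[i] * u[i + 1]) / diag[i]
    return u                              # interior nodal values

nodal = fem_poisson_1d(10)
exact = [x * (1 - x) for x in (i / 10 for i in range(1, 10))]
```

The same assemble-then-solve pattern, with vastly more elaborate shape functions and a boundary integral closing the mesh, underlies the FE-BI method described in the report.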
A Curricular-Sampling Approach to Progress Monitoring: Mathematics Concepts and Applications
ERIC Educational Resources Information Center
Fuchs, Lynn S.; Fuchs, Douglas; Zumeta, Rebecca O.
2008-01-01
Progress monitoring is an important component of effective instructional practice. Curriculum-based measurement (CBM) is a form of progress monitoring that has been the focus of rigorous research. Two approaches for formulating CBM systems exist. The first is to assess performance regularly on a task that serves as a global indicator of competence…
A finite element-boundary integral method for conformal antenna arrays on a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.; Woo, Alex C.; Yu, C. Long
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design, due to the lack of rigorous mathematical models for conformal antenna arrays; as a result, the design of conformal arrays has been based primarily on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. Here we extend this formulation to conformal arrays on large metallic cylinders and develop the mathematical formulation. In particular, we discuss the finite element equations, the shape elements, and the boundary integral evaluation, and we show how this formulation can be applied with minimal computation and memory requirements. The implementation will be discussed in a later report.
A finite element-boundary integral method for conformal antenna arrays on a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This was due to the lack of rigorous mathematical models for conformal antenna arrays. As a result, the design of conformal arrays was primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We are extending this formulation to conformal arrays on large metallic cylinders. In doing so, we will develop a mathematical formulation. In particular, we discuss the finite element equations, the shape elements, and the boundary integral evaluation. It is shown how this formulation can be applied with minimal computation and memory requirements.
Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management
2016-11-16
... in order for cloud computing infrastructures to be successfully deployed in real-world scenarios as tools for crisis and catastrophe management, where ... Statement of the Problem Studied: As cloud computing becomes the dominant computational infrastructure [1] and cloud technologies make a transition to hosting ... 1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and ...
Observations of fallibility in applications of modern programming methodologies
NASA Technical Reports Server (NTRS)
Gerhart, S. L.; Yelowitz, L.
1976-01-01
Errors, inconsistencies, or confusing points are noted in a variety of published algorithms, many of which are being used as examples in formulating or teaching principles of such modern programming methodologies as formal specification, systematic construction, and correctness proving. Common properties of these points of contention are abstracted. These properties are then used to pinpoint possible causes of the errors and to formulate general guidelines which might help to avoid further errors. The common characteristic of mathematical rigor and reasoning in these examples is noted, leading to some discussion about fallibility in mathematics, and its relationship to fallibility in these programming methodologies. The overriding goal is to cast a more realistic perspective on the methodologies, particularly with respect to older methodologies, such as testing, and to provide constructive recommendations for their improvement.
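A classic illustration of the paper's theme, with the caveat that this is a generic textbook example and not one of the algorithms the authors analyzed: a binary search with a subtle boundary error next to a corrected version. The buggy variant stalls when the target lies in the upper half of a two-element range, exactly the kind of defect that survives informal correctness arguments.

```python
def binary_search_buggy(xs, target, max_iters=100):
    """Plausible-looking search with an off-by-one: lo = mid can fail to make progress."""
    lo, hi = 0, len(xs) - 1
    while lo < hi and max_iters > 0:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid          # BUG: should be mid + 1; lo can stall forever
        else:
            hi = mid
        max_iters -= 1        # guard so the demo terminates despite the bug
    return lo if xs and xs[lo] == target else -1

def binary_search_fixed(xs, target):
    """Correct half-open-interval version: the interval shrinks every step."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1
```

On the input `[1, 3]` with target `3`, the buggy version loops with `lo` pinned at 0 (here bailed out by the iteration guard) while the fixed version returns index 1.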
Which Kind of Mathematics for Quantum Mechanics? the Relevance of H. Weyl's Program of Research
NASA Astrophysics Data System (ADS)
Drago, Antonino
In 1918 Weyl's book Das Kontinuum proposed to refound mathematics on more conservative bases than both rigorous mathematics and set theory. It gave birth to so-called Weyl's elementary mathematics, an intermediate mathematics between that which rejects actual infinity altogether and the classical mathematics which admits it almost freely. The present paper scrutinises Weyl's subsequent book Gruppentheorie und Quantenmechanik (1928) as a program for founding theoretical physics anew, through quantum theory, while at the same time developing his mathematics through an improvement of group theory, which, according to Weyl, is a mathematical theory effacing the old distinction between discrete and continuous mathematics. Evidence from Weyl's writings is collected in support of this interpretation. Weyl's program is then judged unsuccessful, owing to crucial difficulties of both a physical and a mathematical nature. Our present clear-cut knowledge of Weyl's elementary mathematics allows us to re-evaluate his program in the search for more adequate formulations of quantum mechanics in kinds of mathematics weaker than the classical one.
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram
1989-01-01
The extension of the known flux-vector and flux-difference splittings to real gases via rigorous mathematical procedures is demonstrated. Formulations of both equilibrium and finite-rate chemistry for real-gas flows are described, with emphasis on derivations of finite-rate chemistry. Split-flux formulas from other authors are examined. A second-order upwind-based TVD scheme is adopted to eliminate oscillations and to obtain a sharp representation of discontinuities.
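One of the splittings the report starts from can be checked in a few lines. The sketch below is hedged: it implements van Leer's flux-vector splitting for a perfect gas in 1-D (the report's contribution, extending such splittings to real gases and finite-rate chemistry, is not reproduced here). In the subsonic regime the split fluxes must recombine to the exact Euler flux, F = F+ + F-, which the check verifies.

```python
GAMMA = 1.4  # perfect-gas ratio of specific heats (assumption of this sketch)

def euler_flux(rho, u, p):
    """Exact 1-D Euler flux (mass, momentum, energy)."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return (rho * u, rho * u * u + p, u * (E + p))

def van_leer_split(rho, u, p):
    """Van Leer split fluxes (F+, F-) for subsonic flow, |M| <= 1."""
    a = (GAMMA * p / rho) ** 0.5
    M = u / a
    assert abs(M) <= 1.0, "subsonic branch only in this sketch"
    fmass_p = 0.25 * rho * a * (M + 1.0) ** 2     # split mass fluxes
    fmass_m = -0.25 * rho * a * (M - 1.0) ** 2
    def rest(fmass, s):
        vel = ((GAMMA - 1.0) * u + s * 2.0 * a) / GAMMA
        en = vel * vel * GAMMA * GAMMA / (2.0 * (GAMMA * GAMMA - 1.0))
        return (fmass, fmass * vel, fmass * en)
    return rest(fmass_p, +1.0), rest(fmass_m, -1.0)

fp, fm = van_leer_split(1.0, 0.3, 1.0)
F = euler_flux(1.0, 0.3, 1.0)
```

The consistency identity F+ + F- = F holds componentwise to rounding error, and the split mass fluxes carry the expected signs (F+ forward, F- backward).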
ERIC Educational Resources Information Center
Petrilli, Salvatore John, Jr.
2009-01-01
Historians of mathematics consider the nineteenth century to be the Golden Age of mathematics. During this period many areas of mathematics, such as algebra and geometry, were being placed on rigorous foundations. Another area of mathematics which experienced fundamental change was analysis. The drive for rigor in calculus began in 1797…
Collisional damping rates for plasma waves
NASA Astrophysics Data System (ADS)
Tigik, S. F.; Ziebell, L. F.; Yoon, P. H.
2016-06-01
The distinction between plasma dynamics dominated by collisional transport and that dominated by collective processes was not rigorously addressed until recently. A recent paper [P. H. Yoon et al., Phys. Rev. E 93, 033203 (2016)] formulates, for the first time, a unified kinetic theory in which collective processes and collisional dynamics are systematically incorporated from first principles. One outcome of this formalism is the rigorous derivation of collisional damping rates for Langmuir and ion-acoustic waves, which can be contrasted with the customary heuristic approach. However, the results are given only as formal mathematical expressions. The present brief communication numerically evaluates the rigorous collisional damping rates for plasma particles with a Maxwellian velocity distribution function, so as to assess the consequences of the rigorous formalism in a quantitative manner. Comparison with the heuristic ("Spitzer") formula shows that the accurate damping rates are much lower in magnitude than the conventional expression, which implies that the traditional approach overestimates the attenuation of plasma waves by the collisional relaxation process. Such a finding may have wide applicability, ranging from laboratory to space and astrophysical plasmas.
A new mathematical formulation of the line-by-line method in case of weak line overlapping
NASA Technical Reports Server (NTRS)
Ishov, Alexander G.; Krymova, Natalie V.
1994-01-01
A rigorous mathematical proof is presented for a multiline representation of the equivalent width of a molecular band which consists, in the general case, of n overlapping spectral lines. The multiline representation includes a principal term and terms of minor significance. The principal term is the equivalent width of the molecular band consisting of the same n nonoverlapping spectral lines. The terms of minor significance account for the overlapping of two, three, and more spectral lines; they are small when the overlapping of spectral lines in the molecular band is weak. The multiline representation is easily generalized to optically inhomogeneous gas media and holds true for combinations of molecular bands. When the band lines overlap weakly, the standard formulation of the line-by-line method becomes too labor-intensive; in this case the multiline representation permits line-by-line calculations to be performed more efficiently. Other useful properties of the multiline representation are pointed out.
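A numerical sketch of the principal-term property described in the abstract. The assumptions here are mine, not the paper's: Lorentzian line profiles, arbitrary parameter values, and a simple rectangle-rule quadrature. The band equivalent width W = ∫ (1 - exp(-Σ_j τ_j(ν))) dν never exceeds the sum of the single-line equivalent widths (the principal term), and the two agree closely when the lines barely overlap.

```python
import math

def lorentz_tau(nu, center, strength, hw):
    """Lorentzian optical depth profile (an illustrative choice)."""
    return strength * hw ** 2 / ((nu - center) ** 2 + hw ** 2)

def equivalent_width(centers, strength=0.5, hw=0.05, lo=-5.0, hi=5.0, n=20001):
    """W = integral of (1 - exp(-total optical depth)) over the band."""
    d = (hi - lo) / (n - 1)
    w = 0.0
    for k in range(n):
        nu = lo + k * d
        tau = sum(lorentz_tau(nu, c, strength, hw) for c in centers)
        w += (1.0 - math.exp(-tau)) * d
    return w

separated = equivalent_width([-2.0, 2.0])   # weak overlap
coincident = equivalent_width([0.0, 0.0])   # total overlap
principal = 2.0 * equivalent_width([0.0])   # sum of nonoverlapping widths
```

The inequality W <= principal follows pointwise from 1 - exp(-τ1 - τ2) <= (1 - exp(-τ1)) + (1 - exp(-τ2)), and for these well-separated lines the correction terms are a negligible fraction of the principal term.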
ERIC Educational Resources Information Center
Utah State Office of Education, 2011
2011-01-01
Utah has adopted more rigorous mathematics standards known as the Utah Mathematics Core Standards. They are the foundation of the mathematics curriculum for the State of Utah. The standards include the skills and understanding students need to succeed in college and careers. They include rigorous content and application of knowledge and reflect…
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
Intelligent control of a planning system for astronaut training.
Ortiz, J; Chen, G
1999-07-01
This work intends to design, analyze and solve, from the systems control perspective, a complex, dynamic, and multiconstrained planning system for generating training plans for crew members of the NASA-led International Space Station. Various intelligent planning systems have been developed within the framework of artificial intelligence. These planning systems generally lack a rigorous mathematical formalism to allow a reliable and flexible methodology for their design, modeling, and performance analysis in a dynamical, time-critical, and multiconstrained environment. Formulating the planning problem in the domain of discrete-event systems under a unified framework such that it can be modeled, designed, and analyzed as a control system will provide a self-contained theory for such planning systems. This will also provide a means to certify various planning systems for operations in the dynamical and complex environments in space. The work presented here completes the design, development, and analysis of an intricate, large-scale, and representative mathematical formulation for intelligent control of a real planning system for Space Station crew training. This planning system has been tested and used at NASA-Johnson Space Center.
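A minimal sketch of casting a training-planning problem as a discrete-event system. Everything concrete here is an assumption of this note: the task names and precedence constraints are invented, and the actual Space Station training domain is far richer. Events are task completions; the controller may only schedule a task once its prerequisites have fired, which reduces the simplest version of plan generation to a topological ordering.

```python
from collections import deque

def make_plan(tasks, prereqs):
    """tasks: iterable of names; prereqs: {task: set of prerequisite tasks}."""
    indeg = {t: 0 for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, pre in prereqs.items():
        for p in pre:
            indeg[t] += 1
            dependents[p].append(t)
    ready = deque(sorted(t for t in tasks if indeg[t] == 0))
    plan = []
    while ready:
        t = ready.popleft()              # "fire" an enabled completion event
        plan.append(t)
        for d in sorted(dependents[t]):
            indeg[d] -= 1
            if indeg[d] == 0:            # all prerequisites satisfied
                ready.append(d)
    if len(plan) != len(indeg):
        raise ValueError("cyclic constraints: no feasible plan")
    return plan

# hypothetical curriculum fragment
tasks = ["safety", "systems_overview", "robotics_sim", "eva_basics", "eva_cert"]
prereqs = {"robotics_sim": {"systems_overview"},
           "eva_basics": {"safety"},
           "eva_cert": {"eva_basics", "robotics_sim"}}
plan = make_plan(tasks, prereqs)
```

Modeling the plan as event firings subject to enabling conditions is what makes control-theoretic analysis (reachability, deadlock detection) applicable, which is the abstract's central point.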
Lenas, Petros; Moos, Malcolm; Luyten, Frank P
2009-12-01
The field of tissue engineering is moving toward a new concept of "in vitro biomimetics of in vivo tissue development." In Part I of this series, we proposed a theoretical framework integrating the concepts of developmental biology with those of process design to provide the rules for the design of biomimetic processes. We named this methodology "developmental engineering" to emphasize that it is not the tissue but the process of in vitro tissue development that has to be engineered. To formulate the process design rules in a rigorous way that will allow a computational design, we must turn to mathematical methods to model the biological processes taking place in vitro. Tissue functions cannot be attributed to individual molecules but rather to complex interactions between the numerous components of a cell, and to interactions between cells in a tissue, that form a network. For tissue engineering to advance to the level of a technologically driven discipline amenable to well-established principles of process engineering, a scientifically rigorous formulation of the general design rules is needed, so that the behavior of the networks of genes, proteins, or cells that govern the unfolding of developmental processes can be related to the design parameters. Now that sufficient experimental data exist to construct plausible mathematical models of many biological control circuits, explicit hypotheses can be evaluated using computational approaches to facilitate process design. Recent progress in systems biology has shown that the empirical concepts of developmental biology that we used in Part I to extract the rules of biomimetic process design can be expressed in rigorous mathematical terms. This allows the accurate characterization of manufacturing processes in tissue engineering as well as of the properties of the artificial tissues themselves.
In addition, network science has recently shown that the behavior of biological networks strongly depends on their topology and has developed the necessary concepts and methods to describe it, allowing therefore a deeper understanding of the behavior of networks during biomimetic processes. These advances thus open the door to a transition for tissue engineering from a substantially empirical endeavor to a technology-based discipline comparable to other branches of engineering.
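A toy instance of the kind of biological control circuit the authors invoke. The model and every parameter value are generic textbook choices, not taken from the paper: a genetic toggle switch of two mutually repressing genes with Hill-type repression, integrated by forward Euler. The circuit settles into one of two stable expression states, the bistability that underlies many cell-fate decisions.

```python
def toggle_switch(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=20000):
    """Integrate du/dt = alpha/(1+v^n) - u, dv/dt = alpha/(1+u^n) - v."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u    # gene 1: repressed by gene 2
        dv = alpha / (1.0 + u ** n) - v    # gene 2: repressed by gene 1
        u, v = u + dt * du, v + dt * dv
    return u, v

hi_lo = toggle_switch(5.0, 0.1)   # start with gene 1 dominant
lo_hi = toggle_switch(0.1, 5.0)   # start with gene 2 dominant
```

Which steady state is reached depends only on the initial condition, a simple computational expression of the claim that network behavior, not any single molecule, carries the tissue-level function.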
Gibiansky, Leonid; Gibiansky, Ekaterina
2018-02-01
The emerging discipline of mathematical pharmacology occupies the space between advanced pharmacometrics and systems biology. A characteristic feature of the approach is the application of advanced mathematical methods to study the behavior of biological systems as described by mathematical (most often differential) equations. One of the early applications of mathematical pharmacology (though it was not called by that name at the time) was the formulation and investigation of the target-mediated drug disposition (TMDD) model and its approximations. The model was shown to be remarkably successful, not only in describing the observed data for drug-target interactions, but also in advancing the qualitative and quantitative understanding of those interactions and their role in the pharmacokinetic and pharmacodynamic properties of biologics. The TMDD model in its original formulation describes the interaction of a drug that has one binding site with a target that also has only one binding site. Following the framework developed earlier for drugs with one-to-one binding, this work aims to describe a rigorous approach for working with similar systems and to apply it to drugs that bind to targets with two binding sites. The quasi-steady-state, quasi-equilibrium, irreversible binding, and Michaelis-Menten approximations of the model are also derived. These equations can be used, in particular, to predict concentrations of the partially bound target (RC). This could be clinically important if RC remains active and has a slow internalization rate. In this case, introduction of a drug aimed to suppress target activity may lead to the opposite effect due to RC accumulation.
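A minimal sketch of the original one-to-one TMDD system that the paper builds on. The parameter values and the single bolus dose are invented for illustration; the paper's subject, targets with two binding sites, adds further species and is not reproduced here. Forward Euler integrates free drug C, free target R, and complex RC.

```python
def tmdd(c0, r0, kel=0.1, ksyn=1.0, kdeg=0.5, kon=2.0, koff=0.1, kint=0.2,
         dt=1e-3, steps=50000):
    """One-to-one TMDD: elimination, target turnover, binding, internalization."""
    c, r, rc = c0, r0, 0.0
    for _ in range(steps):
        bind = kon * c * r - koff * rc     # net binding flux
        dc = -kel * c - bind               # free drug: elimination + binding
        dr = ksyn - kdeg * r - bind        # free target: turnover + binding
        drc = bind - kint * rc             # complex: binding - internalization
        c, r, rc = c + dt * dc, r + dt * dr, rc + dt * drc
    return c, r, rc

# bolus dose against the pre-dose target steady state r = ksyn / kdeg = 2
c_end, r_end, rc_end = tmdd(c0=10.0, r0=2.0)
```

After the drug washes out, the free target relaxes back toward its synthesis/degradation baseline, the qualitative behavior the TMDD literature uses to identify target-mediated kinetics.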
Quantum probability and quantum decision-making.
Yukalov, V I; Sornette, D
2016-01-13
A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and also for non-commuting observables in addition to commutative observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).
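A bare-bones numerical companion to the abstract, with the caveat that the two-level example and states below are generic textbook choices, not the paper's formalism for composite or inconclusive events. For orthogonal projectors P_k, quantum probabilities are p_k = Tr(rho P_k); they must be nonnegative and sum to one.

```python
def mat_mult(A, B):
    """2x2 real matrix product (complex entries are not needed here)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# density matrix of the pure state |+> = (|0> + |1>) / sqrt(2)
rho = [[0.5, 0.5], [0.5, 0.5]]
# projectors onto the computational-basis outcomes
P0 = [[1.0, 0.0], [0.0, 0.0]]
P1 = [[0.0, 0.0], [0.0, 1.0]]
p0 = trace(mat_mult(rho, P0))
p1 = trace(mat_mult(rho, P1))
# projector for a non-commuting observable's outcome |+><+|
P_plus = [[0.5, 0.5], [0.5, 0.5]]
p_plus = trace(mat_mult(rho, P_plus))
```

Measuring in the computational basis gives equal odds, while the non-commuting |+> outcome is certain, the same state yielding different statistics under different measurements.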
ERIC Educational Resources Information Center
Easey, Michael
2013-01-01
This paper explores the decline in boys' participation in post-compulsory rigorous mathematics using the perspectives of eight experienced teachers at an independent, boys' College located in Brisbane, Queensland. This study coincides with concerns regarding the decline in suitably qualified tertiary graduates with requisite mathematical skills…
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar (complex systems, control theory, and optimization theory) and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.
Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS
NASA Astrophysics Data System (ADS)
Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.
2017-04-01
To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the third-generation synchrotron light source of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB depends greatly on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. Rigorous mathematical modelling is very useful for shortening the design time and improving the design performance of a FCPS. A rigorous mathematical model of a FCPS in the FOFB of TPS, composed of a full-bridge topology and derived by the state-space averaging method, is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated in simulation. A FCPS prototype is realized to demonstrate the effectiveness of the proposed rigorous mathematical model. Simulation and experimental results show that the proposed model is helpful for selecting appropriate components to meet the accuracy requirements of a FCPS.
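A hedged sketch of what state-space averaging yields for a full-bridge output stage. The first-order R-L load model and all component values are illustrative assumptions, not the paper's model of the TPS corrector supply. With duty-ratio command d in [-1, 1], the averaged inductor equation is L di/dt = d*Vdc - R*i, so the steady-state current is i_ss = d*Vdc / R.

```python
def averaged_full_bridge(d, vdc=50.0, R=2.0, L=5e-3, dt=1e-6, steps=200000):
    """Integrate the averaged model L di/dt = d*Vdc - R*i from rest."""
    i = 0.0
    for _ in range(steps):
        di = (d * vdc - R * i) / L
        i += dt * di
    return i

i_final = averaged_full_bridge(d=0.4)     # expect about 0.4 * 50 / 2 = 10 A
```

Averaging replaces the switched waveform with its cycle mean, which is exactly what makes controller design and accuracy analysis tractable before committing to hardware.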
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
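A stripped-down analogue of the papers' idea, under loud assumptions: the "fit a mean, then take the tightest symmetric band containing every observation" recipe below is a simplification of mine, whereas the papers solve convex programs prescribing the mean, variance, and range of random parameters. A straight-line mean is fit by least squares and the band half-width is the largest residual, so every data point lies inside the prediction interval by construction, and the band is tight in the sense that some point touches it.

```python
def fit_line(xs, ys):
    """Least-squares straight line (the degree-1 'polynomial dependency')."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def tightest_band(xs, ys):
    """Mean line plus the smallest half-width covering all observations."""
    a, b = fit_line(xs, ys)
    half = max(abs(y - (a + b * x)) for x, y in zip(xs, ys))
    return a, b, half

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.3, 1.9, 3.2, 3.9]
a, b, half = tightest_band(xs, ys)
```

The prediction at input x is the interval [a + b*x - half, a + b*x + half]; the papers' machinery replaces this crude envelope with a rigorously bounded reliability.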
Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
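A toy illustration of structural non-identifiability, with the caveat that the model y = a*b*x is invented for this sketch and not taken from the paper. Two parameter sets with the same product a*b produce identical outputs at every input, so no experiment can distinguish them: only the product is structurally identifiable.

```python
def response(a, b, xs):
    """Model output y = a * b * x: the parameters enter only via their product."""
    return [a * b * x for x in xs]

xs = [0.5 * k for k in range(10)]
y1 = response(2.0, 3.0, xs)   # a * b = 6
y2 = response(1.5, 4.0, xs)   # a * b = 6, different parameters, same outputs
y3 = response(2.0, 4.0, xs)   # a * b = 8, distinguishable
```

Exactly this kind of check, asking whether distinct parameter vectors can ever be separated by the observable response, is what a rigorous treatment of structural identifiability formalizes.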
Higher order temporal finite element methods through mixed formalisms.
Kim, Jinkyu
2014-01-01
The extended framework of Hamilton's principle and the mixed convolved action principle provide a new rigorous weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher-order approximations is investigated. Classical single-degree-of-freedom dynamical systems are primarily considered to validate the numerical algorithms developed from both formulations and to investigate their performance. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.
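A numerical echo of the abstract's symplecticity claim, under stated assumptions: the undamped harmonic oscillator and the two first-order integrators below are standard textbook choices, not the paper's higher-order temporal elements. Symplectic Euler keeps the oscillator's energy bounded over long runs, while ordinary explicit Euler inflates it without bound.

```python
def energies(steps=5000, h=0.05):
    """Final energy of q'' = -q under explicit Euler vs. symplectic Euler."""
    # explicit Euler: q and p updated from the old state
    q, p = 1.0, 0.0
    for _ in range(steps):
        q, p = q + h * p, p - h * q
    e_explicit = 0.5 * (p * p + q * q)
    # symplectic Euler: update p first, then q using the NEW p
    q, p = 1.0, 0.0
    for _ in range(steps):
        p = p - h * q
        q = q + h * p
    e_symplectic = 0.5 * (p * p + q * q)
    return e_explicit, e_symplectic

e_exp, e_symp = energies()
```

Explicit Euler multiplies the energy by (1 + h^2) every step, so it grows geometrically; symplectic Euler exactly conserves a nearby "shadow" energy, which confines the true energy to a narrow band around its initial value.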
Mathematical Rigor vs. Conceptual Change: Some Early Results
NASA Astrophysics Data System (ADS)
Alexander, W. R.
2003-05-01
Results from two different pedagogical approaches to teaching introductory astronomy at the college level will be presented. The first is a descriptive, conceptually based approach that emphasizes conceptual change; this descriptive class is typically an elective for non-science majors. The other is a mathematically rigorous treatment that emphasizes problem solving and is designed to prepare students for further study in astronomy. The mathematically rigorous class is typically taken by science majors, for whom it also fulfills an elective science requirement. The Astronomy Diagnostic Test version 2 (ADT 2.0) was used as the assessment instrument, since its validity and reliability have been investigated by previous researchers. The ADT 2.0 was administered as both a pre-test and a post-test to both groups. Initial results show no significant difference between the two groups on the post-test; however, there is a slightly greater improvement between the pre- and post-testing for the descriptive class than for the mathematically rigorous course. Great care was taken to control for variables, including the selection of text, class format, and instructor differences. Results indicate that the mathematically rigorous model does not improve conceptual understanding any better than the conceptual-change model. Additional results indicate a gender bias in favor of males similar to that measured by previous investigators. This research has been funded by the College of Science and Mathematics at James Madison University.
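A small sketch of how such pre/post comparisons are often quantified in astronomy and physics education research. The Hake normalized gain g = (post - pre) / (max - pre) is a standard convention, but the scores below are hypothetical; the study's actual ADT 2.0 data are not reported in the abstract.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake gain: fraction of the available improvement actually achieved."""
    if pre >= max_score:
        raise ValueError("pre-test already at ceiling")
    return (post - pre) / (max_score - pre)

def class_gain(pre_scores, post_scores):
    """Gain computed from class-average pre and post scores."""
    pre = sum(pre_scores) / len(pre_scores)
    post = sum(post_scores) / len(post_scores)
    return normalized_gain(pre, post)

# hypothetical section averages, for illustration only
g_descriptive = class_gain([30, 35, 40], [50, 55, 65])
g_rigorous = class_gain([45, 50, 55], [55, 60, 70])
```

Normalizing by the room left to improve matters here because the two sections start from different pre-test baselines, which raw post-test differences would conflate.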
Matter Gravitates, but Does Gravity Matter?
ERIC Educational Resources Information Center
Groetsch, C. W.
2011-01-01
The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…
Mathematics interventions for children and adolescents with Down syndrome: a research synthesis.
Lemons, C J; Powell, S R; King, S A; Davidson, K A
2015-08-01
Many children and adolescents with Down syndrome fail to achieve proficiency in mathematics. Researchers have suggested that tailoring interventions based on the behavioural phenotype may enhance efficacy. The research questions that guided this review were (1) what types of mathematics interventions have been empirically evaluated with children and adolescents with Down syndrome?; (2) do the studies demonstrate sufficient methodological rigor?; (3) is there evidence of efficacy for the evaluated mathematics interventions?; and (4) to what extent have researchers considered aspects of the behavioural phenotype in selecting, designing and/or implementing mathematics interventions for children and adolescents with Down syndrome? Nine studies published between 1989 and 2012 were identified for inclusion. Interventions predominantly focused on early mathematics skills and reported positive outcomes. However, no study met criteria for methodological rigor. Further, no authors explicitly considered the behavioural phenotype. Additional research using rigorous experimental designs is needed to evaluate the efficacy of mathematics interventions for children and adolescents with Down syndrome. Suggestions for considering the behavioural phenotype in future research are provided. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
A global solution to the Schrödinger equation: From Henstock to Feynman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu
2015-09-15
One of the key elements of Feynman’s formulation of non-relativistic quantum mechanics is the so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition rather than a well-defined object. All previous attempts to supply Feynman’s theory with a rigorous mathematical underpinning based on the physical requirements have not been satisfactory. The difficulty comes from the need to define a measure on the infinite-dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a new approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variation: With Applications in Stochastic Calculus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with the non-absolute integrability of the Fresnel integrals in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman’s intuitive derivations. In his work, however, Muldowney gives only local-in-space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L²(ℝⁿ). The purpose of this paper is to show that a system of one-dimensional local Muldowney solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L²(ℝⁿ).
An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-09-01
Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of ℓ1-norm and provided the mathematical basis to improve image quality and robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve which requires minimum user-dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing with known electrode errors, which requires a rigorous regularization and causes the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but it also provides high contrast resolution on organ boundaries.
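The robustness argument for ℓ1 over ℓ2 data misfit can be seen in a toy one-parameter analogue (a hypothetical illustration, not the paper's PDIPM reconstruction): when fitting a constant level, the ℓ2 minimizer is the mean and the ℓ1 minimizer is the median, and only the latter shrugs off a gross outlier such as a failed electrode measurement.

```python
def l2_fit(data):
    """ℓ2 (least-squares) estimate of a constant level: the mean."""
    return sum(data) / len(data)

def l1_fit(data):
    """ℓ1 (least-absolute-deviations) estimate of a constant level: the median."""
    s = sorted(data)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

clean = [1.0, 1.1, 0.9, 1.05, 0.95]
corrupted = clean + [50.0]   # one gross outlier, e.g. a bad electrode channel

# The ℓ2 estimate is dragged far from 1.0 by the outlier; the ℓ1 estimate is not.
```

The same mechanism, generalized to regularized image reconstruction, is what makes ℓ1 data terms attractive in clinical settings with unavoidable measurement errors.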
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-03-01
To reach higher order thinking skill, needed to be mastered the conceptual understanding and strategic competence as they are two basic parts of high order thinking skill (HOTS). RMT is a unique realization of the cognitive conceptual construction approach based on Feurstein with his theory of Mediated Learning Experience (MLE) and Vygotsky’s sociocultural theory. This was quasi-experimental research which compared the experimental class that was given Rigorous Mathematical Thinking (RMT) as learning method and the control class that was given Direct Learning (DL) as the conventional learning activity. This study examined whether there was different effect of two learning model toward conceptual understanding and strategic competence of Junior High School Students. The data was analyzed by using Multivariate Analysis of Variance (MANOVA) and obtained a significant difference between experimental and control class when considered jointly on the mathematics conceptual understanding and strategic competence (shown by Wilk’s Λ = 0.84). Further, by independent t-test is known that there was significant difference between two classes both on mathematical conceptual understanding and strategic competence. By this result is known that Rigorous Mathematical Thinking (RMT) had positive impact toward Mathematics conceptual understanding and strategic competence.
The Markov process admits a consistent steady-state thermodynamic formalism
NASA Astrophysics Data System (ADS)
Peng, Liangrong; Zhu, Yi; Hong, Liu
2018-01-01
The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism is established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding steady-state thermodynamic formalisms for the master equation and the Fokker-Planck equation can be rigorously derived from it. Concretely, we prove that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and the Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also reduces to that for the master equation as the discretization step becomes smaller and smaller. Our analysis indicates that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.
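A minimal numerical sketch of the steady-state notion underlying such formalisms (a hypothetical two-state example, not taken from the paper): the stationary distribution π of a discrete-time Markov chain satisfies πP = π and can be found by power iteration.

```python
def steady_state(P, tol=1e-12, max_iter=100_000):
    """Stationary distribution pi of a row-stochastic matrix P (pi P = pi),
    computed by repeatedly applying P to an initial distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Two-state chain: jump 0 -> 1 with probability 0.1, 1 -> 0 with 0.2.
# Detailed balance gives pi = [2/3, 1/3].
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = steady_state(P)
```

Each application of P preserves normalization, so the iterate remains a probability distribution throughout.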
A Constructive Response to "Where Mathematics Comes From."
ERIC Educational Resources Information Center
Schiralli, Martin; Sinclair, Nathalie
2003-01-01
Reviews Lakoff and Nunez's book, "Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being" (2000), which provided many mathematics education researchers with a novel and startling perspective on mathematical thinking. Suggests that several of the book's flaws can be addressed through a more rigorous establishment of…
Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors
ERIC Educational Resources Information Center
Brogt, Erik; Draeger, John D.
2015-01-01
We discuss a model of academic rigor and apply this to a general education introductory astronomy course. We argue that even without central tenets of professional astronomy-the use of mathematics--the course can still be considered academically rigorous when expectations, goals, assessments, and curriculum are properly aligned.
Sukumaran, Anuraj T; Holtcamp, Alexander J; Campbell, Yan L; Burnett, Derris; Schilling, Mark W; Dinh, Thu T N
2018-06-07
The objective of this study was to determine the effects of deboning time (pre- and post-rigor), processing steps (grinding - GB; salting - SB; batter formulation - BB), and storage time on the quality of raw beef mixtures and vacuum-packaged cooked sausage, produced using a commercial formulation with 0.25% phosphate. The pH was greater in pre-rigor GB and SB than in post-rigor GB and SB (P < .001). However, deboning time had no effect on metmyoglobin reducing activity, cooking loss, and color of raw beef mixtures. Protein solubility of pre-rigor beef mixtures (124.26 mg/kg) was greater than that of post-rigor beef (113.93 mg/kg; P = .071). TBARS were increased in BB but decreased during vacuum storage of cooked sausage (P ≤ .018). Except for chewiness and saltiness being 52.9 N-mm and 0.3 points greater in post-rigor sausage (P = .040 and 0.054, respectively), texture profile analysis and trained panelists detected no difference in texture between pre- and post-rigor sausage. Published by Elsevier Ltd.
Rigorous Science: a How-To Guide.
Casadevall, Arturo; Fang, Ferric C
2016-11-08
Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word "rigor" is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. Copyright © 2016 Casadevall and Fang.
David crighton, 1942-2000: a commentary on his career and his influence on aeroacoustic theory
NASA Astrophysics Data System (ADS)
Ffowcs Williams, John E.
David Crighton, a greatly admired figure in fluid mechanics, Head of the Department of Applied Mathematics and Theoretical Physics at Cambridge, and Master of Jesus College, Cambridge, died at the peak of his career. He had made important contributions to the theory of waves generated by unsteady flow. Crighton's work was always characterized by the application of rigorous mathematical approximations to fluid mechanical idealizations of practically relevant problems. At the time of his death, he was certainly the most influential British applied mathematical figure, and his former collaborators and students form a strong school that continues his special style of mathematical application. Rigorous analysis of well-posed aeroacoustical problems was transformed by David Crighton.
What We Do: A Multiple Case Study from Mathematics Coaches' Perspectives
ERIC Educational Resources Information Center
Kane, Barbara Ann
2013-01-01
Teachers face new challenges when they teach a more rigorous mathematics curriculum than one to which they are accustomed. The rationale for this particular study originated from watching teachers struggle with understanding mathematical content and pedagogical practices. Mathematics coaches can address teachers' concerns through sustained,…
NASA Astrophysics Data System (ADS)
Hidayat, D.; Nurlaelah, E.; Dahlan, J. A.
2017-09-01
Mathematical creative thinking and critical thinking are two abilities that need to be developed in the learning of mathematics, so efforts are needed to design learning capable of developing both. The purpose of this research is to examine the mathematical creative and critical thinking abilities of students taught with the rigorous mathematical thinking (RMT) approach and of students taught with an expository approach. This research was a quasi-experiment with a control-group pretest-posttest design. The population was all students of grade 11 in one senior high school in Bandung. The results showed that the achievement in mathematical creative and critical thinking abilities of students who received RMT was better than that of students who received the expository approach. The use of psychological tools and mediation with the criteria of intentionality, reciprocity, and mediation of meaning in RMT helps students develop the conditions for critical and creative processes. This achievement contributes to the development of integrated learning design for students’ critical and creative thinking processes.
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
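The third-order Hermite interpolation the model builds on can be sketched in a few lines (the functions below are the standard cubic Hermite basis on [0, 1]; the MOSFET-specific mapping from surface potential to inversion charge is not reproduced here).

```python
def hermite_cubic(p0, m0, p1, m1, t):
    """Third-order Hermite interpolant on [0, 1]: matches the values p0, p1
    and the derivatives m0, m1 at the two endpoints."""
    h00 = 2*t**3 - 3*t**2 + 1     # standard cubic Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1
```

Because the interpolant matches both values and slopes at the interval ends, it can track a curved relation (such as charge versus potential) without the crude linearization the abstract mentions.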
From Faddeev-Kulish to LSZ. Towards a non-perturbative description of colliding electrons
NASA Astrophysics Data System (ADS)
Dybalski, Wojciech
2017-12-01
In a low energy approximation of the massless Yukawa theory (Nelson model) we derive a Faddeev-Kulish type formula for the scattering matrix of N electrons and reformulate it in LSZ terms. To this end, we perform a decomposition of the infrared finite Dollard modifier into clouds of real and virtual photons, whose infrared divergencies mutually cancel. We point out that in the original work of Faddeev and Kulish the clouds of real photons are omitted, and consequently their wave-operators are ill-defined on the Fock space of free electrons. To support our observations, we compare our final LSZ expression for N = 1 with a rigorous non-perturbative construction due to Pizzo. While our discussion contains some heuristic steps, they can be formulated as clear-cut mathematical conjectures.
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
Thermochemical nonequilibrium in atomic hydrogen at elevated temperatures
NASA Technical Reports Server (NTRS)
Scott, R. K.
1972-01-01
A numerical study of the nonequilibrium flow of atomic hydrogen in a cascade arc was performed to obtain insight into the physics of the hydrogen cascade arc. A rigorous mathematical model of the flow problem was formulated, incorporating the important nonequilibrium transport phenomena and atomic processes which occur in atomic hydrogen. Realistic boundary conditions, including consideration of the wall electrostatic sheath phenomenon, were included in the model. The governing equations of the asymptotic region of the cascade arc were obtained by writing conservation of mass and energy equations for the electron subgas, an energy conservation equation for heavy particles, and an equation of state. Finite-difference operators for variable grid spacing were applied to the governing equations, and the resulting system of strongly coupled, stiff equations was solved numerically by the Newton-Raphson method.
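The Newton-Raphson treatment of a coupled nonlinear system can be sketched on a hypothetical 2x2 toy system with a hand-coded Jacobian (an illustration of the solver only, not the cascade-arc equations).

```python
def newton_2d(f, jac, x0, y0, tol=1e-12, max_iter=50):
    """Newton-Raphson for a coupled 2x2 nonlinear system f(x, y) = (0, 0).
    jac returns the Jacobian entries (a, b, c, d) of [[a, b], [c, d]]."""
    x, y = x0, y0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        a, b, c, d = jac(x, y)
        det = a*d - b*c
        dx = (-f1*d + f2*b) / det   # Cramer's rule for J * [dx, dy] = -[f1, f2]
        dy = (-f2*a + f1*c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Toy coupled system: x^2 + y^2 = 4 and x*y = 1.
f = lambda x, y: (x*x + y*y - 4.0, x*y - 1.0)
jac = lambda x, y: (2*x, 2*y, y, x)
x, y = newton_2d(f, jac, 2.0, 0.5)
```

Near a root the iteration converges quadratically, which is what makes the method attractive for stiff, strongly coupled systems provided a reasonable starting guess is available.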
On Stable Marriages and Greedy Matchings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manne, Fredrik; Naim, Md; Lerring, Hakon
2016-12-11
Research on stable marriage problems has a long and mathematically rigorous history, while that of exploiting greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular we show that several problems related to computing greedy matchings can be formulated as stable marriage problems, and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well-known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that due to the strong relationship between these two fields many of these results are also applicable for solving stable marriage problems.
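The best-known algorithm for the stable marriage problem is Gale-Shapley deferred acceptance; a minimal sketch with hypothetical two-person preference lists (the paper's greedy-matching reductions are not reproduced here):

```python
def gale_shapley(men_prefs, women_prefs):
    """Gale-Shapley deferred acceptance: returns a stable matching mapping
    each man to a woman. Preference lists are ordered best-first."""
    rank = {w: {m: i for i, m in enumerate(pref)}
            for w, pref in women_prefs.items()}
    free = list(men_prefs)
    next_prop = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged = {}                            # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])         # w trades up; old partner is free again
            engaged[w] = m
        else:
            free.append(m)                  # w rejects m; he will propose again
    return {m: w for w, m in engaged.items()}

men = {'a': ['x', 'y'], 'b': ['y', 'x']}
women = {'x': ['b', 'a'], 'y': ['a', 'b']}
match = gale_shapley(men, women)
```

The proposer side gets its best achievable partners: here each man receives his first choice, and no blocking pair exists.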
Seismic waves in a self-gravitating planet
NASA Astrophysics Data System (ADS)
Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther
2013-04-01
The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.
On the characterization of the heterogeneous mechanical response of human brain tissue.
Forte, Antonio E; Gentleman, Stephen M; Dini, Daniele
2017-06-01
The mechanical characterization of brain tissue is a complex task that scientists have tried to accomplish for over 50 years. The results in the literature often differ by orders of magnitude because of the lack of a standard testing protocol. Different testing conditions (including humidity, temperature, strain rate), the methodology adopted, and the variety of the species analysed are all potential sources of discrepancies in the measurements. In this work, we present a rigorous experimental investigation on the mechanical properties of human brain, covering both grey and white matter. The influence of testing conditions is also shown and thoroughly discussed. The material characterization performed is finally adopted to provide inputs to a mathematical formulation suitable for numerical simulations of brain deformation during surgical procedures.
Overview of Aro Program on Network Science for Human Decision Making
NASA Astrophysics Data System (ADS)
West, Bruce J.
This program brings together researchers from disparate disciplines to work on a complex research problem that defies confinement within any single discipline. Consequently, not only are new and rewarding solutions sought and obtained for a problem of importance to society and the Army, that is, the human dimension of complex networks, but, in addition, collaborations are established that would not otherwise have formed given the traditional disciplinary compartmentalization of research. This program develops the basic research foundation of a science of networks supporting the linkage between the physical and human (cognitive and social) domains as they relate to human decision making. The strategy is to extend the recent methods of non-equilibrium statistical physics to non-stationary, renewal stochastic processes that appear to be characteristic of the interactions among nodes in complex networks. We also pursue understanding of the phenomenon of synchronization, whose mathematical formulation has recently provided insight into how complex networks reach accommodation and cooperation. The theoretical analyses of complex networks, although mathematically rigorous, often elude analytic solutions and require computer simulation and computation to analyze the underlying dynamic process.
ERIC Educational Resources Information Center
Jackson, Christa; Jong, Cindy
2017-01-01
Teaching mathematics for equity is critical because it provides opportunities for all students, especially those who have been traditionally marginalised, to learn mathematics that is rigorous and relevant to their lives. This article reports on our work, as mathematics teacher educators, on exposing and engaging 60 elementary preservice teachers…
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
A Mathematical Evaluation of the Core Conductor Model
Clark, John; Plonsey, Robert
1966-01-01
This paper is a mathematical evaluation of the core conductor model where its three dimensionality is taken into account. The problem considered is that of a single, active, unmyelinated nerve fiber situated in an extensive, homogeneous, conducting medium. Expressions for the various core conductor parameters have been derived in a mathematically rigorous manner according to the principles of electromagnetic theory. The purpose of employing mathematical rigor in this study is to bring to light the inherent assumptions of the one dimensional core conductor model, providing a method of evaluating the accuracy of this linear model. Based on the use of synthetic squid axon data, the conclusion of this study is that the linear core conductor model is a good approximation for internal but not external parameters. PMID:5903155
Spline-Based Smoothing of Airfoil Curvatures
NASA Technical Reports Server (NTRS)
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
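The deviation-from-smoothness measure described above, the sum of squared jumps in the third derivative of a cubic-spline interpolant, can be sketched as follows (a minimal natural-spline version, not the CFACS code itself). The third derivative of a cubic spline is piecewise constant, so its jumps live at the interior knots.

```python
def third_derivative_jump_measure(x, y):
    """Sum of squared jumps of the third derivative of the natural cubic
    spline interpolant of (x, y), evaluated at the interior knots."""
    n = len(x)
    h = [x[i+1] - x[i] for i in range(n - 1)]
    # Tridiagonal system for the spline's second derivatives M (natural ends:
    # M[0] = M[n-1] = 0, encoded by the trivial first and last rows).
    a = [0.0]*n; b = [1.0]*n; c = [0.0]*n; d = [0.0]*n
    for i in range(1, n - 1):
        a[i] = h[i-1]
        b[i] = 2.0 * (h[i-1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((y[i+1] - y[i]) / h[i] - (y[i] - y[i-1]) / h[i-1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        w = a[i] / b[i-1]
        b[i] -= w * c[i-1]
        d[i] -= w * d[i-1]
    M = [0.0] * n
    M[n-1] = d[n-1] / b[n-1]
    for i in range(n - 2, -1, -1):
        M[i] = (d[i] - c[i] * M[i+1]) / b[i]
    # Piecewise-constant third derivative on each interval, then its jumps.
    s3 = [(M[i+1] - M[i]) / h[i] for i in range(n - 1)]
    return sum((s3[i] - s3[i-1])**2 for i in range(1, n - 1))
```

For data sampled from a straight line the measure vanishes, while any genuine curvature variation contributes positively, which is what makes it usable as a penalty in a constrained fit.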
The impact of rigorous mathematical thinking as learning method toward geometry understanding
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-05-01
To reach higher-order thinking skills, students need to master conceptual understanding. Rigorous Mathematical Thinking (RMT) is a realization of the cognitive conceptual construction approach based on Feuerstein's theory of Mediated Learning Experience (MLE) and Vygotsky's sociocultural theory. This was quasi-experimental research comparing an experimental class taught with RMT as the learning method and a control class taught with Direct Learning (DL), the conventional learning activity. This study examined whether the two learning methods had different effects on the conceptual understanding of junior high school students. The data were analyzed using an independent t-test, which showed a significant difference in mean value between the experimental and control classes on geometry conceptual understanding. Further, semi-structured interviews revealed that students taught with RMT had deeper conceptual understanding than students taught in the conventional way. These results indicate that Rigorous Mathematical Thinking (RMT) as a learning method has a positive impact on geometry conceptual understanding.
Secondary School Advanced Mathematics, Chapter 3, Formal Geometry. Student's Text.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This text is the second of five in the Secondary School Advanced Mathematics (SSAM) series which was designed to meet the needs of students who have completed the Secondary School Mathematics (SSM) program, and wish to continue their study of mathematics. This volume is devoted to a rigorous development of theorems in plane geometry from 22…
ERIC Educational Resources Information Center
Chard, David J.; Baker, Scott K.; Clarke, Ben; Jungjohann, Kathleen; Davis, Karen; Smolkowski, Keith
2008-01-01
Concern about poor mathematics achievement in U.S. schools has increased in recent years. In part, poor achievement may be attributed to a lack of attention to early instruction and missed opportunities to build on young children's early understanding of mathematics. This study examined the development and feasibility testing of a kindergarten…
ERIC Educational Resources Information Center
Gersten, Russell
2016-01-01
In this commentary, the author reflects on four studies that have greatly expanded the knowledge base on effective interventions in mathematics, and he provides four rigorous experimental studies of approaches for students likely to experience difficulties learning mathematics over a large grade-level span (pre-K to 4th grade). All of the…
ERIC Educational Resources Information Center
Seeley, Cathy
2004-01-01
This article addresses some important issues in mathematics instruction at the middle and secondary levels, including the structuring of a district's mathematics program; the choice of textbooks and use of calculators in the classroom; the need for more rigorous lesson planning practices; and the dangers of teaching to standardized tests rather…
Advanced Mathematical Thinking
ERIC Educational Resources Information Center
Dubinsky, Ed; McDonald, Michael A.; Edwards, Barbara S.
2005-01-01
In this article we propose the following definition for advanced mathematical thinking: Thinking that requires deductive and rigorous reasoning about mathematical notions that are not entirely accessible to us through our five senses. We argue that this definition is not necessarily tied to a particular kind of educational experience; nor is it…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singha, Sanat K.; Das, Prasanta K., E-mail: pkd@mech.iitkgp.ernet.in; Maiti, Biswajit
2015-03-14
A rigorous thermodynamic formulation of the geometric model for heterogeneous nucleation including the line tension effect has been missing to date because of the associated mathematical hurdles. In this work, we develop a novel thermodynamic formulation based on Classical Nucleation Theory (CNT) that provides a systematic and more plausible analysis of heterogeneous nucleation on a planar surface including the line tension effect. The admissible range of the critical microscopic contact angle θ_c, obtained from the generalized Young's equation and the stability analysis, is θ_∞ < θ_c < θ′ for positive line tension and θ_M < θ_c < θ_∞ for negative line tension. Here θ_∞ is the macroscopic contact angle, θ′ is the contact angle at which the Helmholtz free energy is minimized for positive line tension, and θ_M is the local minimum of the nondimensional line tension effect for negative line tension. The shape factor f, which is essentially the dimensionless critical free-energy barrier, becomes higher for lower values of θ_∞ and higher values of θ_c for positive line tension. The combined contribution of the triple line and the interfacial areas to the shape factor, f^L + f^S, always lies within (0, 3.2), so that f lies in the range (0, 1.7) for positive line tension. The previously presumed admissible range 0 < θ_c < θ_∞ is found not to hold when the effect of negative line tension is considered within CNT. Estimates based on the property values of some real fluids confirm the relevance of the present analysis.
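The generalized Young's equation invoked in the abstract is commonly written in the following standard form (shown for orientation only; the paper's exact notation and sign conventions may differ):

```latex
% Generalized Young's equation with line tension (standard textbook form).
% \tau is the line tension, \sigma_{lv} the liquid--vapor surface tension,
% and r the radius of the three-phase contact line.
\cos\theta_c \;=\; \cos\theta_\infty \;-\; \frac{\tau}{\sigma_{lv}\, r}
```

For positive τ the microscopic contact angle θ_c exceeds θ_∞, consistent with the range θ_∞ < θ_c < θ′ stated above.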
Rigorous Science: a How-To Guide
Fang, Ferric C.
2016-01-01
ABSTRACT Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word “rigor” is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. PMID:27834205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oryu, S.; Nishinohara, S.; Sonoda, K.
The three-charged-particle Faddeev-type equations for a full potential system are presented in momentum space. The potential is composed of a short-range two-body nuclear potential and a three-body-force potential, plus the long-range Coulomb potential. A novel framework containing two innovations is proposed, aimed at a breakthrough for the notoriously troublesome long-range behavior of charged-particle systems and the tedious Coulomb prescriptions in momentum-space calculations. One innovation is the introduction of a Coulomb boundary condition; the other is a new definition of the Coulomb amplitude using two-potential theory for V_C = V_R + V_φ, with respect to a screened Coulomb potential V_R and the remainder V_φ = V_C − V_R. Some important equations that underlie our approach are proved mathematically. The formulation is not only rigorous but also useful for numerical calculations.
NASA Astrophysics Data System (ADS)
Qian, Hong; Kjelstrup, Signe; Kolomeisky, Anatoly B.; Bedeaux, Dick
2016-04-01
Nonequilibrium thermodynamics (NET) investigates processes in systems out of global equilibrium. On a mesoscopic level, it provides a statistical dynamic description of various complex phenomena such as chemical reactions, ion transport, diffusion, thermochemical, thermomechanical and mechanochemical fluxes. In the present review, we introduce a mesoscopic stochastic formulation of NET by analyzing entropy production in several simple examples. The fundamental role of nonequilibrium steady-state cycle kinetics is emphasized. The statistical mechanics of Onsager’s reciprocal relations in this context is elucidated. Chemomechanical, thermomechanical, and enzyme-catalyzed thermochemical energy transduction processes are discussed. It is argued that mesoscopic stochastic NET in phase space provides a rigorous mathematical basis of fundamental concepts needed for understanding complex processes in chemistry, physics and biology. This theory is also relevant for nanoscale technological advances.
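The entropy production in steady-state cycle kinetics that the review emphasizes can be illustrated with the standard three-state driven cycle; the transition rates below are hypothetical, and the Schnakenberg form of the entropy production rate (with k_B = 1) is the textbook expression, not the review's specific derivation.

```python
# Hedged sketch: entropy production rate of a driven three-state cycle,
# a standard example of mesoscopic stochastic NET. Rates are hypothetical.
from math import log

# Transition rates w[i][j]: rate from state i to state j (i != j).
w = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 1.0, 0.0]]
n = 3

# Relax the master equation dp_i/dt = sum_j (p_j w_ji - p_i w_ij)
# to its nonequilibrium steady state by explicit Euler steps.
p = [1.0 / n] * n
dt = 0.01
for _ in range(20000):
    dp = [sum(p[j] * w[j][i] - p[i] * w[i][j] for j in range(n))
          for i in range(n)]
    p = [p[i] + dt * dp[i] for i in range(n)]

# Schnakenberg entropy production rate (k_B = 1):
# sigma = (1/2) sum_{i,j} (p_i w_ij - p_j w_ji) ln(p_i w_ij / (p_j w_ji))
sigma = 0.5 * sum((p[i] * w[i][j] - p[j] * w[j][i])
                  * log(p[i] * w[i][j] / (p[j] * w[j][i]))
                  for i in range(n) for j in range(n) if i != j)
print(f"steady state p = {[round(x, 3) for x in p]}, sigma = {sigma:.3f}")
```

Because the product of forward rates around the cycle (2·3·2) differs from the backward product (1·1·1), detailed balance is broken and sigma is strictly positive, the signature of a nonequilibrium steady state.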
On the relation between phase-field crack approximation and gradient damage modelling
NASA Astrophysics Data System (ADS)
Steinke, Christian; Zreid, Imadeddin; Kaliske, Michael
2017-05-01
The finite element implementation of a gradient enhanced microplane damage model is compared to a phase-field model for brittle fracture. Phase-field models and implicit gradient damage models share many similarities despite being conceived from very different standpoints. In both approaches, an additional differential equation and a length scale are introduced. However, while the phase-field method is formulated starting from the description of a crack in fracture mechanics, the gradient method starts from a continuum mechanics point of view. At first, the scope of application for both models is discussed to point out intersections. Then, the analysis of the employed mathematical methods and their rigorous comparison are presented. Finally, numerical examples are introduced to illustrate the findings of the comparison which are summarized in a conclusion at the end of the paper.
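The regularized phase-field energy the comparison starts from is commonly written in the following Bourdin–Francfort–Marigo form (a standard reference form, not necessarily the exact model of the paper being compared):

```latex
% Regularized phase-field energy for brittle fracture (standard form).
% d is the crack phase field (d = 1 on the crack), l the length scale,
% G_c the critical fracture energy, \psi the elastic energy density.
E(\mathbf{u}, d) \;=\; \int_\Omega (1-d)^2\,
    \psi\bigl(\boldsymbol{\varepsilon}(\mathbf{u})\bigr)\,\mathrm{d}V
  \;+\; G_c \int_\Omega \left( \frac{d^2}{2\,l}
    + \frac{l}{2}\,\lvert\nabla d\rvert^2 \right) \mathrm{d}V
```

The stationarity condition for d supplies the additional differential equation mentioned in the abstract, with the regularization length l playing the role of the internal length scale in the gradient damage model.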
Teaching Mathematics to Civil Engineers
ERIC Educational Resources Information Center
Sharp, J. J.; Moore, E.
1977-01-01
This paper outlines a technique for teaching a rigorous course in calculus and differential equations which stresses applicability of the mathematics to problems in civil engineering. The method involves integration of subject matter and team teaching. (SD)
Multiplicative Multitask Feature Learning
Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu
2016-01-01
We investigate a general framework of multiplicative multitask feature learning which decomposes individual task’s model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
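The multiplicative decomposition the abstract describes, with each task's weight vector the elementwise product of a shared component and a task-specific one, can be sketched with a toy blockwise update. The squared loss, ridge penalties, gradient-step updates, and all data below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of multiplicative multitask feature learning: each task's
# weights are w_t = c * v_t (elementwise), with c shared across tasks and
# v_t task-specific, fitted by alternating blockwise gradient steps.
import random

random.seed(0)
d, tasks, n = 3, 2, 40
true_c = [1.0, 1.0, 0.0]                       # third feature irrelevant to all tasks
true_v = [[2.0, -1.0, 5.0], [-1.5, 0.5, 5.0]]  # task-specific components

X = [[[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
     for _ in range(tasks)]
y = [[sum(true_c[k] * true_v[t][k] * X[t][i][k] for k in range(d))
      for i in range(n)] for t in range(tasks)]

c = [1.0] * d
v = [[0.0] * d for _ in range(tasks)]
lam, lr = 0.01, 0.01

def residual(t, i):
    return sum(c[k] * v[t][k] * X[t][i][k] for k in range(d)) - y[t][i]

for _ in range(300):
    # Block 1: update each task-specific component v_t, holding c fixed.
    for t in range(tasks):
        grad = [sum(2 * residual(t, i) * c[k] * X[t][i][k] for i in range(n)) / n
                + 2 * lam * v[t][k] for k in range(d)]
        for k in range(d):
            v[t][k] -= lr * grad[k]
    # Block 2: update the shared component c, holding all v_t fixed.
    grad_c = [sum(2 * residual(t, i) * v[t][k] * X[t][i][k]
                  for t in range(tasks) for i in range(n)) / (tasks * n)
              + 2 * lam * c[k] for k in range(d)]
    for k in range(d):
        c[k] -= lr * grad_c[k]

w = [[c[k] * v[t][k] for k in range(d)] for t in range(tasks)]
print("recovered task weights:", [[round(x, 2) for x in wt] for wt in w])
```

The ridge penalty on both blocks drives the product weight for the globally irrelevant third feature toward zero, illustrating the shrinkage effect of joint regularization that the paper analyzes.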
ERIC Educational Resources Information Center
Cobbs, Joyce Bernice
2014-01-01
The literature on minority student achievement indicates that Black students are underrepresented in advanced mathematics courses. Advanced mathematics courses offer students the opportunity to engage with challenging curricula, experience rigorous instruction, and interact with quality teachers. The middle school years are particularly…
Community College Pathways: A Descriptive Report of Summative Assessments and Student Learning
ERIC Educational Resources Information Center
Strother, Scott; Sowers, Nicole
2014-01-01
Carnegie's Community College Pathways (CCP) offers two pathways, Statway® and Quantway®, that reduce the amount of time required to complete developmental mathematics and earn college-level mathematics credit. The Pathways aim to improve student success in mathematics while maintaining rigorous content, pedagogy, and learning outcomes. It is…
Teacher Efficacy of High School Mathematics Co-Teachers
ERIC Educational Resources Information Center
Rimpola, Raquel C.
2011-01-01
High school mathematics inclusion classes help provide all students the access to rigorous curriculum. This study provides information about the teacher efficacy of high school mathematics co-teachers. It considers the influence of the amount of collaborative planning time on the efficacy of co-teachers. A quantitative research design was used,…
Mathematical Rigor in the Common Core
ERIC Educational Resources Information Center
Hull, Ted H.; Balka, Don S.; Miles, Ruth Harbin
2013-01-01
A whirlwind of activity surrounds the topic of teaching and learning mathematics. The driving forces are a combination of changes in assessment and advances in technology that are being spurred on by the introduction of content in the Common Core State Standards for Mathematical Practice. Although the issues are certainly complex, the same forces…
Reducible or irreducible? Mathematical reasoning and the ontological method.
Fisher, William P
2010-01-01
Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging qualitative and quantitative value. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science is explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.
Mathematical Modeling of Diverse Phenomena
NASA Technical Reports Server (NTRS)
Howard, J. C.
1979-01-01
Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.
NASA Technical Reports Server (NTRS)
Goorevich, C. E.
1975-01-01
The mathematical formulation is presented of CNTRLF, the maneuver control program for the Applications Technology Satellite-F (ATS-F). The purpose is to specify the mathematical models that are included in the design of CNTRLF.
On decentralized design: Rationale, dynamics, and effects on decision-making
NASA Astrophysics Data System (ADS)
Chanron, Vincent
The focus of this dissertation is the design of complex systems, including engineering systems such as cars, airplanes, and satellites. Companies who design these systems are under constant pressure to design better products that meet customer expectations, and competition forces them to develop them faster. One of the responses of the industry to these conflicting challenges has been the decentralization of design responsibilities. The current lack of understanding of the dynamics of decentralized design processes is the main motivation for this research, and places value on its descriptive base. The dissertation identifies the main reasons and the true benefits for companies to decentralize the design of their products. It also demonstrates the limitations of this approach by listing the relevant issues and problems created by the decentralization of decisions. Based on these observations, a game-theoretic approach to decentralized design is proposed to model the decisions made during the design process. The dynamics are modeled using mathematical formulations inspired by control theory. Building upon this formalism, the issue of convergence in decentralized design is analyzed: the equilibrium points of the design space are identified, and convergent and divergent patterns are recognized. This rigorous investigation of the design process provides motivation and support for proposing new approaches to decentralized design problems. Two methods are developed, which aim to improve the design process in two ways: decreasing product development time and increasing the optimality of the final design. These methods are framed by eigenstructure decomposition and set-based design, respectively. The value of the research detailed in this dissertation lies in the proposed methods, which are built upon the sound mathematical formalism developed.
The contribution of this work is twofold: a rigorous investigation of the design process, and practical support for decision-making in decentralized environments.
Methodological Developments in Geophysical Assimilation Modeling
NASA Astrophysics Data System (ADS)
Christakos, George
2005-06-01
This work presents recent methodological developments in geophysical assimilation research. We revisit the meaning of the term "solution" of a mathematical model representing a geophysical system, and we examine its operational formulations. We argue that an assimilation solution based on epistemic cognition (which assumes that the model describes incomplete knowledge about nature and focuses on conceptual mechanisms of scientific thinking) could lead to more realistic representations of the geophysical situation than a conventional ontologic assimilation solution (which assumes that the model describes nature as is and focuses on form manipulations). Conceptually, the two approaches are fundamentally different. Unlike the reasoning structure of conventional assimilation modeling that is based mainly on ad hoc technical schemes, the epistemic cognition approach is based on teleologic criteria and stochastic adaptation principles. In this way some key ideas are introduced that could open new areas of geophysical assimilation to detailed understanding in an integrated manner. A knowledge synthesis framework can provide the rational means for assimilating a variety of knowledge bases (general and site specific) that are relevant to the geophysical system of interest. Epistemic cognition-based assimilation techniques can produce a realistic representation of the geophysical system, provide a rigorous assessment of the uncertainty sources, and generate informative predictions across space-time. The mathematics of epistemic assimilation involves a powerful and versatile spatiotemporal random field theory that imposes no restriction on the shape of the probability distributions or the form of the predictors (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated) and accounts rigorously for the uncertainty features of the geophysical system. 
In the epistemic cognition context the assimilation concept may be used to investigate critical issues related to knowledge reliability, such as uncertainty due to model structure error (conceptual uncertainty).
Stochastic Geometry and Quantum Gravity: Some Rigorous Results
NASA Astrophysics Data System (ADS)
Zessin, H.
The aim of these lectures is a short introduction into some recent developments in stochastic geometry which have one of its origins in simplicial gravity theory (see Regge Nuovo Cimento 19: 558-571, 1961). The aim is to define and construct rigorously point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest then is concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent representation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008. German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry Cambridge University Press, Cambridge, 1997) you find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807. Springer, Heidelberg, 2010). After an informal axiomatic introduction into the conceptual foundations of Regge's approach the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We terminate with the formulation of basic open problems. Proofs are given in detail only in a few cases. In general the main ideas are developed. Sufficiently complete references are given.
STEM Pathways: Examining Persistence in Rigorous Math and Science Course Taking
NASA Astrophysics Data System (ADS)
Ashford, Shetay N.; Lanehart, Rheta E.; Kersaint, Gladis K.; Lee, Reginald S.; Kromrey, Jeffrey D.
2016-12-01
From 2006 to 2012, Florida Statute §1003.4156 required middle school students to complete electronic personal education planners (ePEPs) before promotion to ninth grade. The ePEP helped them identify programs of study and required high school coursework to accomplish their postsecondary education and career goals. During the same period Florida required completion of the ePEP, Florida's Career and Professional Education Act stimulated a rapid increase in the number of statewide high school career academies. Students with interests in STEM careers created STEM-focused ePEPs and may have enrolled in STEM career academies, which offered a unique opportunity to improve their preparedness for the STEM workforce through the integration of rigorous academic and career and technical education courses. This study examined persistence of STEM-interested (i.e., those with expressed interest in STEM careers) and STEM-capable (i.e., those who completed at least Algebra 1 in eighth grade) students ( n = 11,248), including those enrolled in STEM career academies, in rigorous mathematics and science course taking in Florida public high schools in comparison with the national cohort of STEM-interested students to measure the influence of K-12 STEM education efforts in Florida. With the exception of multi-race students, we found that Florida's STEM-capable students had lower persistence in rigorous mathematics and science course taking than students in the national cohort from ninth to eleventh grade. We also found that participation in STEM career academies did not support persistence in rigorous mathematics and science courses, a prerequisite for success in postsecondary STEM education and careers.
NASA Astrophysics Data System (ADS)
Ipsen, Andreas; Ebbels, Timothy M. D.
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
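The TDC saturation the article addresses arises because a time-to-digital converter records at most one ion per extraction ("push") per time bin. Under a Poisson arrival model, the classic nonparalyzable correction recovers the true mean rate from the saturated counts; this is the textbook correction, not necessarily the article's more refined procedure, and all parameter values are hypothetical.

```python
# Hedged sketch: classic TDC dead-time/saturation correction. With Poisson
# arrivals at mean rate lam per push, the probability of at least one ion
# (and hence one recorded count) per push is 1 - exp(-lam), so the rate can
# be estimated from k detections in N pushes as -ln(1 - k/N).
from math import log, exp

def corrected_rate(k, pushes):
    """Estimate mean ion arrivals per push from k detections in N pushes."""
    frac = k / pushes
    if frac >= 1.0:
        raise ValueError("detector fully saturated; rate not recoverable")
    return -log(1.0 - frac)

N = 10000
lam = 0.8                                   # true mean arrivals per push (hypothetical)
expected_detections = N * (1 - exp(-lam))   # what an ideal TDC would record
print(corrected_rate(expected_detections, N))
```

The raw fraction 1 − e^{−0.8} ≈ 0.55 badly underestimates the true rate of 0.8, which is exactly the loss of effective dynamic range that the article's statistical framework aims to push back.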
Mathematical Models for Controlled Drug Release Through pH-Responsive Polymeric Hydrogels.
Manga, Ramya D; Jha, Prateek K
2017-02-01
Hydrogels consisting of weakly charged acidic/basic groups are ideal candidates for carriers in oral delivery, as they swell in response to pH changes in the gastrointestinal tract, resulting in drug entrapment at low pH conditions of the stomach and drug release at high pH conditions of the intestine. We have developed 1-dimensional mathematical models to study the drug release behavior through pH-responsive hydrogels. Models are developed for 3 different cases that vary in the level of rigor, which together can be applied to predict both in vitro (drug release from carrier) and in vivo (drug concentration in the plasma) behavior of hydrogel-drug formulations. A detailed study of the effect of hydrogel and drug characteristics and physiological conditions is performed to gain a fundamental insight into the drug release behavior, which may be useful in the design of pH-responsive drug carriers. Finally, we describe a successful application of these models to predict both in vitro and in vivo behavior of docetaxel-loaded micelle in a pH-responsive hydrogel, as reported in a recent experimental study. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
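The least rigorous limiting case of such a 1D release model, pure Fickian diffusion out of a plane sheet, can be sketched with Crank's classical series solution. This stands in for the simplest of the three cases; the parameter values are hypothetical and the paper's pH-dependent swelling terms are omitted.

```python
# Hedged sketch: fractional drug release Mt/Minf from a plane sheet of
# thickness L releasing from both faces (Crank's series solution for
# Fickian diffusion); a stand-in for the models' simplest case.
from math import exp, pi

def fraction_released(D, L, t, terms=50):
    """Mt/Minf for a sheet of thickness L releasing from both faces."""
    s = 0.0
    for m in range(terms):
        a = (2 * m + 1) ** 2 * pi ** 2
        s += (8.0 / a) * exp(-a * D * t / L ** 2)
    return 1.0 - s

D = 1e-11   # drug diffusivity in the swollen gel, m^2/s (hypothetical)
L = 1e-3    # hydrogel sheet thickness, m (hypothetical)
for hours in (1, 6, 24):
    t = hours * 3600.0
    print(f"{hours:>2} h: {fraction_released(D, L, t):.1%} released")
```

The release fraction grows as sqrt(t) at early times and saturates at 1, the qualitative in vitro profile such carrier models are fitted against.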
Accuracy and performance of 3D mask models in optical projection lithography
NASA Astrophysics Data System (ADS)
Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar
2011-04-01
Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.
A Rigorous Treatment of Energy Extraction from a Rotating Black Hole
NASA Astrophysics Data System (ADS)
Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.
2009-05-01
The Cauchy problem is considered for the scalar wave equation in the Kerr geometry. We prove that by choosing a suitable wave packet as initial data, one can extract energy from the black hole, thereby putting supperradiance, the wave analogue of the Penrose process, into a rigorous mathematical framework. We quantify the maximal energy gain. We also compute the infinitesimal change of mass and angular momentum of the black hole, in agreement with Christodoulou’s result for the Penrose process. The main mathematical tool is our previously derived integral representation of the wave propagator.
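The superradiant regime underlying this energy extraction is standardly characterized by the following mode condition (textbook form in Boyer-Lindquist coordinates with G = c = 1; the paper's wave-packet construction is considerably more refined than this mode-by-mode statement):

```latex
% Standard superradiance condition for a scalar mode of frequency \omega
% and azimuthal number m in the Kerr geometry. \Omega_H is the angular
% velocity of the horizon, a the specific angular momentum, r_+ the outer
% horizon radius, M the black hole mass.
0 < \omega < m\,\Omega_H,
\qquad
\Omega_H = \frac{a}{r_+^2 + a^2},
\qquad
r_+ = M + \sqrt{M^2 - a^2}
```

Modes in this frequency window are amplified on scattering, which is the wave analogue of the Penrose process that the paper makes rigorous.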
ERIC Educational Resources Information Center
Jitendra, Asha K.; Petersen-Brown, Shawna; Lein, Amy E.; Zaslofsky, Anne F.; Kunkel, Amy K.; Jung, Pyung-Gang; Egan, Andrea M.
2015-01-01
This study examined the quality of the research base related to strategy instruction priming the underlying mathematical problem structure for students with learning disabilities and those at risk for mathematics difficulties. We evaluated the quality of methodological rigor of 18 group research studies using the criteria proposed by Gersten et…
ERIC Educational Resources Information Center
Jehopio, Peter J.; Wesonga, Ronald
2017-01-01
Background: The main objective of the study was to examine the relevance of engineering mathematics to the emerging industries. The level of abstraction, the standard of rigor, and the depth of theoretical treatment are necessary skills expected of a graduate engineering technician to be derived from mathematical knowledge. The question of whether…
Linking Literacy and Mathematics: The Support for Common Core Standards for Mathematical Practice
ERIC Educational Resources Information Center
Swanson, Mary; Parrott, Martha
2013-01-01
In a new era of Common Core State Standards (CCSS), teachers are expected to provide more rigorous, coherent, and focused curriculum at every grade level. To respond to the call for higher expectations across the curriculum and certainly within reading, writing, and mathematics, educators should work closely together to create mathematically…
Butler, Troy; Graham, L.; Estep, D.; ...
2015-02-03
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice when applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open-source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest, based on maximum water elevations, to use in the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.
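The measure-theoretic inversion described above can be caricatured in one dimension: pull an observed probability density on a model output back to a density on the input parameter by reweighting prior samples with the ratio of the observed density to the pushforward of the prior. The toy map, the uniform prior, and the Gaussian observed density below are all illustrative assumptions standing in for the ADCIRC setting.

```python
# Hedged 1D sketch of a measure-theoretic stochastic inverse problem:
# reweight prior parameter samples by observed-density / pushforward-density.
import random
from math import exp, sqrt, pi

random.seed(1)

def qmap(lam):          # toy "physics model": parameter -> output
    return lam ** 2

def obs_density(q):     # hypothetical observed density on the output
    mu, sd = 0.25, 0.05
    return exp(-0.5 * ((q - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

# 1. Sample the prior (uniform on [0, 1]) and push it through the map.
prior = [random.random() for _ in range(100000)]
outputs = [qmap(lam) for lam in prior]

# 2. Estimate the pushforward density of the prior with a histogram.
bins = 50
counts = [0] * bins
for q in outputs:
    counts[min(int(q * bins), bins - 1)] += 1
def pushforward(q):
    return counts[min(int(q * bins), bins - 1)] * bins / len(outputs)

# 3. Accept/reject prior samples with weight obs / pushforward.
M = max(obs_density(q) / pushforward(q) for q in outputs)
posterior = [lam for lam, q in zip(prior, outputs)
             if random.random() < obs_density(q) / (M * pushforward(q))]

mean_lam = sum(posterior) / len(posterior)
print(f"{len(posterior)} accepted samples, mean parameter = {mean_lam:.3f}")
```

With the output density centered at q = 0.25, the accepted parameter samples concentrate near λ = 0.5, i.e. the updated measure is consistent with the observed output distribution under the map.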
An Informal History of Formal Proofs: From Vigor to Rigor?
ERIC Educational Resources Information Center
Galda, Klaus
1981-01-01
The history of formal mathematical proofs is sketched out, starting with the Greeks. Included in this document is a chronological guide to mathematics and the world, highlighting major events in the world and important mathematicians in corresponding times. (MP)
Fadıloğlu, Eylem Ezgi; Serdaroğlu, Meltem
2018-01-01
This study was conducted to evaluate the effects of pre- and post-rigor marinade injections on some quality parameters of Longissimus dorsi (LD) muscles. Three marinade formulations were prepared: 2% NaCl, 2% NaCl + 0.5 M lactic acid, and 2% NaCl + 0.5 M sodium lactate. Marinade uptake, pH, free water, cooking loss, drip loss, and color properties were analyzed. Injection time had a significant effect on marinade uptake: regardless of marinade formulation, uptake was higher in pre-rigor samples than in post-rigor samples. Injection of sodium lactate increased sample pH, whereas lactic acid injection decreased it. Marinade treatment and storage period had significant effects on cooking loss. At each evaluation period, the interaction between marinade treatment and injection time affected free water content differently. Storage period and marinade application had significant effects on drip loss, which increased in all samples during storage. Throughout storage, the lowest CIE L* value was found in pre-rigor samples injected with sodium lactate. Lactic acid injection caused color fading in both pre-rigor and post-rigor samples. The interaction between marinade treatment and storage period was statistically significant (p<0.05). At days 0 and 3, the lowest CIE b* values were obtained from pre-rigor samples injected with sodium lactate, with no differences found among the other samples; at day 6, no significant differences in CIE b* were found among any of the samples. PMID:29805282
Surface conservation laws at microscopically diffuse interfaces.
Chu, Kevin T; Bazant, Martin Z
2007-11-01
In studies of interfaces with dynamic chemical composition, bulk and interfacial quantities are often coupled via surface conservation laws of excess surface quantities. While this approach is easily justified for microscopically sharp interfaces, its applicability in the context of microscopically diffuse interfaces is less theoretically well-established. Furthermore, surface conservation laws (and interfacial models in general) are often derived phenomenologically rather than systematically. In this article, we first provide a mathematically rigorous justification for surface conservation laws at diffuse interfaces based on an asymptotic analysis of transport processes in the boundary layer and derive general formulae for the surface and normal fluxes that appear in surface conservation laws. Next, we use nonequilibrium thermodynamics to formulate surface conservation laws in terms of chemical potentials and provide a method for systematically deriving the structure of the interfacial layer. Finally, we derive surface conservation laws for a few examples from diffusive and electrochemical transport.
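A surface conservation law of the kind being justified typically takes the following schematic form (a generic textbook statement; the article derives the precise surface and normal fluxes from its boundary-layer asymptotic analysis):

```latex
% Generic surface conservation law for an excess surface concentration
% \Gamma: surface divergence of the tangential flux \mathbf{J}_s balanced
% against the normal flux delivered by the bulk (n is the unit normal).
\frac{\partial \Gamma}{\partial t}
  \;+\; \nabla_s \cdot \mathbf{J}_s
  \;=\; \left.\mathbf{n} \cdot \mathbf{J}\right|_{\mathrm{bulk}}
```

The article's contribution is to show that this sharp-interface balance emerges rigorously as the leading-order limit of transport within a microscopically diffuse interfacial layer.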
Voit, Eberhard O
2009-01-01
Modern advances in molecular biology have produced enormous amounts of data characterizing physiological and disease states in cells and organisms. While bioinformatics has facilitated the organizing and mining of these data, it is the task of systems biology to merge the available information into dynamic, explanatory and predictive models. This article takes a step in this direction. It proposes a conceptual approach toward formalizing health and disease and illustrates it in the context of inflammation and preconditioning. Instead of defining health and disease states, the emphasis is on simplexes in a high-dimensional biomarker space. These simplexes are bounded by physiological constraints and permit the quantitative characterization of personalized health trajectories, health risk profiles that change with age, and the efficacy of different treatment options. The article mainly focuses on concepts but also briefly describes how the proposed concepts might be formulated rigorously within a mathematical framework.
Brian Barry: innovative contributions to transdermal and topical drug delivery.
Williams, A C
2013-01-01
Brian Barry published over 300 research articles across topics ranging from colloid science, vasoconstriction and the importance of thermodynamics in dermal drug delivery to exploring the structure and organisation of the stratum corneum barrier lipids and numerous strategies for improving topical and transdermal drug delivery, including penetration enhancers, supersaturation, coacervation, eutectic formation and the use of varied liposomes. As research in the area blossomed in the early 1980s, Brian wrote the book that became essential reading for both new and established dermal delivery scientists, explaining the background mathematics and principles through to formulation design. Brian also worked with numerous scientists, as collaborators and students, who have themselves taken his rigorous approach to scientific investigation into their own research groups. This paper can only describe a small fraction of the many significant contributions that Brian made to the field during his 40-year academic career.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
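The sample-selection problem described here lends itself to a compact dynamic-programming illustration. The sketch below is a hedged simplification, not the authors' cubic network-model algorithm: the squared-error objective, the piecewise-linear reconstruction, and the function names are all assumptions introduced for illustration. It selects m samples of a signal, including both endpoints, so that linear interpolation between the kept samples minimises the total squared reconstruction error:

```python
import numpy as np

def segment_error(x, i, j):
    """Squared error of reconstructing x[i..j] by linear interpolation
    between its endpoints x[i] and x[j]."""
    t = np.arange(i, j + 1)
    interp = x[i] + (x[j] - x[i]) * (t - i) / max(j - i, 1)
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def best_subset(x, m):
    """Pick m sample indices (including both endpoints) minimising the
    total squared error of piecewise-linear reconstruction."""
    n = len(x)
    err = [[segment_error(x, i, j) for j in range(n)] for i in range(n)]
    INF = float("inf")
    # cost[k][j]: minimal error using k kept samples, the last at index j
    cost = [[INF] * n for _ in range(m + 1)]
    prev = [[-1] * n for _ in range(m + 1)]
    cost[1][0] = 0.0
    for k in range(2, m + 1):
        for j in range(1, n):
            for i in range(j):
                c = cost[k - 1][i] + err[i][j]
                if c < cost[k][j]:
                    cost[k][j], prev[k][j] = c, i
    # backtrack the optimal index set from the final sample
    idx, k, j = [], m, n - 1
    while j >= 0 and k >= 1:
        idx.append(j)
        j, k = prev[k][j], k - 1
    return sorted(idx), cost[m][n - 1]
```

Because every candidate subset is scored exactly, the result is guaranteed optimal for the chosen error measure, which is the key contrast with heuristic time-domain ECG compressors drawn in the abstract.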
Dividing by Zero: Exploring Null Results in a Mathematics Professional Development Program
ERIC Educational Resources Information Center
Hill, Heather C.; Corey, Douglas Lyman; Jacob, Robin T.
2018-01-01
Background/Context: Since 2002, U.S. federal funding for educational research has favored the development and rigorous testing of interventions designed to improve student outcomes. However, recent reviews suggest that a large fraction of the programs developed and rigorously tested in the past decade have shown null results on student outcomes…
Underprepared Students' Performance on Algebra in a Double-Period High School Mathematics Program
ERIC Educational Resources Information Center
Martinez, Mara V.; Bragelman, John; Stoelinga, Timothy
2016-01-01
The primary goal of the Intensified Algebra I (IA) program is to enable mathematically underprepared students to successfully complete Algebra I in 9th grade and stay on track to meet increasingly rigorous high school mathematics graduation requirements. The program was designed to bring a range of both cognitive and non-cognitive supports to bear…
Investigation of possible observable effects in a proposed theory of physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedan, Daniel
2015-03-31
The work supported by this grant produced rigorous mathematical results on what is possible in quantum field theory. Quantum field theory is the well-established mathematical language for fundamental particle physics, for critical phenomena in condensed matter physics, and for Physical Mathematics (the numerous branches of mathematics that have benefitted from ideas, constructions, and conjectures imported from theoretical physics). Proving rigorous constraints on what is possible in quantum field theories thus guides the field, puts actual constraints on what is physically possible in physical or mathematical systems described by quantum field theories, and saves the community the effort of trying to do what is proved impossible. Results were obtained in two-dimensional qft (describing, e.g., quantum circuits) and in higher-dimensional qft. Rigorous bounds were derived on basic quantities in 2d conformal field theories, i.e., in 2d critical phenomena. Conformal field theories are the basic objects in quantum field theory: the scale-invariant theories describing renormalization group fixed points from which all qfts flow. The first known lower bounds on the 2d boundary entropy were found; this is the entropy (information content) in junctions in critical quantum circuits. For dimensions d > 2, a no-go theorem was proved on the possibilities of Cauchy fields, the analogs of the holomorphic fields in d = 2 dimensions, which have had enormously useful applications in physics and mathematics over the last four decades. This closed off the possibility of finding analogously rich theories in dimensions above 2. The work of two postdoctoral research fellows was partially supported by this grant; both have gone on to tenure-track positions.
NASA Technical Reports Server (NTRS)
Tanveer, S.; Foster, M. R.
2002-01-01
We report progress in three areas of investigation related to dendritic crystal growth. These items include: (1) selection of tip features in dendritic crystal growth; (2) investigation of nonlinear evolution for the two-sided model; and (3) rigorous mathematical justification.
NASA Astrophysics Data System (ADS)
Blanchard, Philippe; Hellmich, Mario; Ługiewicz, Piotr; Olkiewicz, Robert
Quantum mechanics is the greatest revision of our conception of the character of the physical world since Newton. Consequently, David Hilbert was very interested in quantum mechanics. He and John von Neumann discussed it frequently during von Neumann's residence in Göttingen. In 1932 von Neumann published his book Mathematical Foundations of Quantum Mechanics, in Hilbert's opinion the first exposition of quantum mechanics in a mathematically rigorous way. The pioneers of quantum mechanics, Heisenberg and Dirac, had neither use for rigorous mathematics nor much interest in it. Conceptually, quantum theory as developed by Bohr and Heisenberg is based on the positivism of Mach, as it describes only observable quantities. It first emerged as a result of experimental data in the form of statistical observations of quantum noise, the basic concept of quantum probability.
2010-10-01
…however, is mathematically more parsimonious. The original DCA formulation required several mathematical manipulations making the simplicity of regret… into treatment administration examples; IH developed the mathematical formulation of the model; AV is the author of DCA; BD proposed the regret theory…
Effective Field Theory on Manifolds with Boundary
NASA Astrophysics Data System (ADS)
Albert, Benjamin I.
In the monograph Renormalization and Effective Field Theory, Costello made two major advances in rigorous quantum field theory. Firstly, he gave an inductive position space renormalization procedure for constructing an effective field theory that is based on heat kernel regularization of the propagator. Secondly, he gave a rigorous formulation of quantum gauge theory within effective field theory that makes use of the BV formalism. In this work, we extend Costello's renormalization procedure to a class of manifolds with boundary and make preliminary steps towards extending his formulation of gauge theory to manifolds with boundary. In addition, we reorganize the presentation of the preexisting material, filling in details and strengthening the results.
The Menu for Every Young Mathematician's Appetite
ERIC Educational Resources Information Center
Legnard, Danielle S.; Austin, Susan L.
2012-01-01
Math Workshop offers differentiated instruction to foster a deep understanding of rich, rigorous mathematics that is attainable by all learners. The inquiry-based model provides a menu of multilevel math tasks, within the daily math block, that focus on similar mathematical content. Math Workshop promotes a culture of engagement and…
Math Interventions for Students with Autism Spectrum Disorder: A Best-Evidence Synthesis
ERIC Educational Resources Information Center
King, Seth A.; Lemons, Christopher J.; Davidson, Kimberly A.
2016-01-01
Educators need evidence-based practices to assist students with disabilities in meeting increasingly rigorous standards in mathematics. Students with autism spectrum disorder (ASD) are increasingly expected to demonstrate learning of basic and advanced mathematical concepts. This review identifies math intervention studies involving children and…
Control Engineering, System Theory and Mathematics: The Teacher's Challenge
ERIC Educational Resources Information Center
Zenger, K.
2007-01-01
The principles, difficulties and challenges in control education are discussed and compared to the similar problems in the teaching of mathematics and systems science in general. The difficulties of today's students to appreciate the classical teaching of engineering disciplines, which are based on rigorous and scientifically sound grounds, are…
A Qualitative Approach to Enzyme Inhibition
ERIC Educational Resources Information Center
Waldrop, Grover L.
2009-01-01
Most general biochemistry textbooks present enzyme inhibition by showing how the basic Michaelis-Menten parameters K[subscript m] and V[subscript max] are affected mathematically by a particular type of inhibitor. This approach, while mathematically rigorous, does not lend itself to understanding how inhibition patterns are used to determine the…
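The textbook relations this abstract alludes to are standard Michaelis-Menten kinetics; for example, a competitive inhibitor modifies the rate law as follows (standard notation, supplied here for illustration rather than taken from the article):

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}
\quad\longrightarrow\quad
v = \frac{V_{\max}\,[S]}{\alpha K_m + [S]},
\qquad \alpha = 1 + \frac{[I]}{K_I}
```

That is, a competitive inhibitor raises the apparent $K_m$ by the factor $\alpha$ while leaving $V_{\max}$ unchanged, whereas a pure noncompetitive inhibitor instead divides $V_{\max}$ by a corresponding factor. The article's point is that such algebra alone does not convey how inhibition patterns are used diagnostically.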
ERIC Educational Resources Information Center
Dempsey, Michael
2009-01-01
If students are in an advanced mathematics class, then at some point they enjoyed mathematics and looked forward to learning and practicing it. There is no reason that this passion and enjoyment should ever be lost because the subject becomes more difficult or rigorous. This author, who teaches advanced precalculus to high school juniors,…
Handbook of applied mathematics for engineers and scientists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, M.
1991-12-31
This book is intended to be a reference for applications of mathematics in a wide range of topics of interest to engineers and scientists. An unusual feature of the book is that it covers a large number of topics, from elementary algebra, trigonometry, and calculus to computer graphics and cybernetics. The level of mathematics ranges from high school through about the junior level of an engineering curriculum at a major university. Throughout, the emphasis is on applications of mathematics rather than on rigorous proofs.
Student’s rigorous mathematical thinking based on cognitive style
NASA Astrophysics Data System (ADS)
Fitriyani, H.; Khasanah, U.
2017-12-01
The purpose of this research was to determine the rigorous mathematical thinking (RMT) of mathematics education students in solving math problems, in terms of reflective and impulsive cognitive styles. The research used a descriptive qualitative approach. Subjects were four students, one male and one female for each of the reflective and impulsive cognitive styles. Data collection techniques were a problem-solving test and interviews. Research data were analyzed using the Miles and Huberman model: data reduction, data presentation, and drawing conclusions. The results showed that the impulsive male subject used all three levels of the cognitive function required for RMT, namely qualitative thinking, quantitative thinking with precision, and relational thinking, while the other three subjects were only able to use cognitive function at the qualitative thinking level of RMT. The impulsive male subject therefore has better RMT ability than the other three research subjects.
NASA Astrophysics Data System (ADS)
Hamid, H.
2018-01-01
The purpose of this study is to analyze the improvement of students' mathematical critical thinking (CT) ability in a Real Analysis course using a Rigorous Teaching and Learning (RTL) model with informal argument. In addition, this research also attempted to understand students' CT in relation to their initial mathematical ability (IMA). The study was conducted at a private university in academic year 2015/2016 and employed the quasi-experimental method with a pretest-posttest control group design. The participants were 83 students: 43 in the experimental group and 40 in the control group. The findings showed that students in the experimental group outperformed students in the control group on mathematical CT ability across IMA levels (high, medium, low) in learning Real Analysis. For the medium IMA level, the improvement in mathematical CT ability of students exposed to the RTL model with informal argument was greater than that of students exposed to conventional instruction (CI). There was no interaction effect between learning model (RTL, CI) and IMA level on the improvement of mathematical CT ability. Finally, at each IMA level there was a significantly greater improvement in the achievement of all indicators of mathematical CT ability for students exposed to the RTL model with informal argument than for students exposed to CI.
Crazing in Polymeric and Composite Systems
1990-04-23
these physical variations into consideration in any mathematical modeling and formulation in analyzing the stresses from the time when crazes incept to...as boundary tractions with great strength; any governing mathematical formulation must include this feature for any adequate analysis. Crazes of...constants the mathematical model describing the crazing mechanism have been successful [25-29]. References 1 J. A. Sauer, J. Marin and C. C. Hsiao, J. App
Steady-state and dynamic models for particle engulfment during solidification
NASA Astrophysics Data System (ADS)
Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.
2016-06-01
Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible by approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition of an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior for engulfment events in many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.
NASA Astrophysics Data System (ADS)
Sarkar, Biplab; Adhikari, Satrajit
If a coupled three-state electronic manifold forms a sub-Hilbert space, it is possible to express the non-adiabatic coupling (NAC) elements in terms of adiabatic-diabatic transformation (ADT) angles. Consequently, we demonstrate: (a) Those explicit forms of the NAC terms satisfy the Curl conditions with non-zero Divergences; (b) The formulation of extended Born-Oppenheimer (EBO) equation for any three-state BO system is possible only when there exists coordinate independent ratio of the gradients for each pair of ADT angles leading to zero Curls at and around the conical intersection(s). With these analytic advancements, we formulate a rigorous EBO equation and explore its validity as well as necessity with respect to the approximate one (Sarkar and Adhikari, J Chem Phys 2006, 124, 074101) by performing numerical calculations on two different models constructed with different chosen forms of the NAC elements.
¡Enséname! Teaching Each Other to Reason through Math in the Second Grade
ERIC Educational Resources Information Center
Schmitz, Lindsey
2016-01-01
This action research sought to evaluate the effect of peer teaching structures across subgroups of students differentiated by language and mathematical skill ability. These structures were implemented in an effort to maintain mathematical rigor while building my students' academic language capacity. More specifically, the study investigated peer…
ERIC Educational Resources Information Center
Camacho, Erika T.; Holmes, Raquell M.; Wirkus, Stephen A.
2015-01-01
This chapter describes how sustained mentoring together with rigorous collaborative learning and community building contributed to successful mathematical research and individual growth in the Applied Mathematical Sciences Summer Institute (AMSSI), a program that focused on women, underrepresented minorities, and individuals from small teaching…
Water Bottle Designs and Measures
ERIC Educational Resources Information Center
Carmody, Heather Gramberg
2010-01-01
The increase in the diversity of students and the complexity of their needs can be a rich addition to a mathematics classroom. The challenge for teachers is to find a way to include students' interests and creativity in a way that allows for rigorous mathematics. One method of incorporating the diversity is the development of "open-ended…
Time-ordered exponential on the complex plane and Gell-Mann–Low formula as a mathematical theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futakuchi, Shinichiro; Usui, Kouta
2016-04-15
The time-ordered exponential representation of a complex time evolution operator in the interaction picture is studied. Using the complex time evolution, we prove the Gell-Mann–Low formula under certain abstract conditions in a mathematically rigorous manner. We apply the abstract results to quantum electrodynamics with cutoffs.
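In conventional physics notation the formula in question reads schematically as follows; the symbols are standard assumptions supplied for orientation ($|\Omega\rangle$ the interacting vacuum, $\phi_I$ interaction-picture fields, $H_I$ the interaction Hamiltonian with adiabatic switching $e^{-\varepsilon|t|}$), and the paper's contribution is to prove a version of this identity under abstract operator-theoretic conditions:

```latex
\langle \Omega \,|\, T\,\phi(x_1)\cdots\phi(x_n) \,|\, \Omega \rangle
= \lim_{\varepsilon \to 0^+}
\frac{\big\langle 0 \,\big|\, T\,\phi_I(x_1)\cdots\phi_I(x_n)\,
        \exp\!\big(-i\textstyle\int dt\; e^{-\varepsilon|t|}\, H_I(t)\big) \,\big|\, 0 \big\rangle}
     {\big\langle 0 \,\big|\, T\,
        \exp\!\big(-i\textstyle\int dt\; e^{-\varepsilon|t|}\, H_I(t)\big) \,\big|\, 0 \big\rangle}
```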
Science and Mathematics Advanced Placement Exams: Growth and Achievement over Time
ERIC Educational Resources Information Center
Judson, Eugene
2017-01-01
Rapid growth of Advanced Placement (AP) exams in the last 2 decades has been paralleled by national enthusiasm to promote availability and rigor of science, technology, engineering, and mathematics (STEM). Trends were examined in STEM AP to evaluate and compare growth and achievement. Analysis included individual STEM subjects and disaggregation…
Rubin, Jacob
1983-01-01
Examples involving six broad reaction classes show that the nature of transport-affecting chemistry may have a profound effect on the mathematical character of solute transport problem formulation. Substantive mathematical diversity among such formulations is brought about principally by reaction properties that determine whether (1) the reaction can be regarded as being controlled by local chemical equilibria or whether it must be considered as being controlled by kinetics, (2) the reaction is homogeneous or heterogeneous, (3) the reaction is a surface reaction (adsorption, ion exchange) or one of the reactions of classical chemistry (e.g., precipitation, dissolution, oxidation, reduction, complex formation). These properties, as well as the choice of means to describe them, stipulate, for instance, (1) the type of chemical entities for which a formulation's basic, mass-balance equations should be written; (2) the nature of mathematical transformations needed to change the problem's basic equations into operational ones. These and other influences determine such mathematical features of problem formulations as the nature of the operational transport-equation system (e.g., whether it involves algebraic, partial-differential, or integro-partial-differential simultaneous equations), the type of nonlinearities of such a system, and the character of the boundaries (e.g., whether they are stationary or moving). Exploration of the reasons for the dependence of transport mathematics on transport chemistry suggests that many results of this dependence stem from the basic properties of the reactions' chemical-relation (i.e., equilibrium or rate) equations.
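The contrast the abstract draws between equilibrium-controlled and kinetics-controlled chemistry is visible already in the simplest one-dimensional case. The equations below are a standard textbook illustration with assumed notation, not drawn from the paper: local-equilibrium linear sorption collapses transport to a single retarded advection-dispersion equation, whereas kinetic sorption forces a coupled system:

```latex
% Local equilibrium (linear isotherm): one retarded equation
R\,\frac{\partial c}{\partial t}
  = D\,\frac{\partial^2 c}{\partial x^2} - v\,\frac{\partial c}{\partial x},
\qquad R = 1 + \frac{\rho_b}{\theta}\,K_d

% Kinetic sorption: a coupled system replaces it
\frac{\partial c}{\partial t}
  = D\,\frac{\partial^2 c}{\partial x^2} - v\,\frac{\partial c}{\partial x}
    - \frac{\rho_b}{\theta}\,\frac{\partial s}{\partial t},
\qquad
\frac{\partial s}{\partial t} = k\,\big(K_d\,c - s\big)
```

Here $c$ is the solute concentration, $s$ the sorbed concentration, and $R$ the retardation factor; whether the equilibrium reduction is admissible is exactly the kind of chemical property the article argues shapes the mathematics of the transport problem.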
Sex Differences in the Response of Children with ADHD to Once-Daily Formulations of Methylphenidate
ERIC Educational Resources Information Center
Sonuga-Barke, J. S.; Coghill, David; Markowitz, John S.; Swanson, James M.; Vandenberghe, Mieke; Hatch, Simon J.
2007-01-01
Objectives: Studies of sex differences in methylphenidate response by children with attention-deficit/hyperactivity disorder have lacked methodological rigor and statistical power. This paper reports an examination of sex differences based on further analysis of data from a comparison of two once-daily methylphenidate formulations (the COMACS…
Survey of Intermediate Microeconomic Textbooks.
ERIC Educational Resources Information Center
Goulet, Janet C.
1986-01-01
Surveys nine undergraduate microeconomic theory textbooks comprising a representative sample of those available. Criteria used were quantity and quality of examples, mathematical rigor, and level of abstraction. (JDH)
A Tool for Rethinking Teachers' Questioning
ERIC Educational Resources Information Center
Simpson, Amber; Mokalled, Stefani; Ellenburg, Lou Ann; Che, S. Megan
2014-01-01
In this article, the authors present a tool, the Cognitive Rigor Matrix (CRM; Hess et al. 2009), as a means to analyze and reflect on the type of questions posed by mathematics teachers. This tool is intended to promote and develop higher-order thinking and inquiry through the use of purposeful questions and mathematical tasks. The authors…
NASA Astrophysics Data System (ADS)
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-12-01
We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking, formulation and combination of equations require conceptual reasoning; simplification of equations requires manipulation of equations as computational tools. Mathematical complexity is operationally defined by the number and the type of equations to be manipulated concurrently due to the number of unknowns in each equation. We use two types of synthesis problems, namely, sequential and simultaneous tasks. Sequential synthesis tasks require a chronological application of pertinent concepts, and simultaneous synthesis tasks require a concurrent application of the pertinent concepts. A total of 179 physics major students from a second year mechanics course participated in the study. Data were collected from written tasks and individual interviews. Results show that mathematical complexity negatively influences the students' mathematical performance on both types of synthesis problems. However, for the sequential synthesis tasks, it interferes only with the students' simplification of equations. For the simultaneous synthesis tasks, mathematical complexity additionally impedes the students' formulation and combination of equations. Several reasons may explain this difference, including the students' different approaches to the two types of synthesis problems, cognitive load, and the variation of mathematical complexity within each synthesis type.
The Madelung Picture as a Foundation of Geometric Quantum Theory
NASA Astrophysics Data System (ADS)
Reddiger, Maik
2017-10-01
Despite its age, quantum theory still suffers from serious conceptual difficulties. To create clarity, mathematical physicists have been attempting to formulate quantum theory geometrically and to find a rigorous method of quantization, but this has not resolved the problem. In this article we argue that a quantum theory relying on quantization algorithms is necessarily incomplete. To provide an alternative approach, we show that the Schrödinger equation is a consequence of three partial differential equations governing the time evolution of a given probability density. These equations, discovered by Madelung, naturally ground the Schrödinger theory in Newtonian mechanics and Kolmogorovian probability theory. A variety of far-reaching consequences for the projection postulate, the correspondence principle, the measurement problem, the uncertainty principle, and the modeling of particle creation and annihilation are immediate. We also give a speculative interpretation of the equations following Bohm, Vigier and Tsekov, by claiming that quantum mechanical behavior is possibly caused by gravitational background noise.
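For orientation, the Madelung form of the Schrödinger theory is standard and can be stated compactly (conventional notation, added here for the reader rather than quoted from the article). Writing the wave function in polar form $\psi = \sqrt{\rho}\, e^{iS/\hbar}$, the Schrödinger equation splits into a continuity equation for the probability density and a Hamilton-Jacobi-type equation with a quantum potential term:

```latex
\frac{\partial \rho}{\partial t}
  + \nabla \cdot \Big( \rho\, \frac{\nabla S}{m} \Big) = 0,
\qquad
\frac{\partial S}{\partial t}
  + \frac{|\nabla S|^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0
```

The last term, absent classically, carries all specifically quantum behavior, which is what makes this picture a natural bridge to Newtonian mechanics and Kolmogorovian probability.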
Mathematical aspects of finite element methods for incompressible viscous flows
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.
1986-01-01
Mathematical aspects of finite element methods are surveyed for incompressible viscous flows, concentrating on the steady primitive variable formulation. The discretization of a weak formulation of the Navier-Stokes equations is addressed, then the stability condition is considered, the satisfaction of which ensures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.
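The weak formulation and stability condition referred to here take the following standard form (textbook notation, assumed for illustration): find $(\mathbf{u}, p) \in V \times Q$ such that

```latex
\nu\,(\nabla \mathbf{u}, \nabla \mathbf{v})
  + \big((\mathbf{u}\cdot\nabla)\mathbf{u},\, \mathbf{v}\big)
  - (p,\, \nabla\cdot\mathbf{v}) = (\mathbf{f},\, \mathbf{v})
  \quad \forall\, \mathbf{v} \in V,
\qquad
(q,\, \nabla\cdot\mathbf{u}) = 0 \quad \forall\, q \in Q,

% Stability (inf-sup / LBB) condition on the discrete spaces:
\inf_{q_h \in Q_h}\; \sup_{\mathbf{v}_h \in V_h}
\frac{(q_h,\, \nabla\cdot\mathbf{v}_h)}{\|\mathbf{v}_h\|_{1}\, \|q_h\|_{0}}
\;\ge\; \beta > 0
```

The inf-sup constant $\beta$ must be bounded away from zero independently of the mesh; velocity-pressure pairs violating this exhibit the spurious pressure modes the survey warns about.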
A mathematical approach to beam matching
Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N
2013-01-01
Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10×10 cm2 beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated in space with respect to dose for a first and second derivative to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE at 71.4% and 70.8% of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs to merely generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the quantitative evaluation of beam matching between linear accelerators. PMID:23995874
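A minimal numerical sketch of the idea follows. This is not the authors' commissioning code; the function names, the use of np.gradient for the derivative "bandwidths", and the simple 1% dose / 1 mm distance scoring are all assumptions introduced for illustration:

```python
import numpy as np

def bandwidths(pos, dose):
    """First and second spatial derivatives ('bandwidths') of a profile,
    giving its slope and curvature at each sampled position."""
    d1 = np.gradient(dose, pos)
    d2 = np.gradient(d1, pos)
    return d1, d2

def fraction_matching(pos, ref, test, dose_tol=1.0, dist_tol=1.0):
    """Fraction of profile points passing a 1% dose / 1 mm criterion:
    a point passes if the dose difference at the same position is within
    dose_tol (%), or the reference reaches the same dose level within
    dist_tol (mm) of that position."""
    passed = 0
    for i, p in enumerate(pos):
        dose_err = abs(test[i] - ref[i])
        near = pos[np.abs(ref - test[i]) <= dose_tol]
        dist_err = np.min(np.abs(near - p)) if near.size else np.inf
        if dose_err <= dose_tol or dist_err <= dist_tol:
            passed += 1
    return passed / len(pos)
```

With matched machines the bandwidths of the two profile sets should be smooth and nearly coincident, and fraction_matching should approach 1.0, mirroring the pass rates quoted in the abstract.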
The Construction of Mathematical Literacy Problems for Geometry
NASA Astrophysics Data System (ADS)
Malasari, P. N.; Herman, T.; Jupri, A.
2017-09-01
Students of junior high school should have mathematical literacy ability: to formulate, apply, and interpret mathematics in problem solving of daily life. Teaching these students with ordinary mathematics problems is not enough, which brings the consequence that the teacher must construct mathematical literacy problems. Therefore, the aim of this study is to construct mathematical literacy problems to assess mathematical literacy ability. The steps of this study consist of analysing, designing, theoretical validation, revising, limited testing to students, and evaluating. The data were collected with a written test of 38 grade IX students at a state junior high school. The mathematical literacy problems consist of three essays with three indicators and three levels on the polyhedron subject; the indicators are formulating and employing mathematics. The results show that: (1) the mathematical literacy problems constructed are valid and practical; (2) the problems have good to adequate distinguishing characteristics; (3) the difficulty levels of the problems are easy and moderate. The final conclusion is that the constructed mathematical literacy problems can be used to assess mathematical literacy ability.
NASA Astrophysics Data System (ADS)
Parumasur, N.; Willie, R.
2008-09-01
We consider a simple finite-dimensional HIV/AIDS mathematical model of the interactions of blood cells, the HIV/AIDS virus and the immune system, examining the consistency of the equations with the real biomedical situation they model. A better understanding of a cure solution to the illness modeled by the finite-dimensional equations is given. This is accomplished through rigorous mathematical analysis and is reinforced by numerical analysis of models developed for real-life cases.
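Finite-dimensional models of this kind are typically small ODE systems. The three-compartment target-cell model below is a standard example of the general type, written with assumed notation; the authors' actual equations may differ:

```latex
\frac{dT}{dt} = \lambda - d\,T - \beta\,T V,
\qquad
\frac{dI}{dt} = \beta\,T V - \delta\,I,
\qquad
\frac{dV}{dt} = p\,I - c\,V
```

Here $T$, $I$ and $V$ are the healthy target-cell, infected-cell and free-virus populations, $\lambda$ and $d$ govern cell supply and natural death, $\beta$ infection, $\delta$ infected-cell clearance, and $p$, $c$ virion production and clearance. Rigorous analysis of such systems addresses positivity and boundedness of solutions and the stability of the infection-free and endemic equilibria.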
NASA Technical Reports Server (NTRS)
Stutzman, W. L.
1977-01-01
The theoretical fundamentals and mathematical definitions for calculations involved with dual polarized radio links are given. Detailed derivations and results are discussed for several formulations applied to a general dual polarized radio link.
NASA Astrophysics Data System (ADS)
Pereyra, Nicolas A.
2018-06-01
This book gives a rigorous yet 'physics-focused' introduction to mathematical logic that is geared towards natural science majors. We present the science major with a robust introduction to logic, focusing on the specific knowledge and skills that will unavoidably be needed in calculus and in natural science topics in general, rather than taking the philosophically oriented, foundations-of-mathematics approach commonly found in mathematical logic textbooks.
13th Annual Systems Engineering Conference: Tues- Wed
2010-10-28
greater understanding/documentation of lessons learned – Promotes SE within the organization • Justification for continued funding of SE Infrastructure...educational process – Addresses the development of innovative learning tools, strategies, and teacher training • Research and Development – Promotes ...technology, and mathematics • More commitment to engaging young students in science, engineering, technology and mathematics • More rigor in defining
Discrete structures in continuum descriptions of defective crystals
2016-01-01
I discuss various mathematical constructions that combine together to provide a natural setting for discrete and continuum geometric models of defective crystals. In particular, I provide a quite general list of ‘plastic strain variables’, which quantifies inelastic behaviour, and exhibit rigorous connections between discrete and continuous mathematical structures associated with crystalline materials that have a correspondingly general constitutive specification. PMID:27002070
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
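As a minimal sketch of the kernel requirement stated above (the toy data and kernel choices here are illustrative assumptions, not taken from the review), one can verify that the Gram matrices produced by a linear and a Gaussian kernel are symmetric positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))          # 6 subjects, 3 genomic features (toy data)

K_lin = X @ X.T                      # linear kernel: larger value = more similar

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
K_rbf = np.exp(-sq / 2.0)            # Gaussian kernel: similarity from distance

for K in (K_lin, K_rbf):
    assert np.allclose(K, K.T)                       # symmetry
    assert np.linalg.eigvalsh(K).min() > -1e-8       # positive semidefinite
```

Such a matrix K can then be used directly in the kernel machines the review lists, for example as the covariance structure of a random effect in a linear mixed model.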
ERIC Educational Resources Information Center
Sworder, Steven C.
2007-01-01
An experimental two-track intermediate algebra course was offered at Saddleback College, Mission Viejo, CA, between the Fall, 2002 and Fall, 2005 semesters. One track was modeled after the existing traditional California community college intermediate algebra course and the other track was a less rigorous intermediate algebra course in which the…
ERIC Educational Resources Information Center
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-01-01
We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking,…
Formulating a stand-growth model for mathematical programming problems in Appalachian forests
Gary W. Miller; Jay Sullivan
1993-01-01
Some growth and yield simulators applicable to central hardwood forests can be formulated for use in mathematical programming models that are designed to optimize multi-stand, multi-resource management problems. Once in the required format, growth equations serve as model constraints, defining the dynamics of stand development brought about by harvesting decisions. In...
Agent-Centric Approach for Cybersecurity Decision-Support with Partial Observability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Chatterjee, Samrat; Paulson, Patrick R.
Generating automated cyber resilience policies for real-world settings is a challenging research problem that must account for uncertainties in system state over time and dynamics between attackers and defenders. In addition to understanding attacker and defender motives and tools, and identifying “relevant” system and attack data, it is also critical to develop rigorous mathematical formulations representing the defender’s decision-support problem under uncertainty. Game-theoretic approaches involving cyber resource allocation optimization with Markov decision processes (MDP) have been previously proposed in the literature. Moreover, advancements in reinforcement learning approaches have motivated the development of partially observable stochastic games (POSGs) in various multi-agent problem domains with partial information. Recent advances in cyber-system state space modeling have also generated interest in potential applicability of POSGs for cybersecurity. However, as is the case in strategic card games such as poker, research challenges using game-theoretic approaches for practical cyber defense applications include: 1) solving for equilibrium and designing efficient algorithms for large-scale, general problems; 2) establishing mathematical guarantees that equilibrium exists; 3) handling possible existence of multiple equilibria; and 4) exploitation of opponent weaknesses. Inspired by advances in solving strategic card games while acknowledging practical challenges associated with the use of game-theoretic approaches in cyber settings, this paper proposes an agent-centric approach for cybersecurity decision-support with partial system state observability.
Mekios, Constantinos
2016-04-01
Twentieth-century theoretical efforts towards the articulation of general system properties fell short of having the significant impact on biological practice that their proponents envisioned. Although the latter did arrive at preliminary mathematical formulations of such properties, they had little success in showing how these could be productively incorporated into the research agenda of biologists. Consequently, the gap that kept system-theoretic principles cut off from biological experimentation persisted. More recently, however, simple theoretical tools have proved readily applicable within the context of systems biology. In particular, examples reviewed in this paper suggest that rigorous mathematical expressions of design principles, imported primarily from engineering, could produce experimentally confirmable predictions of the regulatory properties of small biological networks. But this is not enough for contemporary systems biologists who adopt the holistic aspirations of early systemologists, seeking high-level organizing principles that could provide insights into problems of biological complexity at the whole-system level. While the presented evidence is not conclusive about whether this strategy could lead to the realization of the lofty goal of a comprehensive explanatory integration, it suggests that the ongoing quest for organizing principles is pragmatically advantageous for systems biologists. The formalisms postulated in the course of this process can serve as bridges between system-theoretic concepts and the results of molecular experimentation: they constitute theoretical tools for generalizing molecular data, thus producing increasingly accurate explanations of system-wide phenomena.
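A canonical example of such an engineering-style design principle, drawn from transcription-network theory rather than from this paper, is that negative autoregulation (NAR) shortens a gene circuit's rise time relative to simple regulation with the same steady state. The sketch below compares the two under illustrative parameters.

```python
alpha, xst = 1.0, 1.0          # degradation rate, shared steady state
beta_simple = alpha * xst      # simple regulation: dx/dt = beta - alpha*x
beta_nar = 5.0 * alpha * xst   # NAR: strong promoter that shuts off above xst

def rise_time(production, T=5.0, dt=1e-3):
    """Time for x(t), starting at 0, to reach half the steady state xst."""
    x, t = 0.0, 0.0
    while t < T:
        x += dt * (production(x) - alpha * x)
        t += dt
        if x >= 0.5 * xst:
            return t
    return None

t_simple = rise_time(lambda x: beta_simple)
t_nar = rise_time(lambda x: beta_nar if x < xst else 0.0)
print(t_nar, t_simple)   # NAR crosses half-steady-state several times faster
```

The prediction (a shorter rise time for NAR at equal steady state) is exactly the kind of experimentally confirmable regulatory property the abstract describes.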
ERIC Educational Resources Information Center
Mattson, Beverly
2011-01-01
One of the competitive priorities of the U.S. Department of Education's Race to the Top applications addressed science, technology, engineering, and mathematics (STEM). States that applied were required to submit plans that addressed rigorous courses of study, cooperative partnerships to prepare and assist teachers in STEM content, and prepare…
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.
1978-01-01
A mathematical model package for thermal pollution analysis and prediction is presented. The models, documented as user's manuals, are three-dimensional and time-dependent, using the primitive-equation approach. They have sufficient generality for application at sites with diverse topographical features, and the manuals also give specific instructions on data preparation for program execution, together with sample problems. The mathematical formulation of the models is presented, including assumptions, approximations, governing equations, boundary and initial conditions, the numerical method of solution, and some results.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems whose performance relies on underlying multiscale mathematics, and at developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of the biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
Historical mathematics in the French eighteenth century.
Richards, Joan L
2006-12-01
At least since the seventeenth century, the strange combination of epistemological certainty and ontological power that characterizes mathematics has made it a major focus of philosophical, social, and cultural negotiation. In the eighteenth century, all of these factors were at play as mathematical thinkers struggled to assimilate and extend the analysis they had inherited from the seventeenth century. A combination of educational convictions and historical assumptions supported a humanistic mathematics essentially defined by its flexibility and breadth. This mathematics was an expression of l'esprit humain, which was unfolding in a progressive historical narrative. The French Revolution dramatically altered the historical and educational landscapes that had supported this eighteenth-century approach, and within thirty years Augustin Louis Cauchy had radically reconceptualized and restructured mathematics to be rigorous rather than narrative.
Applied Mathematics in the Undergraduate Curriculum.
ERIC Educational Resources Information Center
Committee on the Undergraduate Program in Mathematics, Berkeley, CA.
After considering the growth in the use of mathematics in the past 25 years, this report makes four major recommendations regarding the undergraduate curriculum: (1) The mathematics department should offer a course or two in applied mathematics which treat some realistic situations completely, including the steps of problem formulation, model…
NASA Astrophysics Data System (ADS)
Jung-Woon Yoo, John
2016-06-01
Since customer preferences change rapidly, there is a need for design processes with shorter product development cycles. Modularization plays a key role in achieving mass customization, which is crucial in today's competitive global market environments. Standardized interfaces among modularized parts have facilitated computational product design. To incorporate product size and weight constraints during computational design procedures, a mixed integer programming formulation is presented in this article. Product size and weight are two of the most important design parameters, as evidenced by recent smart-phone products. This article focuses on the integration of geometric, weight and interface constraints into the proposed mathematical formulation. The formulation generates the optimal selection of components for a target product, which satisfies geometric, weight and interface constraints. The formulation is verified through a case study and experiments are performed to demonstrate the performance of the formulation.
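The article's exact mixed integer program is not reproduced in the abstract, but the underlying selection problem can be sketched on a toy instance: pick one candidate per module so that product-level volume and weight limits (the geometric and weight constraints) are met at minimum cost. All component data below are invented for illustration; a real formulation would hand binary selection variables to an integer-programming solver rather than enumerating.

```python
from itertools import product

# Hypothetical candidates per module: (volume cm^3, weight g, cost)
screens   = [(80, 40, 120), (70, 35, 150)]
batteries = [(30, 45, 20), (40, 60, 15)]
boards    = [(25, 20, 60), (20, 18, 90)]

MAX_VOL, MAX_WT = 135, 115   # product-level size and weight limits

best = None
for s, b, m in product(screens, batteries, boards):
    vol = s[0] + b[0] + m[0]              # geometric constraint
    wt = s[1] + b[1] + m[1]               # weight constraint
    if vol <= MAX_VOL and wt <= MAX_WT:
        cost = s[2] + b[2] + m[2]
        if best is None or cost < best[0]:
            best = (cost, (s, b, m))

print(best)   # cheapest feasible component selection
```

Enumeration works for three modules with two candidates each; the value of the MIP formulation is that a solver handles the same structure at realistic scale.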
Differential formulation of the gyrokinetic Landau operator
Hirvijoki, Eero; Brizard, Alain J.; Pfefferlé, David
2017-01-05
Subsequent to the recent rigorous derivation of an energetically consistent gyrokinetic collision operator in the so-called Landau representation, this work investigates the possibility of finding a differential formulation of the gyrokinetic Landau collision operator. It is observed that, while a differential formulation is possible in the gyrokinetic phase space, reduction of the resulting system of partial differential equations to five dimensions via gyroaveraging poses a challenge. Finally, based on the present work, it is likely that the gyrocentre analogues of the Rosenbluth–MacDonald–Judd potential functions must be kept gyroangle dependent.
34 CFR 691.16 - Rigorous secondary school program of study.
Code of Federal Regulations, 2010 CFR
2010-07-01
... MATHEMATICS ACCESS TO RETAIN TALENT GRANT (NATIONAL SMART GRANT) PROGRAMS Application Procedures § 691.16..., 2009. (Approved by the Office of Management and Budget under control number 1845-0078) (Authority: 20 U...
Rotation and anisotropy of galaxies revisited
NASA Astrophysics Data System (ADS)
Binney, James
2005-11-01
The use of the tensor virial theorem (TVT) as a diagnostic of anisotropic velocity distributions in galaxies is revisited. The TVT provides a rigorous global link between velocity anisotropy, rotation and shape, but the quantities appearing in it are not easily estimated observationally. Traditionally, use has been made of a centrally averaged velocity dispersion and the peak rotation velocity. Although this procedure cannot be rigorously justified, tests on model galaxies show that it works surprisingly well. With the advent of integral-field spectroscopy it is now possible to establish a rigorous connection between the TVT and observations. The TVT is reformulated in terms of sky-averages, and the new formulation is tested on model galaxies.
Holm, René; Olesen, Niels Erik; Alexandersen, Signe Dalgaard; Dahlgaard, Birgitte N; Westh, Peter; Mu, Huiling
2016-05-25
Preservatives are inactivated when added to conserve aqueous cyclodextrin (CD) formulations, owing to complex formation between the CDs and the preservative. To maintain the desired preservative effect, the preservative must be added in apparent surplus to account for this inactivation. The purpose of the present work was to establish a mathematical model that defines this surplus based upon knowledge of the stability constants and the minimal concentration of preservative needed to inhibit bacterial growth. The stability constants of benzoic acid, methyl- and propylparaben with different frequently used βCDs were determined by isothermal titration calorimetry. Based upon this knowledge, mathematical models were constructed to account for the equilibrium systems and to calculate the required concentration of preservative, which was evaluated experimentally against the USP/Ph. Eur./JP monograph. The mathematical calculations were able to predict the needed concentration of preservative in the presence of CDs; this clearly demonstrated the usefulness of including all underlying chemical equilibria in a mathematical model, so that formulation design can be based on quantitative arguments. Copyright © 2015 Elsevier B.V. All rights reserved.
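For a single preservative forming a 1:1 complex with the CD (stability constant K), the required total concentration follows in closed form from the equilibrium. The numbers below are illustrative, not the paper's measured constants.

```python
def total_preservative(p_free, cd_total, K):
    """Total preservative (M) needed so the FREE concentration equals p_free,
    given a 1:1 complex P + CD <-> P.CD with stability constant K (1/M).
    At fixed free preservative: CD_free = cd_total / (1 + K*p_free),
    complexed = K * p_free * CD_free, and total = free + complexed."""
    cd_free = cd_total / (1.0 + K * p_free)
    return p_free + K * p_free * cd_free

# Illustrative: target 2 mM free preservative in 50 mM betaCD with K = 500 / M
needed = total_preservative(2e-3, 50e-3, 500.0)
print(round(needed * 1e3, 1), "mM total")   # -> 27.0 mM total
```

The surplus (here more than twelvefold) is exactly the inactivation allowance the abstract describes; with no CD present the function reduces to the free target itself.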
Ologs: a categorical framework for knowledge representation.
Spivak, David I; Kent, Robert E
2012-01-01
In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
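For a hyper-rectangle uncertainty model and an affine requirement, the worst case over the set is attained at a vertex, so the largest satisfying set can be sized by bisection. The constraint and numbers below are illustrative, not taken from the paper.

```python
import itertools
import numpy as np

def worst_case_ok(g, p_nom, half_widths):
    """Check an affine constraint g(p) <= 0 on all vertices of the
    hyper-rectangle p_nom +/- half_widths (for affine g, a vertex is worst)."""
    for signs in itertools.product((-1.0, 1.0), repeat=len(p_nom)):
        if g(p_nom + np.array(signs) * half_widths) > 0:
            return False
    return True

def max_scale(g, p_nom, direction, hi=10.0, tol=1e-6):
    """Bisect for the largest alpha such that g holds on p_nom +/- alpha*direction."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_ok(g, p_nom, mid * direction):
            lo = mid
        else:
            hi = mid
    return lo

g = lambda p: p[0] + 2.0 * p[1] - 4.0     # hard design requirement g(p) <= 0
alpha = max_scale(g, np.array([1.0, 0.5]), np.ones(2))
print(alpha)   # worst vertex gives (1+a) + 2*(0.5+a) - 4 = 3a - 2 <= 0, so a = 2/3
```

Comparing this alpha against the half-widths of the actual uncertainty model gives the analytically verifiable robustness assessment the abstract describes.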
Local models of astrophysical discs
NASA Astrophysics Data System (ADS)
Latter, Henrik N.; Papaloizou, John
2017-12-01
Local models of gaseous accretion discs have been successfully employed for decades to describe an assortment of small-scale phenomena, from instabilities and turbulence, to dust dynamics and planet formation. For the most part, they have been derived in a physically motivated but essentially ad hoc fashion, with some of the mathematical assumptions never made explicit nor checked for consistency. This approach is susceptible to error, and it is easy to derive local models that support spurious instabilities or fail to conserve key quantities. In this paper we present rigorous derivations, based on an asymptotic ordering, and formulate a hierarchy of local models (incompressible, Boussinesq and compressible), making clear which is best suited for a particular flow or phenomenon, while spelling out explicitly the assumptions and approximations of each. We also discuss the merits of the anelastic approximation, emphasizing that anelastic systems struggle to conserve energy unless strong restrictions are imposed on the flow. The problems encountered by the anelastic approximation are exacerbated by the disc's differential rotation, but also attend non-rotating systems such as stellar interiors. We conclude with a defence of local models and their continued utility in astrophysical research.
ERIC Educational Resources Information Center
Achieve, Inc., 2007
2007-01-01
At the request of the Hawaii Department of Education, Achieve conducted a study of Hawaii's 2005 grade 10 State Assessment in reading and mathematics. The study compared the content, rigor and passing (meets proficiency) scores on Hawaii's assessment with those of the six states that participated in Achieve's earlier study, "Do Graduation…
NASA Astrophysics Data System (ADS)
Putri, Arrival Rince; Nova, Tertia Delia; Watanabe, M.
2016-02-01
Bird flu infection processes within a poultry farm are formulated mathematically. A spatial effect is taken into account in the virus concentration through a diffusive term. The infection process is represented in terms of traveling wave solutions. For a small removal rate, a singular perturbation analysis leads to the existence of traveling wave solutions that correspond to progressive infection in one direction.
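A standard prototype of such progressive infection is the Fisher-KPP equation u_t = D u_xx + r u (1 - u), whose fronts travel at a speed near 2*sqrt(D*r). The sketch below is illustrative only (the paper's model also tracks poultry populations): it integrates the equation with explicit finite differences and measures the front speed.

```python
import numpy as np

D, r = 1.0, 1.0
L, N = 200.0, 2000
dx = L / N                           # 0.1
dt = 0.002                           # respects the explicit limit dt < dx**2/(2D)
x = np.linspace(0.0, L, N)
u = np.where(x < 10.0, 1.0, 0.0)     # infection established on the left

def front_position(u):
    return x[np.argmax(u < 0.5)]     # first point where u drops below 1/2

nsteps = 20000                       # integrate to t = 40
for n in range(1, nsteps + 1):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (D * lap + r * u * (1.0 - u))
    u[0], u[-1] = 1.0, 0.0           # pin the infected and uninfected ends
    if n == nsteps // 2:
        p1 = front_position(u)       # front location at t = 20

speed = (front_position(u) - p1) / (dt * (nsteps - nsteps // 2))
print(speed)   # near the minimal wave speed 2*sqrt(D*r) = 2
```

The measured speed sits slightly below 2 at finite times, consistent with the slow convergence of step-like initial data to the minimal-speed wave.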
Nine formulations of quantum mechanics
NASA Astrophysics Data System (ADS)
Styer, Daniel F.; Balkin, Miranda S.; Becker, Kathryn M.; Burns, Matthew R.; Dudley, Christopher E.; Forth, Scott T.; Gaumer, Jeremy S.; Kramer, Mark A.; Oertel, David C.; Park, Leonard H.; Rinkoski, Marie T.; Smith, Clait T.; Wotherspoon, Timothy D.
2002-03-01
Nine formulations of nonrelativistic quantum mechanics are reviewed. These are the wavefunction, matrix, path integral, phase space, density matrix, second quantization, variational, pilot wave, and Hamilton-Jacobi formulations. Also mentioned are the many-worlds and transactional interpretations. The various formulations differ dramatically in mathematical and conceptual overview, yet each one makes identical predictions for all experimental results.
Treatment of charge singularities in implicit solvent models.
Geng, Weihua; Yu, Sining; Wei, Guowei
2007-09-21
This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are a bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing as the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. 
Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
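The branch-and-bound logic described above can be sketched in one dimension with a deliberately crude interval lower bound; the rigorous Taylor-model underestimators obtained from Differential Algebraic methods are far sharper, but the pruning mechanism is the same. The objective and interval below are illustrative.

```python
import heapq

# Toy 1-D branch-and-bound: minimize f(x) = x**4 - 3*x**3 + 2 on [-2, 3].
def f(x):
    return x**4 - 3.0 * x**3 + 2.0

def lower_bound(a, b):
    """Naive rigorous lower bound of f on [a, b]:
    (min of x^4 on [a,b]) - (max of 3*x^3 on [a,b]) + 2."""
    lo4 = 0.0 if a <= 0.0 <= b else min(a**4, b**4)
    hi3 = max(3.0 * a**3, 3.0 * b**3)   # x^3 is monotone
    return lo4 - hi3 + 2.0

best = min(f(p) for p in (-2.0, 0.5, 3.0))        # incumbent upper bound
heap = [(lower_bound(-2.0, 3.0), -2.0, 3.0)]
while heap:
    lb, a, b = heapq.heappop(heap)
    if lb > best or b - a < 1e-6:                 # prune boxes above the incumbent
        continue
    m = 0.5 * (a + b)
    best = min(best, f(m))                        # improve the incumbent
    for lo, hi in ((a, m), (m, b)):
        lb2 = lower_bound(lo, hi)
        if lb2 <= best:
            heapq.heappush(heap, (lb2, lo, hi))

print(best)   # approximately -6.54296875, the minimum at x = 9/4
```

Because the lower bound is rigorous, no box containing the true minimizer is ever pruned, which is what guarantees that all minima are found regardless of starting values; sharper underestimators simply prune the search space faster.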
Experimenting with Mathematical Biology
ERIC Educational Resources Information Center
Sanft, Rebecca; Walter, Anne
2016-01-01
St. Olaf College recently added a Mathematical Biology concentration to its curriculum. The core course, Mathematics of Biology, was redesigned to include a wet laboratory. The lab classes required students to collect data and implement the essential modeling techniques of formulation, implementation, validation, and analysis. The four labs…
ITER-like antenna capacitors voltage probes: Circuit/electromagnetic calculations and calibrations.
Helou, W; Dumortier, P; Durodié, F; Lombard, G; Nicholls, K
2016-10-01
The analyses illustrated in this manuscript have been performed in order to provide the required data for the amplitude-and-phase calibration of the D-dot voltage probes used in the ITER-like antenna at the Joint European Torus tokamak. Their equivalent electrical circuit has been extracted and analyzed, and it has been compared to that of voltage probes installed in simple transmission lines. A radio-frequency calibration technique has been formulated and exact mathematical relations have been derived. This technique mixes in an elegant fashion data extracted from measurements and numerical calculations to retrieve the calibration factors. The latter have been compared to previous calibration data with excellent agreement, proving the robustness of the proposed radio-frequency calibration technique. In particular, it has been stressed that it is crucial to take into account environmental parasitic effects. A low-frequency calibration technique has in addition been formulated and analyzed in depth. The equivalence between the radio-frequency and low-frequency techniques has been rigorously demonstrated. The radio-frequency calibration technique is preferable in the case of the ITER-like antenna due to uncertainties on the characteristics of the cables connected at the inputs of the voltage probes. A method to extract the effect of a mismatched data acquisition system has been derived for both calibration techniques. Finally, it has been outlined that, in the case of the ITER-like antenna, voltage probes can additionally be used to monitor the currents at the inputs of the antenna.
A Rigorous Geometric Derivation of the Chiral Anomaly in Curved Backgrounds
NASA Astrophysics Data System (ADS)
Bär, Christian; Strohmaier, Alexander
2016-11-01
We discuss the chiral anomaly for a Weyl field in a curved background and show that a novel index theorem for the Lorentzian Dirac operator can be applied to describe the gravitational chiral anomaly. A formula for the total charge generated by the gravitational and gauge field background is derived directly in Lorentzian signature and in a mathematically rigorous manner. It contains a term identical to the integrand in the Atiyah-Singer index theorem and another term involving the {η}-invariant of the Cauchy hypersurfaces.
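For orientation only (signs, orientation conventions, and the precise boundary terms are suppressed here and are not quoted from the paper), the total charge creation has the Atiyah-Patodi-Singer-type shape:

```latex
Q \;=\; \int_{M} \widehat{A}(M)\wedge \operatorname{ch}(F)
\;+\; \big[\,\eta\text{-invariant contributions of the Cauchy hypersurfaces}\,\big],
```

where the integral is the term identical to the Atiyah-Singer integrand mentioned in the abstract, and the bracket collects the boundary contributions of the Cauchy hypersurfaces.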
NASA Astrophysics Data System (ADS)
Ballard, Patrick; Charles, Alexandre
2018-03-01
At the end of the 1970s, Schatzman and Moreau undertook to revisit the venerable dynamics of rigid bodies with contact and dry friction in the light of more recent mathematics. One stated objective was to reach, for the first time, a mathematically consistent formulation of an initial value problem associated with the dynamics. The purpose of this article is to review the current state of the art concerning not only the formulation, but also the existence and uniqueness of solutions.
Lesovik, G B; Lebedev, A V; Sadovskyy, I A; Suslov, M V; Vinokur, V M
2016-09-12
Remarkable progress in quantum information theory (QIT) has allowed the formulation of mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of the entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. We further demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy.
Qian, Ma; Ma, Jie
2009-06-07
Fletcher's spherical substrate model [J. Chem. Phys. 29, 572 (1958)] is a basic model for understanding heterogeneous nucleation phenomena in nature. However, a rigorous thermodynamic formulation of the model has been missing because of the significant complexities involved. This has not only left the classical model deficient but has also likely obscured other important features that would otherwise have helped to better understand and control heterogeneous nucleation on spherical substrates. This work presents a rigorous thermodynamic formulation of Fletcher's model using a novel analytical approach and discusses the new perspectives derived from it. In particular, it is shown that the use of an intermediate variable, a selected geometrical angle or pseudocontact angle between the embryo and the spherical substrate, reveals extraordinary similarities between the first derivatives of the free-energy change with respect to embryo radius for nucleation on spherical and flat substrates. Prompted by this discovery, it was found that there exists a local maximum in the difference between the equivalent contact angles for nucleation on spherical and flat substrates, owing to a local maximum in the difference between the shape factors for nucleation on spherical and flat substrate surfaces. This helps in understanding the complexity of heterogeneous nucleation phenomena in practical systems. It was also found that the unfavorable size effect occurs primarily when R < 5r* (R: radius of the substrate; r*: critical embryo radius) and diminishes rapidly with increasing R/r* beyond R/r* = 5. This finding provides a baseline for controlling size effects in heterogeneous nucleation.
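For orientation, the classical closed form of Fletcher's shape factor can be evaluated directly. The sketch below implements the standard textbook expression for f(m, x), with m = cos θ the contact-angle cosine and x = R/r*; it illustrates the original model, not the new analytical approach described in the abstract:

```python
import math

def fletcher_shape_factor(m, x):
    """Fletcher (1958) shape factor f(m, x) for nucleation on a spherical
    substrate; m = cos(theta) (theta: contact angle), x = R / r*
    (substrate radius over critical embryo radius). The heterogeneous
    barrier is Delta_G* = f * Delta_G*_homogeneous."""
    g = math.sqrt(1.0 + x * x - 2.0 * m * x)
    a = (1.0 - m * x) / g
    b = (x - m) / g
    return 0.5 * (1.0 + a ** 3
                  + x ** 3 * (2.0 - 3.0 * b + b ** 3)
                  + 3.0 * m * x * x * (b - 1.0))

# Limiting checks: complete wetting (theta = 0) removes the barrier,
# complete non-wetting (theta = 180 deg) recovers the homogeneous barrier.
print(round(fletcher_shape_factor(1.0, 2.0), 6))   # 0.0
print(round(fletcher_shape_factor(-1.0, 2.0), 6))  # 1.0
```

For intermediate angles the factor lies strictly between 0 and 1 and approaches the flat-substrate value only as x = R/r* grows, consistent with the size effect discussed in the abstract.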
Dual Treatments as Starting Point for Integrative Perceptions in Teaching Mathematics
ERIC Educational Resources Information Center
Kërënxhi, Svjetllana; Gjoci, Pranvera
2015-01-01
In this paper, we recommend mathematical teaching through dual treatments. The dual treatments notion, classified in dual interpretations, dual analyses, dual solutions, and dual formulations, is explained through concrete examples taken from mathematical textbooks of elementary education. Dual treatments provide opportunities for creating…
A survey on the measure of combat readiness
NASA Astrophysics Data System (ADS)
Wen, Kwong Fook; Nor, Norazman Mohamad; Soon, Lee Lai
2014-09-01
Measuring the combat readiness of military forces involves measures of both the tangible and the intangible elements of combat power. Though such measures exist, the mathematical models and formulae used focus mainly on either the tangible or the intangible elements alone. This paper reviews the literature to highlight the research gap in the formulation of a mathematical model that incorporates tangible together with intangible elements to measure the combat readiness of a military force, and it highlights the missing link between the tangible and intangible elements of combat power. To bridge this gap, a mathematical model could be formulated that measures both aspects of combat readiness by establishing the relationship between the causal (tangible and intangible) elements and their effects on the measure of combat readiness. The model uses multiple regression analysis as well as mathematical modeling and simulation, digesting a capability component reflecting assets and resources, a morale component reflecting human needs, and a quality-of-life component reflecting soldiers' satisfaction in life. The results of the review provide a means to bridge the research gap through the formulation of a mathematical model that yields a total measure of a military force's combat readiness, and they identify parameters for each of the variables and factors in the model.
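A minimal sketch of the kind of multiple-regression model the review points toward might look as follows; the variable names, weights, and synthetic data are all hypothetical, introduced only to illustrate the technique, and are not taken from the survey:

```python
import numpy as np

# Hypothetical illustration: combat readiness CR regressed on one tangible
# factor (capability) and two intangible factors (morale, quality of life).
rng = np.random.default_rng(0)
n = 200
capability = rng.uniform(0, 1, n)   # assets and resources
morale = rng.uniform(0, 1, n)       # human needs
qol = rng.uniform(0, 1, n)          # satisfaction in life

true_beta = np.array([0.2, 0.5, 0.2, 0.1])      # intercept + invented weights
X = np.column_stack([np.ones(n), capability, morale, qol])
cr = X @ true_beta + rng.normal(0, 0.01, n)     # "observed" readiness scores

# Ordinary least squares recovers approximately the true weights.
beta_hat, *_ = np.linalg.lstsq(X, cr, rcond=None)
print(np.round(beta_hat, 2))
```

The fitted coefficients quantify how much each causal element contributes to the readiness measure, which is the relationship the proposed model would establish.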
NASA Astrophysics Data System (ADS)
Savvinova, Nadezhda A.; Sleptsov, Semen D.; Rubtsov, Nikolai A.
2017-11-01
The mathematical model of phase change is a formulation of the Stefan problem. Various formulations of the Stefan problem for modeling radiative-conductive heat transfer during melting or solidification of a semitransparent material are presented. Analysis of the numerical results shows that radiative heat transfer has a significant effect on the temperature distributions during melting (or solidification) of the semitransparent material. In this paper, conditions for the application of the various statements of the Stefan problem are analyzed.
On Double-Entry Bookkeeping: The Mathematical Treatment
ERIC Educational Resources Information Center
Ellerman, David
2014-01-01
Double-entry bookkeeping (DEB) implicitly uses a specific mathematical construction, the group of differences using pairs of unsigned numbers ("T-accounts"). That construction was only formulated abstractly in mathematics in the nineteenth century, even though DEB had been used in the business world for over five centuries. Yet the…
Mathematical Problem Solving. Issues in Research.
ERIC Educational Resources Information Center
Lester, Frank K., Jr., Ed.; Garofalo, Joe, Ed.
This set of papers was originally developed for a conference on Issues and Directions in Mathematics Problem Solving Research held at Indiana University in May 1981. The purpose is to contribute to the clear formulation of the key issues in mathematical problem-solving research by presenting the ideas of actively involved researchers. An…
NASA Astrophysics Data System (ADS)
Rohrlich, Fritz
2011-12-01
The classical and quantum mechanical sciences are in essential need of mathematics: only thus can the laws of nature be formulated quantitatively, permitting quantitative predictions. Mathematics also facilitates extrapolations. But the classical and quantum sciences differ in essential ways: they follow different laws of logic, Aristotelian and non-Aristotelian respectively. These are explicated.
ERIC Educational Resources Information Center
Santos-Trigo, Manuel; Espinosa-Perez, Hugo; Reyes-Rodriguez, Aaron
2008-01-01
Different technological artefacts may offer distinct opportunities for students to develop resources and strategies to formulate, comprehend and solve mathematical problems. In particular, the use of dynamic software becomes relevant to assemble geometric configurations that may help students reconstruct and examine mathematical relationships. In…
Investigating the Impact of Field Trips on Teachers' Mathematical Problem Posing
ERIC Educational Resources Information Center
Courtney, Scott A.; Caniglia, Joanne; Singh, Rashmi
2014-01-01
This study examines the impact of field trip experiences on teachers' mathematical problem posing. Teachers from a large urban public school system in the Midwest participated in a professional development program that incorporated experiential learning with mathematical problem formulation experiences. During 2 weeks of summer 2011, 68 teachers…
Variable thickness transient ground-water flow model. Volume 1. Formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reisenauer, A.E.
1979-12-01
Mathematical formulation for the variable thickness transient (VTT) model of an aquifer system is presented. The basic assumptions are described. Specific data requirements for the physical parameters are discussed. The boundary definitions and solution techniques of the numerical formulation of the system of equations are presented.
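A generic governing equation for a variable-thickness (unconfined-aquifer) model of this kind is the Boussinesq form below; this is a standard textbook statement offered for orientation, not necessarily the exact VTT formulation:

```latex
S\,\frac{\partial h}{\partial t}
\;=\; \nabla \cdot \big[\, K\,(h - b)\,\nabla h \,\big] \;+\; Q ,
```

where h is the hydraulic head, b the aquifer-bottom elevation, K the hydraulic conductivity, S the storage coefficient, and Q represents sources and sinks. The saturated thickness (h − b) depends on the solution h itself, which is what makes the thickness "variable" and the problem nonlinear.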
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Yurkin, Maxim A.
2017-01-01
Although the model of randomly oriented nonspherical particles has been used in a great variety of applications of far-field electromagnetic scattering, it has never been defined in strict mathematical terms. In this Letter we use the formalism of Euler rigid-body rotations to clarify the concept of statistically random particle orientations and derive its immediate corollaries in the form of most general mathematical properties of the orientation-averaged extinction and scattering matrices. Our results serve to provide a rigorous mathematical foundation for numerous publications in which the notion of randomly oriented particles and its light-scattering implications have been considered intuitively obvious.
Application of systematic review methodology to the field of nutrition
USDA-ARS?s Scientific Manuscript database
Systematic reviews represent a rigorous and transparent approach of synthesizing scientific evidence that minimizes bias. They evolved within the medical community to support development of clinical and public health practice guidelines, set research agendas and formulate scientific consensus state...
Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.
Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco
2017-08-25
We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.
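Schematically, and in the classical Eulerian notation of Nyborg rather than the ALE displacement variables used by the authors, the two subproblems have the familiar shape:

```latex
\rho_0\,\partial_t \mathbf{v}_1
  = -\nabla p_1 + \mu\,\nabla^{2}\mathbf{v}_1
    + \big(\mu_b + \tfrac{\mu}{3}\big)\,\nabla(\nabla\cdot\mathbf{v}_1)
  \quad \text{(first order, fast time scale)},
\qquad
0 = -\nabla\langle p_2\rangle + \mu\,\nabla^{2}\langle\mathbf{v}_2\rangle
    - \rho_0\,\big\langle (\mathbf{v}_1\cdot\nabla)\mathbf{v}_1
    + \mathbf{v}_1(\nabla\cdot\mathbf{v}_1) \big\rangle
  \quad \text{(second order, time-averaged)},
```

where ⟨·⟩ denotes the average over one acoustic period: the steady second-order problem is forced by time-averaged products of the first-order fields. The ALE formulation of the paper replaces the Eulerian second-order velocity with the Lagrangian one, which is what removes the need for the Stokes-drift correction.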
Students’ Mathematical Literacy in Solving PISA Problems Based on Keirsey Personality Theory
NASA Astrophysics Data System (ADS)
Masriyah; Firmansyah, M. H.
2018-01-01
This research is descriptive and qualitative. Its purpose is to describe students' mathematical literacy in solving PISA problems on the space-and-shape content, based on Keirsey's personality theory. The subjects were four eighth-grade junior high school students, one each with a guardian, artisan, rational, or idealist personality. Data were collected through tests and interviews; the Keirsey personality test, the PISA test, and the interviews were analyzed. The mathematical literacy profile of each subject is described as follows. In formulating, the guardian subject identified the mathematical aspects as the formula for the area of a rectangle and the side lengths, and the significant variables as the terms and conditions of the problem and the formula from a previously encountered question; he translated these into mathematical language as measurements and arithmetic operations. In employing, he devised and implemented a strategy exploiting ease of calculation through the area-subtraction principle; he declared the result true, but with a partly incorrect reason, and did not use or switch between different representations. In interpreting, he stated the result as the area of the house floor and judged its reasonableness by estimating the measurements. In formulating, the artisan subject identified the mathematical aspects as the plane figure and the side lengths, and the significant variables as the solution procedures for both the everyday problem and a previously encountered question; he translated these into mathematical language as measurements, variables, and arithmetic operations, together with symbolic representation. In employing, he devised and implemented a strategy comparing two designs; he declared the result true without giving a reason and used symbolic representation only. In interpreting, he stated the result as the floor area of the house and judged its reasonableness by estimating the measurements.
In formulating, the rational subject identified the mathematical aspects as scale and side lengths, and the significant variables as the solution strategy from a previously encountered question; he translated these into mathematical language as measurements, variables, and arithmetic operations, together with symbolic and graphic representation. In employing, he devised and implemented a strategy forming an additional plane figure under the area-subtraction principle; he declared the result true on the basis of the calculation process and used and switched between symbolic and graphic representations. In interpreting, he stated the result as the area of the house including the terrace and walls and judged its reasonableness by estimating the measurements. In formulating, the idealist subject identified the mathematical aspects as side lengths, and the significant variables as the terms and conditions of the problem; he translated these into mathematical language as measurements, variables, and arithmetic operations, together with symbolic and graphic representation. In employing, he devised and implemented a strategy of trial and error with two designs in the search for solutions; he declared the result true on the basis of the two solution designs and used and switched between symbolic and graphic representations. In interpreting, he stated the result as the floor area of the house and judged its reasonableness by estimating the measurements.
Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.
NASA Astrophysics Data System (ADS)
Velichkin, Vladimir A.; Zavyalov, Vladimir A.
2018-03-01
This article presents the results of an analysis of the control of thermal objects (heat exchangers, dryers, heat treatment chambers, etc.). The results were used to determine a mathematical model of a generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient, and the mathematical programming problem was formulated from this criterion, the control object's mathematical model, and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. It should be noted that, in the case under review, the search for the optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.
Problem formulation in the environmental risk assessment for genetically modified plants
Wolt, Jeffrey D.; Keese, Paul; Raybould, Alan; Burachik, Moisés; Gray, Alan; Olin, Stephen S.; Schiemann, Joachim; Sears, Mark; Wu, Felicia
2009-01-01
Problem formulation is the first step in environmental risk assessment (ERA) where policy goals, scope, assessment endpoints, and methodology are distilled to an explicitly stated problem and approach for analysis. The consistency and utility of ERAs for genetically modified (GM) plants can be improved through rigorous problem formulation (PF), producing an analysis plan that describes relevant exposure scenarios and the potential consequences of these scenarios. A properly executed PF assures the relevance of ERA outcomes for decision-making. Adopting a harmonized approach to problem formulation should bring about greater uniformity in the ERA process for GM plants among regulatory regimes globally. This paper is the product of an international expert group convened by the International Life Sciences Institute (ILSI) Research Foundation. PMID:19757133
ERIC Educational Resources Information Center
Adamu, L. E.
2015-01-01
The purpose of the study was to determine the relationship between scores in mathematics knowledge and the teaching practice of Diploma mathematics students. A sample of 39 students was used. Two research questions were asked and two hypotheses formulated. An ex-post facto correlational design was used. The data were analyzed using…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lesovik, G. B.; Lebedev, A. V.; Sadovskyy, I. A.
Remarkable progress in quantum information theory (QIT) has allowed the formulation of mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of the entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. Lastly, we demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy.
Lesovik, G. B.; Lebedev, A. V.; Sadovskyy, I. A.; Suslov, M. V.; Vinokur, V. M.
2016-01-01
Remarkable progress in quantum information theory (QIT) has allowed the formulation of mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of the entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. We further demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy. PMID:27616571
Lesovik, G. B.; Lebedev, A. V.; Sadovskyy, I. A.; ...
2016-09-12
Remarkable progress in quantum information theory (QIT) has allowed the formulation of mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of the entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. Lastly, we demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy.
Cross-Cultural Predictors of Mathematical Talent and Academic Productivity
ERIC Educational Resources Information Center
Nokelainen, Petri; Tirri, Kirsi; Campbell, James Reed
2004-01-01
The main goal of this paper is to investigate cross-cultural factors that predict academic ability among mathematically gifted Olympians in Finland and the United States. The following two research problems are formulated: (1) What factors contribute to or impede the development of the Olympians' mathematical talent? and (2) Do the Olympians fulfill…
GENERAL REPORT OF MATHEMATICS CONFERENCE AND TWO SPECIFIC REPORTS. (TITLE SUPPLIED).
ERIC Educational Resources Information Center
Educational Services, Inc., Watertown, MA.
THE FIRST PAPER, "REPORT OF MATHEMATICS CONFERENCE," IS A SUMMARY OF DISCUSSIONS BY 29 PARTICIPANTS IN A CONFERENCE ON CURRENT PROBLEMS IN MATHEMATICS EDUCATION RESEARCH. REPORTED ARE (1) RECENT PROGRESS, PROBLEMS, AND PLANS OF CURRICULUM DEVELOPMENT GROUPS, (2) GENERAL FORMULATION OF CURRICULUM AND METHODS, (3) TEACHER TRAINING, (4)…
Formulating the Fibonacci Sequence: Paths or Jumps in Mathematical Understanding.
ERIC Educational Resources Information Center
Kieren, Thomas; And Others
In dynamical theory, mathematical understanding is considered to be that of a person (or group) of a topic (or problem) in a situation or setting. This paper compares the interactions between the situations and the mathematical understandings of two students by comparing the growth in understanding within a Fibonacci sequence setting in which…
Mathematical Metaphors: Problem Reformulation and Analysis Strategies
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
This paper addresses the critical need for the development of intelligent or assisting software tools for the scientist who is working in the initial problem formulation and mathematical model representation stage of research. In particular, examples of that representation in fluid dynamics and instability theory are discussed. The creation of a mathematical model that is ready for application of certain solution strategies requires extensive symbolic manipulation of the original mathematical model. These manipulations can be as simple as term reordering or as complicated as discovery of various symmetry groups embodied in the equations, whereby Bäcklund-type transformations create new determining equations and integrability conditions or create differential Gröbner bases that are then solved in place of the original nonlinear PDEs. Several examples are presented of the kinds of problem formulations and transforms that can be frequently encountered in model representation for fluids problems. The capability of intelligently automating these types of transforms, available prior to actual mathematical solution, is advocated. Physical meaning and assumption-understanding can then be propagated through the mathematical transformations, allowing for explicit strategy development.
Symmetry Properties of Potentiometric Titration Curves.
ERIC Educational Resources Information Center
Macca, Carlo; Bombi, G. Giorgio
1983-01-01
Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)
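The mirror symmetry of a strong-acid/strong-base curve about the equivalence point can be verified in a few lines of code. The sketch below neglects dilution by the titrant, so the symmetry pH(d) + pH(−d) = pKw is exact; it illustrates the kind of property the article treats graphically:

```python
import math

KW = 1.0e-14  # water autoprotolysis constant at 25 C

def strong_strong_pH(excess_acid):
    """pH of a strong acid/strong base mixture, parameterized by the excess
    analytical acid concentration in mol/L (negative past the equivalence
    point). Dilution by the titrant is neglected. Exact charge balance
    [H+] - Kw/[H+] = excess_acid gives a quadratic in [H+]."""
    h = 0.5 * (excess_acid + math.sqrt(excess_acid ** 2 + 4.0 * KW))
    return -math.log10(h)

# Mirror symmetry about the equivalence point: pH(d) + pH(-d) = pKw = 14.
for d in (1e-3, 1e-5, 1e-7):
    print(round(strong_strong_pH(d) + strong_strong_pH(-d), 6))  # 14.0
```

The symmetry follows because the two hydrogen-ion concentrations multiply exactly to Kw, which is the algebraic content behind the graphical logarithmic-diagram argument.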
The KP Approximation Under a Weak Coriolis Forcing
NASA Astrophysics Data System (ADS)
Melinand, Benjamin
2018-02-01
In this paper, we study the asymptotic behavior of weakly transverse water-waves under a weak Coriolis forcing in the long wave regime. We derive the Boussinesq-Coriolis equations in this setting and we provide a rigorous justification of this model. Then, from these equations, we derive two other asymptotic models. When the Coriolis forcing is weak, we fully justify the rotation-modified Kadomtsev-Petviashvili equation (also called Grimshaw-Melville equation). When the Coriolis forcing is very weak, we rigorously justify the Kadomtsev-Petviashvili equation. This work provides the first mathematical justification of the KP approximation under a Coriolis forcing.
A mathematical theorem as the basis for the second law: Thomson's formulation applied to equilibrium
NASA Astrophysics Data System (ADS)
Allahverdyan, A. E.; Nieuwenhuizen, Th. M.
2002-03-01
There are several formulations of the second law, and they may, in principle, have different domains of validity. Here a simple mathematical theorem is proven which serves as the most general basis for the second law, namely the Thomson formulation (“cyclic changes cost energy”), applied to equilibrium. This formulation of the second law is a property akin to particle conservation (normalization of the wave function). It has been strictly proven for a canonical ensemble and made plausible for a micro-canonical ensemble. As the derivation does not assume time-inversion invariance, it is applicable to situations where persistent currents occur. This clear-cut derivation allows one to revive the “no perpetuum mobile in equilibrium” formulation of the second law and to criticize some assumptions that are widespread in the literature. The result puts recent work on the foundations and limitations of the second law in proper perspective and structures this relatively new field of research.
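In modern notation the theorem is the passivity of Gibbs states; schematically (conventions assumed here, not quoted from the paper), for a canonical state ρ = e^{−βH}/Z and any cyclic variation H(0) = H(τ) = H generating a unitary U, the work done on the system obeys

```latex
W \;=\; \operatorname{Tr}\!\big[H\,U\rho\,U^{\dagger}\big]
   \;-\; \operatorname{Tr}\!\big[H\rho\big] \;\geq\; 0 ,
```

i.e. no cyclic process can extract work from a canonical equilibrium state: "no perpetuum mobile in equilibrium."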
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
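As a concrete example of a geometric-PDE method of the kind surveyed, the sketch below implements explicit steps of Perona-Malik anisotropic diffusion, a standard edge-preserving image enhancer; this particular scheme is chosen for illustration and is not necessarily one used in the paper:

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, lam=0.2):
    """One explicit step of Perona-Malik anisotropic diffusion: smooths
    noise while the conductivities g = exp(-(|grad|/kappa)^2) suppress
    diffusion across strong edges (lam <= 0.25 for stability)."""
    n = np.roll(img, -1, 0) - img   # finite differences to the
    s = np.roll(img, 1, 0) - img    # four nearest neighbours
    e = np.roll(img, -1, 1) - img
    w = np.roll(img, 1, 1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                                # sharp vertical edge
noisy = img + rng.normal(0, 0.05, img.shape)
out = noisy.copy()
for _ in range(20):
    out = perona_malik_step(out)

# Noise variance in the flat region drops while the edge is preserved.
print(out[:, :30].std() < noisy[:, :30].std())   # True
```

The edge-stopping function is what makes this "geometric": diffusion is driven by the local image geometry rather than applied uniformly as in plain Gaussian smoothing.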
Acoustic streaming: an arbitrary Lagrangian–Eulerian perspective
Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco
2017-01-01
We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid–structure interaction problems in microacoustofluidic devices. After the formulation’s exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches. PMID:29051631
Gordon, M. J. C.
2015-01-01
Robin Milner's paper, ‘The use of machines to assist in rigorous proof’, introduces methods for automating mathematical reasoning that are a milestone in the development of computer-assisted theorem proving. His ideas, particularly his theory of tactics, revolutionized the architecture of proof assistants. His methodology for automating rigorous proof soundly, particularly his theory of type polymorphism in programing, led to major contributions to the theory and design of programing languages. His citation for the 1991 ACM A.M. Turing award, the most prestigious award in computer science, credits him with, among other achievements, ‘probably the first theoretically based yet practical tool for machine assisted proof construction’. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society. PMID:25750147
Towards a Unified Theory of Engineering Education
ERIC Educational Resources Information Center
Salcedo Orozco, Oscar H.
2017-01-01
STEM education is an interdisciplinary approach to learning where rigorous academic concepts are coupled with real-world lessons and activities as students apply science, technology, engineering, and mathematics in contexts that make connections between school, community, work, and the global enterprise enabling STEM literacy (Tsupros, Kohler and…
Evaluation, Instruction and Policy Making. IIEP Seminar Paper: 9.
ERIC Educational Resources Information Center
Bloom, Benjamin S.
Recently, educational evaluation has attempted to use the precision, objectivity, and mathematical rigor of the psychological measurement field as well as to find ways in which instrumentation and data utilization could more directly be related to educational institutions, educational processes, and educational purposes. The linkages between…
Developing Student-Centered Learning Model to Improve High Order Mathematical Thinking Ability
ERIC Educational Resources Information Center
Saragih, Sahat; Napitupulu, Elvis
2015-01-01
The purpose of this research was to develop student-centered learning model aiming to improve high order mathematical thinking ability of junior high school students of based on curriculum 2013 in North Sumatera, Indonesia. The special purpose of this research was to analyze and to formulate the purpose of mathematics lesson in high order…
Mathematical methods for protein science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.; Istrail, S.; Atkins, J.
1997-12-31
Understanding the structure and function of proteins is a fundamental endeavor in molecular biology. Currently, over 100,000 protein sequences have been determined by experimental methods. The three-dimensional structure of a protein determines its function, but fewer than 4,000 structures are currently known to atomic resolution. Accordingly, techniques to predict protein structure from sequence have an important role in aiding the understanding of the genome and of the effects of mutations in genetic disease. The authors describe current efforts at Sandia to better understand the structure of proteins through rigorous mathematical analyses of simple lattice models. The efforts have focused on two aspects of protein science: mathematical structure prediction, and inverse protein folding.
On the convergence of the coupled-wave approach for lamellar diffraction gratings
NASA Technical Reports Server (NTRS)
Li, Lifeng; Haggans, Charles W.
1992-01-01
Among the many existing rigorous methods for analyzing diffraction of electromagnetic waves by diffraction gratings, the coupled-wave approach stands out because of its versatility and simplicity. It can be applied to volume gratings and surface relief gratings, and its numerical implementation is much simpler than that of the others. In addition, its predictions have been experimentally validated in several cases. These facts explain the popularity of the coupled-wave approach among many optical engineers in the field of diffractive optics. However, a comprehensive analysis of the convergence of the model predictions has never been presented, although several authors have recently reported convergence difficulties with the model when it is used for metallic gratings in TM polarization. Herein, three points are made: (1) in the TM case, the coupled-wave approach converges much more slowly than the modal approach of Botten et al.; (2) the slow convergence is caused by the use of Fourier expansions for the permittivity and the fields in the grating region; and (3) it is manifested in the slow convergence of the eigenvalues and the associated modal fields. The reader is assumed to be familiar with the mathematical formulations of the coupled-wave and modal approaches.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
2002-01-01
The work presented here formulates the rigorous statistical basis for the correct estimation of communication link SNR of a BPSK, QPSK, and for that matter, any M-ary phase-modulated digital signal from what is known about its statistical behavior at the output of the receiver demodulator. Many methods to accomplish this have been proposed and implemented in the past but all of them are based on tacit and unwarranted assumptions and are thus defective. However, the basic idea is well founded, i.e., the signal at the output of a communications demodulator has convolved within it the prevailing SNR characteristic of the link. The acquisition of the SNR characteristic is of the utmost importance to a communications system that must remain reliable in adverse propagation conditions. This work provides a correct and consistent mathematical basis for the proper statistical 'deconvolution' of the output of a demodulator to yield a measure of the SNR. The use of such techniques will alleviate the need and expense for a separate propagation link to assess the propagation conditions prevailing on the communications link. Furthermore, they are applicable for every situation involving the digital transmission of data over planetary and space communications links.
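As one concrete illustration of recovering SNR from demodulator-output statistics, the classical second- and fourth-moment (M2M4) estimator for real-valued BPSK can be sketched as follows; this is a textbook moment method offered for orientation, not the statistical formulation developed in the paper:

```python
import numpy as np

def m2m4_snr(y):
    """Classical M2M4 moment estimator for real BPSK in additive white
    Gaussian noise.  With signal power S and noise power N:
    E[y^2] = S + N and E[y^4] = S^2 + 6*S*N + 3*N^2, hence
    S = sqrt(1.5*M2^2 - 0.5*M4) and N = M2 - S."""
    m2 = np.mean(y ** 2)
    m4 = np.mean(y ** 4)
    S = np.sqrt(max(1.5 * m2 ** 2 - 0.5 * m4, 0.0))
    N = max(m2 - S, 1e-12)
    return S / N

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=200_000)   # unit-power BPSK symbols
noise = 0.5 * rng.standard_normal(200_000)        # noise power 0.25, true SNR = 4
snr_hat = m2m4_snr(symbols + noise)
```

The estimator needs no pilot symbols, which is the sense in which the demodulator output "has the SNR convolved within it."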
Zeroth Law, Entropy, Equilibrium, and All That
NASA Astrophysics Data System (ADS)
Canagaratna, Sebastian G.
2008-05-01
The place of the zeroth law in the teaching of thermodynamics is examined in the context of the recent discussion by Gislason and Craig of some problems involving the establishment of thermal equilibrium. The concept of thermal equilibrium is introduced through the zeroth law. The relation between the zeroth law and the second law in the traditional approach to thermodynamics is discussed. It is shown that the traditional approach does not need to appeal to the second law to solve with rigor the type of problems discussed by Gislason and Craig: in problems not involving chemical reaction, the zeroth law and the condition for mechanical equilibrium, complemented by the first law and any necessary equations of state, are sufficient to determine the final state. We have to invoke the second law only if we wish to calculate the change of entropy. Since most students are exposed to a traditional approach to thermodynamics, the examples of Gislason and Craig are re-examined in terms of the traditional formulation. The maximization of the entropy in the final state can be verified in the traditional approach quite directly by the use of the fundamental equations of thermodynamics. This approach uses relatively simple mathematics in as general a setting as possible.
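The division of labor described above can be made concrete with a toy calculation (an illustration, not drawn from the article): for two incompressible bodies brought into thermal contact, the zeroth and first laws alone fix the final temperature, and the second law enters only when the entropy change is wanted.

```python
import math

def equilibrate(C1, T1, C2, T2):
    """Two incompressible bodies (heat capacities C1, C2; initial
    temperatures T1, T2) brought into thermal contact.  Energy
    conservation (first law) alone fixes the final temperature;
    the second law is needed only to evaluate the entropy change."""
    Tf = (C1 * T1 + C2 * T2) / (C1 + C2)
    dS = C1 * math.log(Tf / T1) + C2 * math.log(Tf / T2)
    return Tf, dS

Tf, dS = equilibrate(C1=100.0, T1=300.0, C2=200.0, T2=350.0)
# dS > 0, confirming the entropy is maximized at the final state
```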
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. When conducting an EEG evaluation, placing source currents in the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence-conforming H(div) basis functions. Both linear and quadratic functions are used, while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. The results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approaches, which utilize monopolar loads instead of dipolar currents.
Adaptive tracking control for active suspension systems with non-ideal actuators
NASA Astrophysics Data System (ADS)
Pan, Huihui; Sun, Weichao; Jing, Xingjian; Gao, Huijun; Yao, Jianyong
2017-07-01
As a critical component of transportation vehicles, active suspension systems are instrumental in the improvement of ride comfort and maneuverability. However, practical active suspensions commonly suffer from parameter uncertainties (e.g., variations of payload mass and suspension component parameters), external disturbances and especially unknown non-ideal actuators (i.e., dead-zone and hysteresis nonlinearities), which significantly deteriorate control performance in practice. To overcome these issues, this paper synthesizes an adaptive tracking control strategy for vehicle suspension systems to achieve suspension performance improvements. The proposed control algorithm is formulated within a unified framework for non-ideal actuators, rather than treating each nonlinearity separately, which is a simple yet effective way to remove the unexpected nonlinear effects. From the perspective of practical implementation, the advantages of the presented controller for active suspensions include that the assumptions of measurable actuator outputs, prior knowledge of nonlinear actuator parameters, and uncertain parameters confined to a known compact set are not required. Furthermore, the stability of the closed-loop suspension system is theoretically guaranteed by rigorous mathematical analysis. Finally, the effectiveness of the presented adaptive control scheme is confirmed using comparative numerical simulation validations.
A Theoretical Approach to Understanding Population Dynamics with Seasonal Developmental Durations
NASA Astrophysics Data System (ADS)
Lou, Yijun; Zhao, Xiao-Qiang
2017-04-01
There is a growing body of biological investigations to understand the impacts of seasonally changing environmental conditions on population dynamics in various research fields such as single population growth and disease transmission. On the other hand, understanding population dynamics subject to seasonally changing weather conditions plays a fundamental role in predicting the trends of population patterns and disease transmission risks under scenarios of climate change. With the host-macroparasite interaction as a motivating example, we propose a synthesized approach for investigating population dynamics subject to seasonal environmental variations from a theoretical point of view, in which model development, basic reproduction ratio formulation and computation, and rigorous mathematical analysis are involved. The resultant model with periodic delay presents a novel term related to the rate of change of the developmental duration, bringing new challenges to dynamics analysis. By investigating a periodic semiflow on a suitably chosen phase space, global dynamics of a threshold type are established: all solutions either go to zero when the basic reproduction ratio is less than one, or stabilize at a positive periodic state when the ratio is greater than one. The synthesized approach developed here is applicable to broader contexts of investigating biological systems with seasonal developmental durations.
Uncertainty and variability in computational and mathematical models of cardiac physiology.
Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H
2016-12-01
Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. 
We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
A formulation of the foundations of genetics and evolution.
Bahr, Brian Edward
2016-05-01
This paper proposes a formulation of theories of the foundations of genetics and evolution that can be used to mathematically simulate phenotype expression, reproduction, mutation, and natural selection. It will be shown that Mendelian inheritance can be mathematically simulated with expressions involving matrices and that these expressions can also simulate phenomena that are modifications to Mendel's basic principles, like alleles that give rise to quantitative effects and traits that are the expression of multiple alleles and/or multiple genetic loci. Copyright © 2016 Elsevier Inc. All rights reserved.
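The matrix simulation of Mendelian inheritance can be illustrated in its simplest special case (a sketch of the familiar monohybrid cross, not the paper's general formalism): gamete-probability vectors for each parent, whose outer product is the Punnett square of genotype probabilities.

```python
import numpy as np

# Gamete probabilities [P(A), P(a)] for each parent in an Aa x Aa cross.
parent1 = np.array([0.5, 0.5])
parent2 = np.array([0.5, 0.5])

# The outer product is the Punnett square of genotype probabilities.
punnett = np.outer(parent1, parent2)

p_AA = punnett[0, 0]
p_Aa = punnett[0, 1] + punnett[1, 0]   # Aa and aA are the same genotype
p_aa = punnett[1, 1]
p_dominant = p_AA + p_Aa               # with A dominant: the classic 3:1 ratio
```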
Stress, deformation, conservation, and rheology: a survey of key concepts in continuum mechanics
Major, J.J.
2013-01-01
This chapter provides a brief survey of key concepts in continuum mechanics. It focuses on the fundamental physical concepts that underlie derivations of the mathematical formulations of stress, strain, hydraulic head, pore-fluid pressure, and conservation equations. It then shows how stresses are linked to strain and rates of distortion through some special cases of idealized material behaviors. The goal is to equip the reader with a physical understanding of key mathematical formulations that anchor continuum mechanics in order to better understand theoretical studies published in geomorphology.
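As an example of the idealized material behaviors that link stress to strain, isotropic linear elasticity can be written in two lines (an illustrative sketch with made-up Lamé parameters, not material from the chapter):

```python
import numpy as np

# Isotropic linear elasticity (Hooke's law):
#   sigma = lambda * tr(eps) * I + 2 * mu * eps
lam, mu = 5e9, 3e9                       # Lame parameters in Pa (illustrative)
eps = np.array([[1e-4, 2e-5, 0.0],       # small-strain tensor (symmetric)
                [2e-5, -5e-5, 0.0],
                [0.0, 0.0, 0.0]])
sigma = lam * np.trace(eps) * np.eye(3) + 2 * mu * eps   # stress tensor in Pa
```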
Solving America's Math Problem
ERIC Educational Resources Information Center
Vigdor, Jacob
2013-01-01
Concern about students' math achievement is nothing new, and debates about the mathematical training of the nation's youth date back a century or more. In the early 20th century, American high-school students were starkly divided, with rigorous math courses restricted to a college-bound elite. At midcentury, the "new math" movement sought,…
A Novel Approach to Physiology Education for Biomedical Engineering Students
ERIC Educational Resources Information Center
DiCecco, J.; Wu, J.; Kuwasawa, K.; Sun, Y.
2007-01-01
It is challenging for biomedical engineering programs to incorporate an indepth study of the systemic interdependence of cells, tissues, and organs into the rigorous mathematical curriculum that is the cornerstone of engineering education. To be sure, many biomedical engineering programs require their students to enroll in anatomy and physiology…
ERIC Educational Resources Information Center
Cassata-Widera, Amy; Century, Jeanne; Kim, Dae Y.
2011-01-01
The practical need for multidimensional measures of fidelity of implementation (FOI) of reform-based science, technology, engineering, and mathematics (STEM) instructional materials, combined with a theoretical need in the field for a shared conceptual framework that could support accumulating knowledge on specific enacted program elements across…
Group Practices: A New Way of Viewing CSCL
ERIC Educational Resources Information Center
Stahl, Gerry
2017-01-01
The analysis of "group practices" can make visible the work of novices learning how to inquire in science or mathematics. These ubiquitous practices are invisibly taken for granted by adults, but can be observed and rigorously studied in adequate traces of online collaborative learning. Such an approach contrasts with traditional…
A Transformative Model for Undergraduate Quantitative Biology Education
ERIC Educational Resources Information Center
Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…
Exploring in Aeronautics. An Introduction to Aeronautical Sciences.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Cleveland, OH. Lewis Research Center.
This curriculum guide is based on a year of lectures and projects of a contemporary special-interest Explorer program intended to provide career guidance and motivation for promising students interested in aerospace engineering and scientific professions. The adult-oriented program avoids technicality and rigorous mathematics and stresses real…
Virginia's College and Career Readiness Initiative
ERIC Educational Resources Information Center
Virginia Department of Education, 2010
2010-01-01
In 1995, Virginia began a broad educational reform program that resulted in revised, rigorous content standards, the Virginia Standards of Learning (SOL), in the content areas of English, mathematics, science, and history and social science. These grade-by-grade and course-based standards were developed over 14 months with revision teams including…
Math Exchanges: Guiding Young Mathematicians in Small-Group Meetings
ERIC Educational Resources Information Center
Wedekind, Kassia Omohundro
2011-01-01
Traditionally, small-group math instruction has been used as a format for reaching children who struggle to understand. Math coach Kassia Omohundro Wedekind uses small-group instruction as the centerpiece of her math workshop approach, engaging all students in rigorous "math exchanges." The key characteristics of these mathematical conversations…
Zoos, Aquariums, and Expanding Students' Data Literacy
ERIC Educational Resources Information Center
Mokros, Jan; Wright, Tracey
2009-01-01
Zoo and aquarium educators are increasingly providing educationally rigorous programs that connect their animal collections with curriculum standards in mathematics as well as science. Partnering with zoos and aquariums is a powerful way for teachers to provide students with more opportunities to observe, collect, and analyze scientific data. This…
Models, Data, and War: a Critique of the Foundation for Defense Analyses.
1980-03-12
Contents include: a scientific formulation; an "objective" solution; analysis of a squishy problem; a judgmental formulation; a potential for distortion; a subjective element. Different analysts, with apparently identical knowledge of a real-world problem, may develop plausible formulations inextricably tied to those judgments. The formulation of a computer model--conceiving a mathematical representation of the real world--is a concrete theoretical statement.
Psychoacoustic entropy theory and its implications for performance practice
NASA Astrophysics Data System (ADS)
Strohman, Gregory J.
This dissertation attempts to motivate, derive, and suggest potential uses for a generalized perceptual theory of musical harmony called psychoacoustic entropy theory. This theory treats the human auditory system as a physical system which takes acoustic measurements. As a result, the human auditory system is subject to all the appropriate uncertainties and limitations of other physical measurement systems. This is the theoretical basis for defining psychoacoustic entropy. Psychoacoustic entropy is a numerical quantity which indexes the degree to which the human auditory system perceives instantaneous disorder within a sound pressure wave. Chapter one explains the importance of harmonic analysis as a tool for performance practice. It also outlines the critical limitations of many of the most influential historical approaches to modeling harmonic stability, particularly when compared to available scientific research in psychoacoustics. Rather than analyze a musical excerpt, psychoacoustic entropy is calculated directly from sound pressure waves themselves. This frames psychoacoustic entropy theory in the most general possible terms as a theory of musical harmony, enabling it to be invoked for any perceivable sound. Chapter two provides and examines many widely accepted mathematical models of the acoustics and psychoacoustics of these sound pressure waves. Chapter three introduces entropy as a precise way of measuring perceived uncertainty in sound pressure waves. Entropy is used, in combination with the acoustic and psychoacoustic models introduced in chapter two, to motivate the mathematical formulation of psychoacoustic entropy theory. Chapter four shows how to use psychoacoustic entropy theory to analyze certain types of musical harmonies, while chapter five applies the analytical tools developed in chapter four to two short musical excerpts to influence their interpretation. Almost every form of harmonic analysis invokes some degree of mathematical reasoning.
However, the limited scope of most harmonic systems used for Western common practice music greatly simplifies the necessary level of mathematical detail. Psychoacoustic entropy theory requires a greater degree of mathematical complexity due to its sheer scope as a generalized theory of musical harmony. Fortunately, under specific assumptions the theory can take on vastly simpler forms. Psychoacoustic entropy theory appears to be highly compatible with the latest scientific research in psychoacoustics. However, the theory itself should be regarded as a hypothesis and this dissertation an experiment in progress. The evaluation of psychoacoustic entropy theory as a scientific theory of human sonic perception must await more rigorous future research.
Towards a wave theory of charged beam transport: A collection of thoughts
NASA Technical Reports Server (NTRS)
Dattoli, G.; Mari, C.; Torre, A.
1992-01-01
We formulate in a rigorous way a wave theory of charged beam linear transport. The Wigner distribution function is introduced and provides the link with classical mechanics. Finally, the von Neumann equation is shown to coincide with the Liouville equation for the nonlinear transport.
ERIC Educational Resources Information Center
Elgin, Catherine Z.
2013-01-01
Virtue epistemologists hold that knowledge results from the display of epistemic virtues--open-mindedness, rigor, sensitivity to evidence, and the like. But epistemology cannot rest satisfied with a list of the virtues. What is wanted is a criterion for being an epistemic virtue. An extension of a formulation of Kant's categorical imperative…
What Can Other Areas Teach Us about Numeracy?
ERIC Educational Resources Information Center
Ferme, Elizabeth
2014-01-01
Education professionals, regardless of their specialist area, are broadly aware of the importance of numeracy. Internationally, definitions of numeracy (known elsewhere as mathematical literacy or quantitative reasoning), describe "an individual's capacity to formulate, employ and interpret mathematics in a variety of contexts... reasoning…
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities, whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we point out potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first order Favre mean variables and the Reynolds averaged density variance, ⟨ρ′²⟩. We show that the normalized density variance ⟨ρ′²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c′²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, c̃″², as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρ′v′⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c′²⟩, its Favre analog c̃″², and various second moments including ⟨ρ′v′⟩. For moment closure models that evolve ⟨ρ′v′⟩ and not ⟨ρ′²⟩, we provide a novel expression for ⟨ρ′²⟩ in terms of a rational function of ⟨ρ′v′⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences).
We have derived analytic results relating several other second and third order moments, and we see coupling between odd and even order moments, demonstrating a natural and inherent skewness in the mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in progress-variable formulation).
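One class of such exact results is easy to check numerically: because the specific volume is v = 1/ρ pointwise, ⟨ρv⟩ = 1 exactly, so the density-specific volume covariance equals 1 − ⟨ρ⟩⟨v⟩ and is non-positive for any mixture (a verification sketch with made-up densities, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
rho1, rho2 = 1.0, 20.0                  # pure-fluid densities, O(1) contrast
light = rng.random(100_000) < 0.3       # binary mixture indicator field
rho = np.where(light, rho1, rho2)       # density field
v = 1.0 / rho                           # specific volume field

# rho * v == 1 pointwise, so the covariance reduces to 1 - <rho><v>,
# which is non-positive by the Cauchy-Schwarz (AM-HM) inequality.
cov_rho_v = np.mean(rho * v) - rho.mean() * v.mean()
```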
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
A Framework of Mathematics Inductive Reasoning
ERIC Educational Resources Information Center
Christou, Constantinos; Papageorgiou, Eleni
2007-01-01
Based on a synthesis of the literature in inductive reasoning, a framework for prescribing and assessing mathematics inductive reasoning of primary school students was formulated and validated. The major constructs incorporated in this framework were students' cognitive abilities of finding similarities and/or dissimilarities among attributes and…
Test Anxiety and the Curriculum: The Subject Matters.
ERIC Educational Resources Information Center
Everson, Howard T.; And Others
College students' self-reported test anxiety levels in English, mathematics, physical science, and social science were compared to develop empirical support for the claim that students, in general, are more anxious about tests in rigorous academic subjects than in the humanities and to understand the curriculum-related sources of anxiety. It was…
Useful Material Efficiency Green Metrics Problem Set Exercises for Lecture and Laboratory
ERIC Educational Resources Information Center
Andraos, John
2015-01-01
A series of pedagogical problem set exercises are posed that illustrate the principles behind material efficiency green metrics and their application in developing a deeper understanding of reaction and synthesis plan analysis and strategies to optimize them. Rigorous, yet simple, mathematical proofs are given for some of the fundamental concepts,…
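One such material-efficiency metric, atom economy, reduces to simple arithmetic (a generic illustration with hypothetical molecular weights, not one of the article's exercises):

```python
def atom_economy(mw_product, mw_reactants):
    """Atom economy (%): the molecular weight of the desired product
    as a fraction of the total molecular weight of all reactants."""
    return 100.0 * mw_product / sum(mw_reactants)

# Hypothetical reaction: reactants of MW 100 and 50 yield a product of
# MW 120; the remaining mass (MW 30) leaves as byproduct.
ae = atom_economy(120.0, [100.0, 50.0])   # 80.0 %
```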
The Art of Learning: A Guide to Outstanding North Carolina Arts in Education Programs.
ERIC Educational Resources Information Center
Herman, Miriam L.
The Arts in Education programs delineated in this guide complement the rigorous arts curriculum taught by arts specialists in North Carolina schools and enable students to experience the joy of the creative process while reinforcing learning in other curricula: language arts, mathematics, social studies, science, and physical education. Programs…
Topics in Computational Learning Theory and Graph Algorithms.
ERIC Educational Resources Information Center
Board, Raymond Acton
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
High Standards Help Struggling Students: New Evidence. Charts You Can Trust
ERIC Educational Resources Information Center
Clark, Constance; Cookson, Peter W., Jr.
2012-01-01
The Common Core State Standards, adopted by 46 states and the District of Columbia, promise to raise achievement in English and mathematics through rigorous standards that promote deeper learning. But while most policymakers, researchers, and educators have embraced these higher standards, some question the fairness of raising the academic bar on…
Improving Mathematical Problem Solving in Grades 4 through 8. IES Practice Guide. NCEE 2012-4055
ERIC Educational Resources Information Center
Woodward, John; Beckmann, Sybilla; Driscoll, Mark; Franke, Megan; Herzig, Patricia; Jitendra, Asha; Koedinger, Kenneth R.; Ogbuehi, Philip
2012-01-01
The Institute of Education Sciences (IES) publishes practice guides in education to bring the best available evidence and expertise to bear on current challenges in education. Authors of practice guides combine their expertise with the findings of rigorous research, when available, to develop specific recommendations for addressing these…
NASA Technical Reports Server (NTRS)
Thomas-Keprta, Kathie L.; Clemett, Simon J.; Bazylinski, Dennis A.; Kirschvink, Joseph L.; McKay, David S.; Wentworth, Susan J.; Vali, H.; Gibson, Everett K.
2000-01-01
Here we use rigorous mathematical modeling to compare ALH84001 prismatic magnetites with those produced by terrestrial magnetotactic bacteria, MV-1. We find that this subset of the Martian magnetites appears to be statistically indistinguishable from those of MV-1.
Shaping Social Work Science: What Should Quantitative Researchers Do?
ERIC Educational Resources Information Center
Guo, Shenyang
2015-01-01
Based on a review of economists' debates on mathematical economics, this article discusses a key issue for shaping the science of social work--research methodology. The article describes three important tasks quantitative researchers need to fulfill in order to enhance the scientific rigor of social work research. First, to test theories using…
Louis Guttman's Contributions to Classical Test Theory
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.; Zumbo, Bruno D.; Ross, Donald
2005-01-01
This article focuses on Louis Guttman's contributions to the classical theory of educational and psychological tests, one of the lesser known of his many contributions to quantitative methods in the social sciences. Guttman's work in this field provided a rigorous mathematical basis for ideas that, for many decades after Spearman's initial work,…
ERIC Educational Resources Information Center
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2010-01-01
Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate…
State College- and Career-Ready High School Graduation Requirements. Updated
ERIC Educational Resources Information Center
Achieve, Inc., 2013
2013-01-01
Research by Achieve, ACT, and others suggests that for high school graduates to be prepared for success in a wide range of postsecondary settings, they need to take four years of challenging mathematics--covering Advanced Algebra; Geometry; and data, probability, and statistics content--and four years of rigorous English aligned with college- and…
Mathematics Awareness through Technology, Teamwork, Engagement, and Rigor
ERIC Educational Resources Information Center
James, Laurie
2016-01-01
The purpose of this two-year observational study was to determine if the use of technology and intervention groups affected fourth-grade math scores. Specifically, the desire was to identify the percentage of students who met or exceeded grade-level standards on the state standardized test. This study indicated possible reasons that enhanced…
ERIC Educational Resources Information Center
McEvoy, Suzanne
2012-01-01
With the changing U.S. demographics, higher numbers of diverse, low-income, first-generation students are underprepared for the academic rigors of four-year institutions oftentimes requiring assistance, and remedial and/or developmental coursework in English and mathematics. Without intervention approaches these students are at high risk for…
ERIC Educational Resources Information Center
Ashley, Michael; Cooper, Katelyn M.; Cala, Jacqueline M.; Brownell, Sara E.
2017-01-01
Summer bridge programs are designed to help transition students into the college learning environment. Increasingly, bridge programs are being developed in science, technology, engineering, and mathematics (STEM) disciplines because of the rigorous content and lower student persistence in college STEM compared with other disciplines. However, to…
Visualizing, Rather than Deriving, Russell-Saunders Terms: A Classroom Activity with Quantum Numbers
ERIC Educational Resources Information Center
Coppo, Paolo
2016-01-01
A 1 h classroom activity is presented, aimed at consolidating the concepts of microstates and Russell-Saunders energy terms in transition metal atoms and coordination complexes. The unconventional approach, based on logic and intuition rather than rigorous mathematics, is designed to stimulate discussion and enhance familiarity with quantum…
Teaching the Concept of Breakdown Point in Simple Linear Regression.
ERIC Educational Resources Information Center
Chan, Wai-Sum
2001-01-01
Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
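The phenomenon this abstract describes can be demonstrated in a few lines. The sketch below, with made-up data, shows that corrupting a single observation moves the ordinary least-squares slope arbitrarily far, which is what it means for OLS to have finite-sample breakdown point 1/n:

```python
import numpy as np

# Ordinary least-squares slope for y = a + b*x, via the normal equations.
def ols_slope(x, y):
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

x = np.arange(10.0)
y = 2.0 * x + 1.0          # perfectly linear data, slope 2
b_clean = ols_slope(x, y)

y_bad = y.copy()
y_bad[-1] = 1e6            # corrupt a single observation
b_bad = ols_slope(x, y_bad)

# One corrupted point out of ten drags the fitted slope arbitrarily far:
# the finite-sample breakdown point of OLS is 1/n, tending to 0.
print(b_clean, b_bad)
```

A robust alternative such as least median of squares has breakdown point near 1/2, which is the usual classroom contrast.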
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
New Mathematical Strategy Using Branch and Bound Method
NASA Astrophysics Data System (ADS)
Tarray, Tanveer Ahmad; Bhat, Muzafar Rasool
In this paper, the problem of optimal allocation in stratified random sampling in the presence of nonresponse is considered. The problem is formulated as a nonlinear programming problem (NLPP) and is solved using the Branch and Bound method. The results are also obtained through LINGO.
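The abstract does not give the formulation, but the flavor of a Branch and Bound solution to an integer allocation NLPP can be sketched as follows. The stratum weights, standard deviations, and budget are illustrative, not from the paper, and the objective is the standard stratified-variance term rather than the paper's nonresponse model:

```python
from math import inf

# Minimal sketch: choose integer sample sizes n_h per stratum to minimize
# sum_h (W_h * S_h)^2 / n_h subject to a total sample budget, via
# depth-first branch and bound with a continuous (Neyman-type) relaxation
# as the lower bound.
W = [0.5, 0.3, 0.2]    # illustrative stratum weights
S = [10.0, 20.0, 5.0]  # illustrative stratum standard deviations
budget = 30

def lower_bound(h, remaining):
    """Cauchy-Schwarz bound for strata h.. under a continuous allocation."""
    num = sum(W[i] * S[i] for i in range(h, len(W)))
    if num == 0.0:
        return 0.0
    return num * num / remaining if remaining > 0 else inf

best = [inf, None]     # incumbent: [objective value, allocation]

def branch(h, remaining, cost, alloc):
    if h == len(W):
        if cost < best[0]:
            best[0], best[1] = cost, alloc[:]
        return
    rest = len(W) - h - 1                  # strata still needing >= 1 unit
    for n_h in range(1, remaining - rest + 1):
        c = cost + (W[h] * S[h]) ** 2 / n_h
        # prune any branch whose relaxed bound cannot beat the incumbent
        if c + lower_bound(h + 1, remaining - n_h) >= best[0]:
            continue
        branch(h + 1, remaining - n_h, c, alloc + [n_h])

branch(0, budget, 0.0, [])
print(best[1], best[0])   # integer-optimal allocation and its objective
```

The continuous Neyman allocation here is (12.5, 15, 2.5), and the branch and bound search confirms which integer rounding of it is actually optimal.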
A Mathematical Formulation of the SCOLE Control Problem. Part 2: Optimal Compensator Design
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1988-01-01
The study initiated in Part 1 of this report is concluded and optimal feedback control (compensator) design for stability augmentation is considered, following the mathematical formulation developed in Part 1. Co-located (rate) sensors and (force and moment) actuators are assumed, and allowing for both sensor and actuator noise, stabilization is formulated as a stochastic regulator problem. Specializing the general theory developed by the author, a complete, closed form solution (believed to be new with this report) is obtained, taking advantage of the fact that the inherent structural damping is light. In particular, it is possible to solve in closed form the associated infinite-dimensional steady-state Riccati equations. The SCOLE model involves associated partial differential equations in a single space variable, but the compensator design theory developed is far more general since it is given in the abstract wave equation formulation. The results thus hold for any multibody system so long as the basic model is linear.
Marghetis, Tyler; Núñez, Rafael
2013-04-01
The canonical history of mathematics suggests that the late 19th-century "arithmetization" of calculus marked a shift away from spatial-dynamic intuitions, grounding concepts in static, rigorous definitions. Instead, we argue that mathematicians, both historically and currently, rely on dynamic conceptualizations of mathematical concepts like continuity, limits, and functions. In this article, we present two studies of the role of dynamic conceptual systems in expert proof. The first is an analysis of co-speech gesture produced by mathematics graduate students while proving a theorem, which reveals a reliance on dynamic conceptual resources. The second is a cognitive-historical case study of an incident in 19th-century mathematics that suggests a functional role for such dynamism in the reasoning of the renowned mathematician Augustin Cauchy. Taken together, these two studies indicate that essential concepts in calculus that have been defined entirely in abstract, static terms are nevertheless conceptualized dynamically, in both contemporary and historical practice. Copyright © 2013 Cognitive Science Society, Inc.
On the mathematical treatment of the Born-Oppenheimer approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr
2014-05-15
Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not fit exactly the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.
Jitendra, Asha K; Petersen-Brown, Shawna; Lein, Amy E; Zaslofsky, Anne F; Kunkel, Amy K; Jung, Pyung-Gang; Egan, Andrea M
2015-01-01
This study examined the quality of the research base related to strategy instruction priming the underlying mathematical problem structure for students with learning disabilities and those at risk for mathematics difficulties. We evaluated the quality of methodological rigor of 18 group research studies using the criteria proposed by Gersten et al. and 10 single case design (SCD) research studies using criteria suggested by Horner et al. and the What Works Clearinghouse. Results indicated that 14 group design studies met the criteria for high-quality or acceptable research, whereas SCD studies did not meet the standards for an evidence-based practice. Based on these findings, strategy instruction priming the mathematics problem structure is considered an evidence-based practice using only group design methodological criteria. Implications for future research and for practice are discussed. © Hammill Institute on Disabilities 2013.
NASA Technical Reports Server (NTRS)
Bert, C. W.; Chang, S.
1972-01-01
Elastic and damping analyses resulting in determinations of the various stiffnesses and associated loss tangents for the complete characterization of the elastic and damping behavior of a monofilament composite layer are presented. For the determination of the various stiffnesses, either an elementary mechanics-of-materials formulation or a more rigorous mixed-boundary-value elasticity formulation is used. The solution for the latter formulation is obtained by means of the boundary-point least-square error technique. Kimball-Lovell type damping is assumed for each of the constituent materials. For determining the loss tangents associated with the various stiffnesses, either the viscoelastic correspondence principle or an energy analysis based on the appropriate elastic stress distribution is used.
The Value of Information in Distributed Decision Networks
2016-03-04
formulation, and then we describe the various results attained. 1. Mathematical description of Distributed Decision Network under Information Constraints. We now define a mathematical framework for networks. Let G = (V, E) be an undirected random network (graph) drawn from a known distribution p_G.
Satellite orbit computation methods
NASA Technical Reports Server (NTRS)
1977-01-01
Mathematical and algorithmical techniques for solution of problems in satellite dynamics were developed, along with solutions to satellite orbit motion. Dynamical analysis of shuttle on-orbit operations were conducted. Computer software routines for use in shuttle mission planning were developed and analyzed, while mathematical models of atmospheric density were formulated.
Crystal Growth and Fluid Mechanics Problems in Directional Solidification
NASA Technical Reports Server (NTRS)
Tanveer, Saleh A.; Baker, Gregory R.; Foster, Michael R.
2001-01-01
Our work in directional solidification has been in the following areas: (1) Dynamics of dendrites, including rigorous mathematical analysis of the resulting equations; (2) Examination of the near-structurally unstable features of the mathematically related Hele-Shaw dynamics; (3) Numerical studies of steady temperature distribution in a vertical Bridgman device; (4) Numerical study of transient effects in a vertical Bridgman device; (5) Asymptotic treatment of quasi-steady operation of a vertical Bridgman furnace for large Rayleigh numbers and small Biot number in 3D; and (6) Understanding of the Mullins-Sekerka transition in a Bridgman device when fluid dynamics is accounted for.
Manpower Substitution and Productivity in Medical Practice
Reinhardt, Uwe E.
1973-01-01
Probably in response to the often alleged physician shortage in this country, concerted research efforts are under way to identify technically feasible opportunities for manpower substitution in the production of ambulatory health care. The approaches range from descriptive studies of the effect of task delegation on output of medical services to rigorous mathematical modeling of health care production by means of linear or continuous production functions. In this article the distinct methodological approaches underlying mathematical models are presented in synopsis, and their inherent strengths and weaknesses are contrasted. The discussion includes suggestions for future research directions. PMID:4586735
The transformation of aerodynamic stability derivatives by symbolic mathematical computation
NASA Technical Reports Server (NTRS)
Howard, J. C.
1975-01-01
The formulation of mathematical models of aeronautical systems for simulation or other purposes, involves the transformation of aerodynamic stability derivatives. It is shown that these derivatives transform like the components of a second order tensor having one index of covariance and one index of contravariance. Moreover, due to the equivalence of covariant and contravariant transformations in orthogonal Cartesian systems of coordinates, the transformations can be treated as doubly covariant or doubly contravariant, if this simplifies the formulation. It is shown that the tensor properties of these derivatives can be used to facilitate their transformation by symbolic mathematical computation, and the use of digital computers equipped with formula manipulation compilers. When the tensor transformations are mechanised in the manner described, man-hours are saved and the errors to which human operators are prone can be avoided.
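The doubly covariant transformation rule described in this abstract can be sketched numerically; the 2x2 derivative block and the rotation angle below are invented for illustration, not taken from the report:

```python
import numpy as np

# In orthogonal Cartesian coordinates, a second-order tensor with one index
# of covariance and one of contravariance transforms like a doubly covariant
# tensor: C' = R C R^T. Values here are hypothetical.
C_body = np.array([[-0.30,  0.05],
                   [ 0.10, -1.20]])   # hypothetical derivative block, body axes

def rotation(theta):
    """Direction-cosine matrix between two orthogonal Cartesian systems."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = rotation(np.deg2rad(15.0))
C_stab = R @ C_body @ R.T             # derivatives in the rotated axes

# Tensor invariants such as trace and determinant survive the transformation,
# a quick check that the rule was applied consistently.
print(np.trace(C_body), np.trace(C_stab))
```

In symbolic form this is exactly the kind of index bookkeeping a formula-manipulation compiler mechanizes, which is the paper's point.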
Drug delivery in cancer using liposomes.
Dass, Crispin R
2008-01-01
There are various types of liposomes used for cancer therapy, but these can all be placed into three distinct categories based on the surface charge of vesicles: neutral, anionic and cationic. This chapter describes the more rigorous and easy methods used for liposome manufacture, with references, to aid the reader in preparing these formulations in-house.
A Mathematical Model and Algorithm for Routing Air Traffic Under Weather Uncertainty
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.
2016-01-01
A central challenge in managing today's commercial en route air traffic is the task of routing the aircraft in the presence of adverse weather. Such weather can make regions of the airspace unusable, so all affected flights must be re-routed. Today this task is carried out by conference and negotiation between human air traffic controllers (ATC) responsible for the involved sectors of the airspace. One can argue that, in so doing, ATC try to solve an optimization problem without giving it a precise quantitative formulation. Such a formulation gives the mathematical machinery for constructing and verifying algorithms that are aimed at solving the problem. This paper contributes one such formulation and a corresponding algorithm. The algorithm addresses weather uncertainty and has closed form, which allows transparent analysis of correctness, realism, and computational costs.
Integrated Formulation of Beacon-Based Exception Analysis for Multimissions
NASA Technical Reports Server (NTRS)
Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail
2003-01-01
Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM expands upon previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.
Model Eliciting Activities: A Home Run
ERIC Educational Resources Information Center
Magiera, Marta T.
2013-01-01
An important goal of school mathematics is to enable students to formulate, approach, and refine problems beyond those they have studied, allowing them to organize and consolidate their mathematical thinking. To achieve this goal, students should be encouraged to develop expertise in a variety of areas, such as problem solving, reasoning and…
Two-fluid models of turbulence
NASA Technical Reports Server (NTRS)
Spalding, D. B.
1985-01-01
The defects of turbulence models are summarized and the importance of so-called nongradient diffusion in turbulent fluxes is discussed. The mathematical theory of the flow of two interpenetrating continua is reviewed, and the mathematical formulation of the two fluid model is outlined. Results from plane wake, axisymmetric jet, and combustion studies are shown.
Governing the Modern, Neoliberal Child through ICT Research in Mathematics Education
ERIC Educational Resources Information Center
Valero, Paola; Knijnik, Gelsa
2015-01-01
Research on the pedagogical uses of ICT for the learning of mathematics formulates cultural theses about the desired subject of education and society, and thereby contributes to fabricating the rational, Modern, self-regulated, entrepreneurial neoliberal child. Using the Foucauldian notion of governmentality, the section Technology in the…
Some Fundamental Issues of Mathematical Simulation in Biology
NASA Astrophysics Data System (ADS)
Razzhevaikin, V. N.
2018-02-01
Some directions of simulation in biology leading to original formulations of mathematical problems are overviewed. Two of them are discussed in detail: the correct solvability of first-order linear equations with unbounded coefficients and the construction of a reaction-diffusion equation with nonlinear diffusion for a model of genetic wave propagation.
High-Order Entropy Stable Formulations for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.
2013-01-01
A systematic approach is presented for developing entropy stable (SS) formulations of any order for the Navier-Stokes equations. These SS formulations discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality. They are valid for smooth as well as discontinuous flows provided sufficient dissipation is added at shocks and discontinuities. Entropy stable formulations exist for all diagonal norm, summation-by-parts (SBP) operators, including all centered finite-difference operators, Legendre collocation finite-element operators, and certain finite-volume operators. Examples are presented using various entropy stable formulations that demonstrate the current state-of-the-art of these schemes.
Geometry of the perceptual space
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John
1999-09-01
The concept of space and geometry varies across disciplines. Following Poincare, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of the theory of Riemannian foliations in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.
Soils as relative-age dating tools
Markewich, Helaine Walsh; Pavich, Milan J.; Wysocki, Douglas A.
2017-01-01
Soils develop at the earth's surface via multiple processes that act through time. Precluding burial or disturbance, soil genetic horizons form progressively and reflect the balance among formation processes, surface age, and original substrate composition. Soil morphology provides a key link between process and time (soil age), enabling soils to serve as both relative and numerical dating tools for geomorphic studies and landscape evolution. Five major factors define the contemporary state of all soils: climate, organisms, topography, parent material, and time. Soils developed on similar landforms and parent materials within a given landscape comprise what we term a soil/landform/substrate complex. Soils on such complexes that differ in development as a function of time represent a soil chronosequence. In a soil chronosequence, time constitutes the only independent formation factor; the other factors act through time. Time dictates the variations in soil development or properties (field or laboratory measured) on a soil/landform/substrate complex. Using a dataset within the chronosequence model, we can also formulate various soil development indices based upon one or a combination of soil properties, either for individual soil horizons or for an entire profile. When we evaluate soil data or soil indices mathematically, the resulting equation creates a chronofunction. Chronofunctions help quantify processes and mechanisms involved in soil development, and relate them mathematically to time. These rigorous kinds of comparisons among and within soil/landform complexes constitute an important tool for relative-age dating. After determining one or more absolute ages for a soil/landform complex, we can calculate quantitative soil formation, and or landform-development rates. Multiple dates for several complexes allow rate calculations for soil/landform-chronosequence development and soil-chronofunction calibration.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel Fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
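The operator-splitting idea behind such fractional-step schemes can be illustrated on a toy two-state Markov generator. This is a sketch of the Lie-Trotter product formula, not the paper's parallel KMC code; the generators and rates are invented:

```python
import numpy as np

# Split a Markov generator L = L1 + L2 by reaction channel and compare the
# Lie-Trotter product (e^{t L1/n} e^{t L2/n})^n with the exact e^{t L}.
def expm(A, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for tiny A)."""
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        E += term
    return E

L1 = np.array([[-1.0, 1.0], [0.0, 0.0]])   # channel 1: state 0 -> 1, rate 1
L2 = np.array([[0.0, 0.0], [2.0, -2.0]])   # channel 2: state 1 -> 0, rate 2
t, n = 1.0, 200

exact = expm(t * (L1 + L2))
step = expm(t / n * L1) @ expm(t / n * L2)
trotter = np.linalg.matrix_power(step, n)

# Rows of a transition matrix sum to 1, and the splitting error is O(t^2/n),
# vanishing as the number of fractional steps n grows.
print(np.abs(exact - trotter).max())
```

The paper's contribution is to organize such splittings by spatial domain so that the factors can be applied on different processors; the convergence mechanism, though, is this same Trotter argument.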
NASA Astrophysics Data System (ADS)
Bovier, Anton
2006-06-01
Our mathematical understanding of the statistical mechanics of disordered systems is going through a period of stunning progress. This self-contained book is a graduate-level introduction for mathematicians and for physicists interested in the mathematical foundations of the field, and can be used as a textbook for a two-semester course on mathematical statistical mechanics. It assumes only basic knowledge of classical physics and, on the mathematics side, a good working knowledge of graduate-level probability theory. The book starts with a concise introduction to statistical mechanics, proceeds to disordered lattice spin systems, and concludes with a presentation of the latest developments in the mathematical understanding of mean-field spin glass models. In particular, recent progress towards a rigorous understanding of the replica symmetry-breaking solutions of the Sherrington-Kirkpatrick spin glass models, due to Guerra, Aizenman-Sims-Starr and Talagrand, is reviewed in some detail. The book offers a comprehensive introduction to an active and fascinating area of research, with a clear exposition that builds to the state of the art in the mathematics of spin glasses, written by a well-known and active researcher in the field.
Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.
Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P
2018-02-01
This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations
Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.
2015-01-01
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
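The role of design-space discretization can be sketched with a small example. The code below uses the classical multiplicative weight-update algorithm rather than the SDP/NLP solvers the paper employs, and the quadratic model and grid are illustrative:

```python
import numpy as np

# D-optimal design on a discretized design space: maximize log det of the
# information matrix M(w) = F^T diag(w) F over design weights w.
xs = np.linspace(-1.0, 1.0, 201)                   # discretized design space
F = np.vstack([np.ones_like(xs), xs, xs**2]).T     # quadratic regression model
n_pts, p = F.shape

w = np.full(n_pts, 1.0 / n_pts)                    # start from a uniform design
for _ in range(5000):
    M = F.T @ (w[:, None] * F)                     # information matrix M(w)
    d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)  # variance function
    w *= d / p                                     # monotone multiplicative update

# For quadratic regression on [-1, 1], the D-optimal design puts weight 1/3
# on each of -1, 0, 1; the iterate concentrates near those three points.
print(w[np.abs(xs) < 0.1].sum(), w[xs > 0.9].sum())
```

Because the update only reweights grid points, the quality of the answer depends on the grid containing (or closely bracketing) the true support points, which is exactly the discretization effect the paper evaluates.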
ERIC Educational Resources Information Center
Roschelle, Jeremy; Murphy, Robert; Feng, Mingyu; Bakia, Marianne
2017-01-01
In a rigorous evaluation of ASSISTments as an online homework support conducted in the state of Maine, SRI International reported that "the intervention significantly increased student scores on an end-of-the-year standardized mathematics assessment as compared with a control group that continued with existing homework practices."…
ERIC Educational Resources Information Center
HARDWICK, ARTHUR LEE
At this workshop of industrial representatives and technical educators, a technician was defined as one with broad-based mathematical and scientific training and with competence to support professional systems, engineering, and other scientific personnel. He should receive a rigorous, 2-year, post-secondary education especially designed for his…
What Can Graph Theory Tell Us about Word Learning and Lexical Retrieval?
ERIC Educational Resources Information Center
Vitevitch, Michael S.
2008-01-01
Purpose: Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of…
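A minimal sketch of the graph-theoretic approach: build a toy phonological-neighbor network in which words differing by one segment (here approximated by one letter edit) are connected, then read off each word's neighborhood. The wordlist is illustrative, not the study's lexicon:

```python
# Connect words that differ by a single substitution, addition, or deletion,
# the standard neighbor definition in phonological-network studies.
words = ["cat", "bat", "hat", "cut", "cab", "at", "dog", "cot"]

def neighbors(a, b):
    """True if b can be reached from a by exactly one edit."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    short, long_ = sorted((a, b), key=len)
    # deleting one character from the longer word must yield the shorter
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

graph = {w: [v for v in words if v != w and neighbors(w, v)] for w in words}
print(graph["cat"])   # a densely connected hub
print(graph["dog"])   # an isolated word with no neighbors in this list
```

Degree, clustering, and path length computed on such a graph are the quantities this line of work relates to word learning and retrieval.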
ERIC Educational Resources Information Center
Stone, James R., III; Alfeld, Corinne; Pearson, Donna
2008-01-01
Numerous high school students, including many who are enrolled in career and technical education (CTE) courses, do not have the math skills necessary for today's high-skill workplace or college entrance requirements. This study tests a model for enhancing mathematics instruction in five high school CTE programs (agriculture, auto technology,…
Slow off the Mark: Elementary School Teachers and the Crisis in STEM Education
ERIC Educational Resources Information Center
Epstein, Diana; Miller, Raegen T.
2011-01-01
Prospective teachers can typically obtain a license to teach elementary school without taking a rigorous college-level STEM class such as calculus, statistics, or chemistry, and without demonstrating a solid grasp of mathematics knowledge, scientific knowledge, or the nature of scientific inquiry. This is not a recipe for ensuring students have…
ERIC Educational Resources Information Center
OECD Publishing, 2017
2017-01-01
What is important for citizens to know and be able to do? The OECD Programme for International Student Assessment (PISA) seeks to answer that question through the most comprehensive and rigorous international assessment of student knowledge and skills. The PISA 2015 Assessment and Analytical Framework presents the conceptual foundations of the…
Integrated model development for liquid fueled rocket propulsion systems
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
As detailed in the original statement of work, the objective of phase two of this research effort was to develop a general framework for rocket engine performance prediction that integrates physical principles, a rigorous mathematical formalism, component level test data, system level test data, and theory-observation reconciliation. Specific phase two development tasks are defined.
High School Graduation Requirements in a Time of College and Career Readiness. CSAI Report
ERIC Educational Resources Information Center
Center on Standards and Assessments Implementation, 2016
2016-01-01
Ensuring that students graduate high school prepared for college and careers has become a national priority in the last decade. To support this goal, states have adopted rigorous college and career readiness (CCR) standards in English language arts (ELA) and mathematics. Additionally, states have begun to require students to pass assessments, in…
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about the validity of scientific theories from philosophical discussions into rigorous mathematical calculations.
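A minimal numerical illustration of this idea (my example, not the author's): compare the marginal likelihood of a sharply falsifiable coin model against that of a maximally permissive one. The permissive model, which can accommodate any outcome, pays an automatic Occam penalty in its evidence.

```python
# Toy Bayesian model selection. Model A (falsifiable): the coin is exactly
# fair. Model B (hard to falsify): the bias p is uniform on [0, 1], so B
# can "explain" any data. The data (15 heads in 20 flips) are invented.
from math import comb

n, k = 20, 15

# Evidence of Model A: binomial likelihood at p = 0.5.
evidence_A = comb(n, k) * 0.5**n

# Evidence of Model B: the uniform-prior marginal likelihood integrates
# exactly to 1/(n + 1) (a beta-binomial identity).
evidence_B = 1 / (n + 1)

bayes_factor = evidence_B / evidence_A
print(round(bayes_factor, 2))  # 3.22: mild evidence against the fair coin
```

With unsurprising data (e.g. 10 heads in 20) the factor would flip below 1: the sharper, more falsifiable model is rewarded whenever its risky prediction survives.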
Using Teacher Evaluation Reform and Professional Development to Support Common Core Assessments
ERIC Educational Resources Information Center
Youngs, Peter
2013-01-01
The Common Core State Standards Initiative, in its aim to align diverse state curricula and improve educational outcomes, calls for K-12 teachers in the United States to engage all students in mathematical problem solving along with reading and writing complex text through the use of rigorous academic content. Until recently, most teacher…
Problem solving in the borderland between mathematics and physics
NASA Astrophysics Data System (ADS)
Jensen, Jens Højgaard; Niss, Martin; Jankvist, Uffe Thomas
2017-01-01
The article addresses the problématique of where mathematization is taught in the educational system, and who teaches it. Mathematization is usually not a part of mathematics programs at the upper secondary level, but we argue that physics teaching has something to offer in this respect, if it focuses on solving so-called unformalized problems, where a major challenge is to formalize the problems in mathematical and physical terms. We analyse four concrete examples of unformalized problems whose formalization involves different orders of mathematizing and applying physics to the problem, but all of which require mathematization. The analysis leads to the formulation of a model by which we attempt to capture the important steps of the process of solving unformalized problems by means of mathematization and physicalization.
Nonlinear and Digital Man-machine Control Systems Modeling
NASA Technical Reports Server (NTRS)
Mekel, R.
1972-01-01
An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics so as to take the identification dynamics into consideration. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the parameters of the human operator's mathematical model so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
Imbedded-Fracture Formulation of THMC Processes in Fractured Media
NASA Astrophysics Data System (ADS)
Yeh, G. T.; Tsai, C. H.; Sung, R.
2016-12-01
Fractured media consist of porous materials and fracture networks. There exist four approaches to mathematically formulating THMC (Thermal-Hydrology-Mechanics-Chemistry) process models in such systems: (1) Equivalent Porous Media, (2) Dual Porosity or Dual Continuum, (3) Heterogeneous Media, and (4) Discrete Fracture Network. The first approach cannot explicitly explore the interactions between porous materials and fracture networks. The second approach introduces too many extra parameters (namely, exchange coefficients) between the two media. The third approach may make the problems too stiff because the degree of material heterogeneity may be too great. The fourth approach ignores the interaction between porous materials and fracture networks. This talk presents an alternative approach in which fracture networks are modeled with a lower dimension than the surrounding porous materials. Theoretical derivation of the mathematical formulations will be given. An example will be illustrated to show the feasibility of this approach.
An advanced model of heat and mass transfer in the protective clothing - verification
NASA Astrophysics Data System (ADS)
Łapka, P.; Furmański, P.
2016-09-01
The paper presents advanced mathematical and numerical models of heat and mass transfer in multi-layer protective clothing and in elements of the experimental stand subjected to either a high surroundings temperature or a high radiative heat flux emitted by hot objects. The model includes conductive-radiative heat transfer in the hygroscopic porous fabrics and air gaps as well as conductive heat transfer in components of the stand. Additionally, water vapour diffusion in the pores and air spaces as well as the phase transition of the bound water in the fabric fibres (sorption and desorption) were accounted for. The thermal radiation was treated in a rigorous way, e.g., the semi-transparent absorbing, emitting and scattering fabrics were assumed to be non-grey, and all optical phenomena at internal and external walls were modelled. The air was assumed transparent. Complex energy and mass balances as well as optical conditions at internal and external interfaces were formulated in order to find exact values of the temperatures, vapour densities and radiation intensities at these interfaces. The resulting highly non-linear coupled system of discrete equations was solved by an in-house iterative algorithm based on the Finite Volume Method. The model was then partially verified against results obtained from commercial software for simplified cases.
On a difficulty in eigenfunction expansion solutions for the start-up of fluid flow
NASA Astrophysics Data System (ADS)
Christov, Ivan C.
2015-11-01
Most mathematics and engineering textbooks describe the process of ``subtracting off'' the steady state of a linear parabolic partial differential equation as a technique for obtaining a boundary-value problem with homogeneous boundary conditions that can be solved by separation of variables (i.e., eigenfunction expansions). While this method produces the correct solution for the start-up of the flow of, e.g., a Newtonian fluid between parallel plates, it can lead to erroneous solutions to the corresponding problem for a class of non-Newtonian fluids. We show that the reason for this is the non-rigorous enforcement of the start-up condition in the textbook approach, which leads to a violation of the principle of causality. Nevertheless, these boundary-value problems can be solved correctly using eigenfunction expansions, and we present the formulation that makes this possible (in essence, an application of Duhamel's principle). The solutions obtained by this new approach are shown to agree identically with those obtained by using the Laplace transform in time only, a technique that enforces the proper start-up condition implicitly (hence, the same error cannot be committed). Supported, in part, by NSF Grant DMS-1104047 and the U.S. DOE (Contract No. DE-AC52-06NA25396) through the LANL/LDRD Program.
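The textbook step the abstract critiques can be made concrete for the Newtonian case it mentions (notation mine, a generic illustration rather than the paper's equations). Start-up flow between parallel plates satisfies

```latex
u_t = \nu\, u_{yy}, \qquad u(0,t) = 0, \quad u(h,t) = V, \quad u(y,0) = 0 .
```

Subtracting the steady state $u_s(y) = V y / h$, i.e. writing $u = u_s + v$, leaves

```latex
v_t = \nu\, v_{yy}, \qquad v(0,t) = v(h,t) = 0, \qquad v(y,0) = -u_s(y) ,
```

a homogeneous problem solvable by a Fourier sine series. For the Newtonian fluid this is harmless; the abstract's point is that for certain non-Newtonian models the analogous step silently misstates the start-up condition, and a Duhamel-type formulation is needed instead.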
Jia, Jianhua; Liu, Zi; Xiao, Xuan; Liu, Bingxiang; Chou, Kuo-Chen
2016-04-07
Being one type of post-translational modification (PTM), protein lysine succinylation is important in regulating a variety of biological processes; it is also involved in some diseases. Consequently, from the angles of both basic research and drug development, we face a challenging problem: for an uncharacterized protein sequence containing many Lys residues, which ones can be succinylated, and which ones cannot? To address this problem, we have developed a predictor called pSuc-Lys through (1) incorporating the sequence-coupled information into the general pseudo amino acid composition, (2) balancing out the skewed training dataset by random sampling, and (3) constructing an ensemble predictor by fusing a series of individual random forest classifiers. Rigorous cross-validations indicated that it remarkably outperformed the existing methods. A user-friendly web-server for pSuc-Lys has been established at http://www.jci-bioinfo.cn/pSuc-Lys, by which users can easily obtain their desired results without the need to go through the complicated mathematical equations involved. It has not escaped our notice that the formulation and approach presented here can also be used to analyze many other problems in computational proteomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Statistical shear lag model - unraveling the size effect in hierarchical composites.
Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D
2015-05-01
Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
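The classical weakest-link size effect that the staggered architecture evades can be sketched with a short Monte Carlo experiment. This is a generic illustration of the statistical baseline, not the paper's model; the Weibull shape and scale parameters are invented.

```python
# Weakest-link size effect: a longer specimen behaves as a chain of more
# statistically independent links, so its strength is the minimum of more
# Weibull draws and its mean strength drops with length. Parameters are
# illustrative only.
import random

random.seed(0)

def specimen_strength(n_links, scale=1.0, shape=5.0):
    # random.weibullvariate(alpha, beta): alpha = scale, beta = shape.
    return min(random.weibullvariate(scale, shape) for _ in range(n_links))

short = [specimen_strength(10) for _ in range(2000)]
long_ = [specimen_strength(100) for _ in range(2000)]

mean = lambda xs: sum(xs) / len(xs)
print(mean(short) > mean(long_))  # True: longer specimens are weaker on average
```

The abstract's point is that staggered hierarchical composites deviate from exactly this baseline scaling, which is why yarn gauge length barely affects strength.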
Dynamic crack propagation in a 2D elastic body: The out-of-plane case
NASA Astrophysics Data System (ADS)
Nicaise, Serge; Sandig, Anna-Margarete
2007-05-01
As early as 1920, Griffith formulated an energy balance criterion for quasistatic crack propagation in brittle elastic materials. Nowadays, a generalized energy balance law is used in mechanics [F. Erdogan, Crack propagation theories, in: H. Liebowitz (Ed.), Fracture, vol. 2, Academic Press, New York, 1968, pp. 498-586; L.B. Freund, Dynamic Fracture Mechanics, Cambridge Univ. Press, Cambridge, 1990; D. Gross, Bruchmechanik, Springer-Verlag, Berlin, 1996] in order to predict how a running crack will grow. We discuss this situation in a rigorous mathematical way for the out-of-plane state. This model is described by two coupled equations in the reference configuration: a two-dimensional scalar wave equation for the displacement fields in a cracked bounded domain and an ordinary differential equation for the crack position derived from the energy balance law. We handle both equations separately, assuming at first that the crack position is known. Then the weak and strong solvability of the wave equation will be studied and the crack tip singularities will be derived under the assumption that the crack is straight and moves tangentially. Using the energy balance law and the crack tip behavior of the displacement fields we finally arrive at an ordinary differential equation for the motion of the crack tip.
Covariant path integrals on hyperbolic surfaces
NASA Astrophysics Data System (ADS)
Schaefer, Joe
1997-11-01
DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).
Mishchenko, Michael I
2017-10-01
The majority of previous studies of the interaction of individual particles and multi-particle groups with an electromagnetic field have focused on either elastic scattering in the presence of an external field or self-emission of electromagnetic radiation. In this paper we apply semi-classical fluctuational electrodynamics to address the ubiquitous scenario wherein a fixed particle or a fixed multi-particle group is exposed to an external quasi-polychromatic electromagnetic field and also thermally emits its own electromagnetic radiation. We summarize the main relevant axioms of fluctuational electrodynamics, formulate in maximally rigorous mathematical terms the general scattering-emission problem for a fixed object, and derive such fundamental corollaries as the scattering-emission volume integral equation, the Lippmann-Schwinger equation for the dyadic transition operator, the multi-particle scattering-emission equations, and the far-field limit. We show that in the framework of fluctuational electrodynamics, the computation of the self-emitted component of the total field is completely separated from that of the elastically scattered field. The same is true of the computation of the emitted and elastically scattered components of quadratic/bilinear forms in the total electromagnetic field. These results pave the way to the practical computation of relevant optical observables.
Cosmopolitanism and Peace in Kant's Essay on "Perpetual Peace"
ERIC Educational Resources Information Center
Huggler, Jorgen
2010-01-01
Immanuel Kant's essay on Perpetual Peace (1795/96) contains a rejection of the idea of a world government (earlier advocated by Kant himself). In connexion with a substantial argument for cosmopolitan rights based on the human body and its need for a space on the surface of the Earth, Kant presents the most rigorous philosophical formulation ever…
Teaching Students to Formulate Questions
ERIC Educational Resources Information Center
Jensen-Vallin, Jacqueline
2017-01-01
As STEM educators, we know it is beneficial to train students to think critically and mathematically during their early mathematical lives. To this end, the author teaches the College Algebra/Precalculus course in a flipped classroom version of an inquiry-based learning style. However, the techniques described in this paper can be applied to a…
The Mathematics of High School Physics: Models, Symbols, Algorithmic Operations and Meaning
ERIC Educational Resources Information Center
Kanderakis, Nikos
2016-01-01
In the seventeenth and eighteenth centuries, mathematicians and physical philosophers managed to study, via mathematics, various physical systems of the sublunar world through idealized and simplified models of these systems, constructed with the help of geometry. By analyzing these models, they were able to formulate new concepts, laws and…
Great Lakes modeling: Are the mathematics outpacing the data and our understanding of the system?
Mathematical modeling in the Great Lakes has come a long way from the pioneering work done by Manhattan College in the 1970s, when the models operated on coarse computational grids (often lake-wide) and used simple eutrophication formulations. Moving forward 40 years, we are now...
The Force-Frequency Relationship: Insights from Mathematical Modeling
ERIC Educational Resources Information Center
Puglisi, Jose L.; Negroni, Jorge A.; Chen-Izu, Ye; Bers, Donald M.
2013-01-01
The force-frequency relationship has intrigued researchers since its discovery by Bowditch in 1871. Many attempts have been made to construct mathematical descriptions of this phenomenon, beginning with the simple formulation of Koch-Wesser and Blinks in 1963 to the most sophisticated ones of today. This property of cardiac muscle is amplified by…
Watching Sandy's Understanding Grow.
ERIC Educational Resources Information Center
Pirie, Susan E. B.; Kieren, Thomas E.
1992-01-01
Reviews recent research in the area of mathematical understanding and compares and contrasts it with a model formulated for the growth of understanding. Uses the analysis of a transcript from an interview with an eight-year-old boy to illustrate the power of the model to describe and map the growth of his mathematical understanding. (18…
The Effects of Mathematical Modelling on Students' Achievement-Meta-Analysis of Research
ERIC Educational Resources Information Center
Sokolowski, Andrzej
2015-01-01
Using meta-analytic techniques this study examined the effects of applying mathematical modelling to support student math knowledge acquisition at the high school and college levels. The research encompassed experimental studies published in peer-reviewed journals between January 1, 2000, and February 27, 2013. Such formulated orientation called…
NASA Technical Reports Server (NTRS)
Sadler, S. G.
1972-01-01
A mathematical model and computer program were implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. The theoretical formulation and analysis of results are presented.
A chance constraint estimation approach to optimizing resource management under uncertainty
Michael Bevers
2007-01-01
Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...
Variational formulation for Black-Scholes equations in stochastic volatility models
NASA Astrophysics Data System (ADS)
Gyulov, Tihomir B.; Valkov, Radoslav L.
2012-11-01
In this note we prove existence and uniqueness of weak solutions to a boundary value problem arising from stochastic volatility models in financial mathematics. Our settings are variational in weighted Sobolev spaces. Nevertheless, as it will become apparent our variational formulation agrees well with the stochastic part of the problem.
NASA Astrophysics Data System (ADS)
Solie, D. J.; Spencer, V.
2009-12-01
Bush Physics for the 21st Century brings physics that is culturally connected, engaging to modern youth, and mathematically rigorous to high school and college students in the remote and often road-less villages of Alaska. The primary goal of the course is to prepare rural (predominantly Alaska Native) students for success in university science and engineering degree programs and ultimately STEM careers. The course is currently delivered via video conference and a web-based electronic blackboard tailored to the needs of remote students. Practical, culturally relevant kinetic examples from traditional and modern northern life are used to engage students, and a rigorous mathematical focus is stressed to strengthen problem-solving skills. Simple hands-on lab experiments are delivered to the students, with the exercises completed online. In addition, students are teamed and required to perform a much more involved experimental study, with the results presented by teams at the conclusion of the course. Connecting abstract mathematical symbols and equations to real physical objects and problems is one of the most difficult things to master in physics. Greek symbols are traditionally used in equations; however, to strengthen the visual/conceptual connection with each symbol and to encourage an indigenous connection to the concepts, we have introduced Inuktitut symbols to complement the traditional Greek symbols. Results and observations from the first two pilot semesters (spring 2008 and 2009) will be presented.
NASA Astrophysics Data System (ADS)
Solie, D. J.; Spencer, V. K.
2010-12-01
Bush Physics for the 21st Century brings physics that is engaging to modern youth, and mathematically rigorous, to high school and college students in the remote and often road-less villages of Alaska where the opportunity to take a physics course has been nearly nonexistent. The primary goal of the course is to prepare rural (predominantly Alaska Native) students for success in university science and engineering degree programs and ultimately STEM careers. The course is delivered via video conference and web based electronic blackboard tailored to the needs of remote students. Kinetic, practical and culturally relevant place-based examples from traditional and modern northern life are used to engage students, and a rigorous and mathematical focus is stressed to strengthen problem solving skills. Simple hands-on-lab experiment kits are shipped to the students. In addition students conduct a Collaborative Research Experiment where they coordinate times of sun angle measurements with teams in other villages to determine their latitude and longitude as well as an estimate of the circumference of the earth. Connecting abstract mathematical symbols and equations to real physical objects and problems is one of the most difficult things to master in physics. We introduce Inuktitut symbols to complement the traditional Greek symbols in equations to strengthen the visual/conceptual connection with symbol and encourage an indigenous connection to the physical concepts. Results and observations from the first three pilot semesters (spring 2008, 2009 and 2010) will be presented.
A Mathematical Account of the NEGF Formalism
NASA Astrophysics Data System (ADS)
Cornean, Horia D.; Moldoveanu, Valeriu; Pillet, Claude-Alain
2018-02-01
The main goal of this paper is to put on solid mathematical grounds the so-called Non-Equilibrium Green's Function (NEGF) transport formalism for open systems. In particular, we derive the Jauho-Meir-Wingreen formula for the time-dependent current through an interacting sample coupled to non-interacting leads. Our proof is non-perturbative and uses neither complex-time Keldysh contours, nor Langreth rules of 'analytic continuation'. We also discuss other technical identities (Langreth, Keldysh) involving various many body Green's functions. Finally, we study the Dyson equation for the advanced/retarded interacting Green's function and we rigorously construct its (irreducible) self-energy, using the theory of Volterra operators.
NASA Astrophysics Data System (ADS)
Riendeau, Diane
2012-09-01
To date, this column has presented videos to show in class. Don Mathieson from Tulsa Community College suggested that YouTube could be used in another fashion. In Don's experience, his students are not always prepared for the mathematical rigor of his course. Even at the high school level, math can be a barrier for physics students. Walid Shihabi, a colleague of Don's, decided to compile a list of YouTube videos that his students could watch to relearn basic mathematics. I thought this sounded like a fantastic idea and a great service to the students. Walid graciously agreed to share his list and I have reproduced a large portion of it below.
Differential equations with applications in cancer diseases.
Ilea, M; Turnea, M; Rotariu, M
2013-01-01
Mathematical modeling is a process by which a real-world problem is described by a mathematical formulation. Cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer. The vast majority of mathematical models in cancer biology are formulated in terms of differential equations. We propose an original mathematical model with a small parameter for the interactions between two cancer cell sub-populations, together with a mathematical model of a vascular tumor. We work on the assumption that the quiescent cells' nutrient consumption is prolonged. One of the equation systems includes a small parameter epsilon, whose smallness is relative to the size of the solution domain. In MATLAB simulations obtained under this assumption, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, distinct from the independent variable. The graphical output for the mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region. Many mathematical models can be quantitatively characterized by ordinary or partial differential equations. The use of MATLAB in this article illustrates the important role of informatics in mathematical modeling research. The study of avascular tumor growth is an exciting and important topic in cancer research and will profit considerably from theoretical input. We interpret these results as a call for permanent collaboration between mathematicians and medical oncologists.
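The general shape of such a two-population system with a small parameter can be sketched generically. All rate constants, the coupling structure, and the value of eps below are invented for illustration and are not the paper's equations; the sketch only shows how a small parameter makes one compartment evolve on a slower time scale.

```python
# Generic proliferating (P) / quiescent (Q) two-compartment ODE integrated
# with forward Euler. The small parameter eps multiplies the quiescent
# dynamics, making Q the slow variable. Rates are illustrative only.
def simulate(P=0.1, Q=0.0, eps=0.05, r=0.6, a=0.3, b=0.1,
             dt=0.01, steps=5000):
    for _ in range(steps):
        dP = r * P * (1.0 - P - Q) - a * P + b * Q  # logistic growth + exchange
        dQ = eps * (a * P - b * Q)                  # slow quiescent pool
        P, Q = P + dt * dP, Q + dt * dQ
    return P, Q

P, Q = simulate()
print(round(P, 2), round(Q, 2))
```

With eps small, P rapidly equilibrates to the current Q while Q drifts slowly, which is the singular-perturbation structure that asymptotic analyses of such models exploit.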
Chain representations of Open Quantum Systems and Lieb-Robinson like bounds for the dynamics
NASA Astrophysics Data System (ADS)
Woods, Mischa
2013-03-01
This talk is concerned with the mapping of the Hamiltonian of open quantum systems onto chain representations, which forms the basis for a rigorous theory of the interaction of a system with its environment. This mapping proceeds as an interaction which gives rise to a sequence of residual spectral densities of the system. The rigorous mathematical properties of this mapping have been unknown so far. Here we develop the theory of secondary measures to derive an analytic expression for the sequence solely in terms of the initial measure and its associated orthogonal polynomials of the first and second kind. These mappings can be thought of as taking a highly nonlocal Hamiltonian to a local Hamiltonian. In the latter, a Lieb-Robinson-like bound for the dynamics of the open quantum system makes sense. We develop analytical bounds on the error in observables of the system as a function of time when the semi-infinite chain is truncated at some finite length. The fact that this is possible shows that there is a finite ``speed of sound'' in these chain representations. This has many implications for the simulatability of open quantum systems of this type and demonstrates that a truncated chain can faithfully reproduce the dynamics at shorter times. These results make a significant and mathematically rigorous contribution to the understanding of the theory of open quantum systems, and pave the way towards the efficient simulation of these systems, which, within the standard methods, is often an intractable problem. EPSRC CDT in Controlled Quantum Dynamics, EU STREP project and Alexander von Humboldt Foundation
ERIC Educational Resources Information Center
Eisenhart, Margaret; Weis, Lois; Allen, Carrie D.; Cipollone, Kristin; Stich, Amy; Dominguez, Rachel
2015-01-01
In response to numerous calls for more rigorous STEM (science, technology, engineering, and mathematics) education to improve US competitiveness and the job prospects of next-generation workers, especially those from low-income and minority groups, a growing number of schools emphasizing STEM have been established in the US over the past decade.…
ERIC Educational Resources Information Center
van der Scheer, Emmelien A.; Visscher, Adrie J.
2018-01-01
Data-based decision making (DBDM) is an important element of educational policy in many countries, as it is assumed that student achievement will improve if teachers worked in a data-based way. However, studies that evaluate rigorously the effects of DBDM on student achievement are scarce. In this study, the effects of an intensive…
ERIC Educational Resources Information Center
Randel, Bruce; Beesley, Andrea D.; Apthorp, Helen; Clark, Tedra F.; Wang, Xin; Cicchinelli, Louis F.; Williams, Jean M.
2011-01-01
This study was conducted by the Central Region Educational Laboratory (REL Central) administered by Mid-continent Research for Education and Learning to provide educators and policymakers with rigorous evidence about the potential of Classroom Assessment for Student Learning (CASL) to improve student achievement. CASL is a widely used professional…
How PARCC's False Rigor Stunts the Academic Growth of All Students. White Paper No. 135
ERIC Educational Resources Information Center
McQuillan, Mark; Phelps, Richard P.; Stotsky, Sandra
2015-01-01
In July 2010, the Massachusetts Board of Elementary and Secondary Education (BESE) voted to adopt Common Core's standards in English language arts (ELA) and mathematics in place of the state's own standards in these two subjects. The vote was based largely on recommendations by Commissioner of Education Mitchell Chester and then Secretary of…
ERIC Educational Resources Information Center
Courtade, Ginevra R.; Shipman, Stacy D.; Williams, Rachel
2017-01-01
SPLASH is a 3-year professional development program designed to work with classroom teachers of students with moderate and severe disabilities. The program targets new teachers and employs methods aimed at supporting rural classrooms. The training content focuses on evidence-based practices in English language arts, mathematics, and science, as…
Results of the Salish Projects: Summary and Implications for Science Teacher Education
ERIC Educational Resources Information Center
Yager, Robert E.; Simmons, Patricia
2013-01-01
Science teaching and teacher education in the U.S.A. have been of great national interest recently due to a severe shortage of science (and mathematics) teachers who do not hold strong qualifications in their fields of study. Unfortunately we lack a rigorous research base that helps inform solid practices about various models or elements of…
ERIC Educational Resources Information Center
Stoneberg, Bert D.
2015-01-01
The National Center of Education Statistics conducted a mapping study that equated the percentage proficient or above on each state's NCLB reading and mathematics tests in grades 4 and 8 to the NAEP scale. Each "NAEP equivalent score" was labeled according to NAEP's achievement levels and used to compare state proficiency standards and…
ERIC Educational Resources Information Center
Amador-Lankster, Clara
2018-01-01
The purpose of this article is to discuss a Fulbright Evaluation Framework and to analyze findings resulting from implementation of two contextualized measures designed as LEARNING BY DOING in response to achievement expectations from the National Education Ministry in Colombia in three areas. The goal of the Fulbright funded project was to…
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management, and environmental studies, as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
Jones index, secret sharing and total quantum dimension
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter; Osborne, Tobias J.
2017-02-01
We study the total quantum dimension in the thermodynamic limit of topologically ordered systems. In particular, using the anyons (or superselection sectors) of such models, we define a secret sharing scheme, storing information invisible to a malicious party, and argue that the total quantum dimension quantifies how well we can perform this task. We then argue that this can be made mathematically rigorous using the index theory of subfactors, originally due to Jones and later extended by Kosaki and Longo. This theory provides us with a ‘relative entropy’ of two von Neumann algebras and a quantum channel, and we argue how these can be used to quantify how much classical information two parties can hide from an adversary. We also review the total quantum dimension in finite systems, in particular how it relates to topological entanglement entropy. It is known that the latter also has an interpretation in terms of secret sharing schemes, although this is shown by completely different methods from ours. Our work provides a different and independent take on this, which at the same time is completely mathematically rigorous. This complementary point of view might be beneficial, for example, when studying the stability of the total quantum dimension when the system is perturbed.
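For orientation, the standard relations between the total quantum dimension, the individual anyon dimensions, and the topological entanglement entropy (standard in the literature, though not spelled out in the abstract) read:

```latex
% Total quantum dimension D from the quantum dimensions d_a of the
% anyons a, and the Kitaev--Preskill topological entanglement entropy:
\mathcal{D} \;=\; \sqrt{\textstyle\sum_a d_a^{\,2}}\,, \qquad
\gamma \;=\; \log \mathcal{D}.
% For N abelian anyons, d_a = 1 for every a, so D = sqrt(N).
```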
Mathematics Education and the Objectivist Programme in HPS
NASA Astrophysics Data System (ADS)
Glas, Eduard
2013-06-01
Using history of mathematics for studying concepts, methods, problems and other internal features of the discipline may give rise to a certain tension between descriptive adequacy and educational demands. Other than historians, educators are concerned with mathematics as a normatively defined discipline. Teaching cannot but be based on a pre-understanding of what mathematics `is' or, in other words, on a normative (methodological, philosophical) view of the identity or nature of the discipline. Educators are primarily concerned with developments at the level of objective mathematical knowledge, that is: with the relations between successive theories, problems and proposed solutions—relations which are independent of whatever has been the role of personal or collective beliefs, convictions, traditions and other historical circumstances. Though not exactly `historical' in the usual sense, I contend that this `objectivist' approach does represent one among other entirely legitimate and valuable approaches to the historical development of mathematics. Its retrospective importance to current practitioners and students is illustrated by a reconstruction of the development of Eudoxus's theory of proportionality in response to the problem of irrationality, and the way in which Dedekind some two millennia later almost literally used this ancient theory for the rigorous introduction of irrational numbers and hence of the real number continuum.
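As a reminder of the construction the abstract alludes to, Dedekind's rigorous introduction of the irrationals can be sketched as follows (one of several equivalent formulations of a cut):

```latex
% A (one-sided) Dedekind cut is a set A of rationals that is nonempty,
% proper, downward closed, and has no greatest element; the reals are
% then defined as the set of all such cuts.
\emptyset \neq A \subsetneq \mathbb{Q}, \qquad
q \in A \wedge p < q \;\Rightarrow\; p \in A, \qquad
\forall q \in A\ \exists r \in A:\ q < r.
% Example: \sqrt{2} corresponds to A = \{ q \in \mathbb{Q} :
%   q < 0 \ \text{or}\ q^2 < 2 \}.
```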
Alternative mathematical programming formulations for FSS synthesis
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.
1986-01-01
A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.
Simic, Vladimir
2016-06-01
As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations. The proposed model can incorporate various uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. Particularly, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potentials and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating system constraints. The formulated model is able to tackle a hard ELV management problem involving uncertainty. The presented model has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans.
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer models but provides important two-dimensional information not available using quasi-1-D approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. Models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are good overall. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements, and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
ERIC Educational Resources Information Center
Sriraman, Bharath
2003-01-01
Nine freshmen in a ninth-grade accelerated algebra class were asked to solve five nonroutine combinatorial problems. The four mathematically gifted students were successful in discovering and verbalizing the generality that characterized the solutions to the five problems, whereas the five nongifted students were unable to discover the hidden…
Computer-Aided Assessment Questions in Engineering Mathematics Using "MapleTA"[R]
ERIC Educational Resources Information Center
Jones, I. S.
2008-01-01
The use of "MapleTA"[R] in the assessment of engineering mathematics at Liverpool John Moores University (JMU) is discussed with particular reference to the design of questions. Key aspects in the formulation and coding of questions are considered. Problems associated with the submission of symbolic answers, the use of randomly generated numbers…
ERIC Educational Resources Information Center
Perry, Bob; Gervasoni, Ann; Dockett, Sue
2012-01-01
The "Let's Count" pilot early mathematics program was implemented in five early childhood educational contexts across Australia during 2011. The program used specifically formulated materials and workshops to enlist the assistance of early childhood educators to work with parents and other family members of children in their settings to…
Challenges of Blended E-Learning Tools in Mathematics: Students' Perspectives University of Uyo
ERIC Educational Resources Information Center
Umoh, Joseph B.; Akpan, Ekemini T.
2014-01-01
An in-depth knowledge of pedagogical approaches can help improve the formulation of effective and efficient pedagogy, tools and technology to support and enhance the teaching and learning of Mathematics in higher institutions. This study investigated students' perceptions of the challenges of blended e-learning tools in the teaching and learning…
NASA Astrophysics Data System (ADS)
Moretti, Valter; Oppio, Marco
As earlier conjectured by several authors and much later established by Solèr (relying on partial results by Piron, Maeda-Maeda and other authors), from the lattice theory point of view, Quantum Mechanics may be formulated in real, complex or quaternionic Hilbert spaces only. Stückelberg provided some physical, but not mathematically rigorous, reasons for ruling out the real Hilbert space formulation, assuming that any formulation should encompass a statement of Heisenberg principle. Focusing on this issue from another — in our opinion, deeper — viewpoint, we argue that there is a general fundamental reason why elementary quantum systems are not described in real Hilbert spaces. It is their basic symmetry group. In the first part of the paper, we consider an elementary relativistic system within Wigner’s approach defined as a locally-faithful irreducible strongly-continuous unitary representation of the Poincaré group in a real Hilbert space. We prove that, if the squared-mass operator is non-negative, the system admits a natural, Poincaré invariant and unique up to sign, complex structure which commutes with the whole algebra of observables generated by the representation itself. This complex structure leads to a physically equivalent reformulation of the theory in a complex Hilbert space. Within this complex formulation, differently from what happens in the real one, all selfadjoint operators represent observables in accordance with Solèr’s thesis, and the standard quantum version of Noether theorem may be formulated. In the second part of this work, we focus on the physical hypotheses adopted to define a quantum elementary relativistic system relaxing them on the one hand, and making our model physically more general on the other hand. 
We use a physically more accurate notion of irreducibility regarding the algebra of observables only, we describe the symmetries in terms of automorphisms of the restricted lattice of elementary propositions of the quantum system, and we adopt a notion of continuity referring to the states viewed as probability measures on the elementary propositions. Also in this case, the final result proves that there exists a unique (up to sign) Poincaré invariant complex structure making the theory complex and completely fitting into Solèr’s picture. This complex structure reveals a nice interplay of Poincaré symmetry and the classification of the commutant of irreducible real von Neumann algebras.
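Schematically, the complex structure at the heart of both results is an orthogonal operator J on the real Hilbert space whose square is minus the identity; sign conventions vary, so the following is only a sketch of the standard construction:

```latex
% J orthogonal with J^2 = -I; complex scalar multiplication and a
% Hermitian inner product are then defined on the same vector space by
(a + i b)\psi \;:=\; a\psi + b J\psi, \qquad
\langle \psi, \varphi \rangle_{\mathbb{C}}
  \;:=\; \langle \psi, \varphi \rangle \,-\, i \langle \psi, J\varphi \rangle,
% which is linear in the second argument (physics convention).
```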
Resource Management for the Tagged Token Dataflow Architecture.
1985-01-01
…completely rigorous, formulation of the U-interpreter. The graph schemata presented here differ slightly from those presented in the references…
ERIC Educational Resources Information Center
Cozza, Stephen J.; Lerner, Richard M.; Haskins, Ron
2014-01-01
This "Social Policy Report" summarizes what is currently known about our nation's military children and families and presents ideas and proposals pertinent to the formulation of new programs and the policies that would create and sustain these initiatives. We emphasize the need for future rigorous developmental research about military…
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of the Drude weight is directly related to the Kubo formula for the conductivity. However, the difficulty in the evaluation of this expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for this quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of the Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
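For orientation, one standard finite-volume expression for the Drude weight is Kohn's formula; prefactors and sign conventions vary across the literature, so this is a generic sketch rather than the paper's exact definition:

```latex
% Decomposition of the conductivity and Kohn's formula for the Drude
% weight D (L = chain length, \hat{T} = kinetic/hopping term,
% \hat{J} = current operator, |0> the ground state):
\operatorname{Re}\sigma(\omega)
  \;=\; 2\pi D\,\delta(\omega) + \sigma_{\mathrm{reg}}(\omega),
\qquad
D \;=\; \frac{1}{2L}\Big[\langle -\hat{T}\rangle
  \;-\; 2\sum_{n\neq 0}\frac{|\langle n|\hat{J}|0\rangle|^{2}}{E_n - E_0}\Big].
```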
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.
2009-12-01
We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
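To illustrate what a simple top-down partitioning of grid points into batches can look like, here is a minimal sketch: recursive bisection along the longest bounding-box axis at the median. The function name and the median split are assumptions for illustration, not the scheme used in the paper.

```python
# Minimal top-down grid-partitioning sketch: recursively bisect a point
# cloud along its longest bounding-box axis until each batch is small.

def partition(points, max_batch):
    """Split `points` (list of (x, y, z) tuples) into batches of at most
    `max_batch` points by recursive median bisection."""
    if len(points) <= max_batch:
        return [points]
    # Pick the axis along which the bounding box is widest.
    extents = [max(p[d] for p in points) - min(p[d] for p in points)
               for d in range(3)]
    axis = extents.index(max(extents))
    # Split at the median so both halves have roughly equal size.
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return partition(pts[:mid], max_batch) + partition(pts[mid:], max_batch)

# Example: 81 points partitioned into batches of at most 16 points each;
# every point lands in exactly one batch.
batches = partition([(float(x), float(y), 0.0)
                     for x in range(9) for y in range(9)], 16)
```

In a real electronic-structure code the batch size would be tuned against cache and memory limits; the point here is only the top-down recursion.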
Origin of the spike-timing-dependent plasticity rule
NASA Astrophysics Data System (ADS)
Cho, Myoung Won; Choi, M. Y.
2016-08-01
A biological synapse changes its efficacy depending on the difference between pre- and post-synaptic spike timings. Formulating spike-timing-dependent interactions in terms of the path integral, we establish a neural-network model, which makes it possible to predict relevant quantities rigorously by means of standard methods in statistical mechanics and field theory. In particular, the biological synaptic plasticity rule is shown to emerge as the optimal form for minimizing the free energy. It is further revealed that maximization of the entropy of neural activities gives rise to the competitive behavior of biological learning. This demonstrates that statistical mechanics helps to understand rigorously key characteristic behaviors of a neural network, thus providing the possibility of physics serving as a useful and relevant framework for probing life.
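For readers unfamiliar with the rule in question, the commonly used exponential STDP window is sketched below. Note that the paper *derives* the plasticity rule from a free-energy principle rather than postulating this form; the parameter values here are illustrative assumptions.

```python
import math

# Standard exponential STDP window (Bi & Poo-style): potentiation when
# the presynaptic spike precedes the postsynaptic one, depression
# otherwise. A_PLUS, A_MINUS, and the time constants are hypothetical.
A_PLUS, A_MINUS = 1.0, 0.5
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms

def stdp_window(dt):
    """Synaptic weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:      # pre fires before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:    # post fires before pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

The asymmetry of the window (potentiation for causal pre-before-post pairs, depression otherwise) is the qualitative feature the paper recovers as a free-energy minimum.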
IRT Models for Ability-Based Guessing
ERIC Educational Resources Information Center
Martin, Ernesto San; del Pino, Guido; De Boeck, Paul
2006-01-01
An ability-based guessing model is formulated and applied to several data sets regarding educational tests in language and in mathematics. The formulation of the model is such that the probability of a correct guess does not only depend on the item but also on the ability of the individual, weighted with a general discrimination parameter. By so…
Rigorous derivation of porous-media phase-field equations
NASA Astrophysics Data System (ADS)
Schmuck, Markus; Kalliadasis, Serafim
2017-11-01
The evolution of interfaces in Complex heterogeneous Multiphase Systems (CheMSs) plays a fundamental role in a wide range of scientific fields such as thermodynamic modelling of phase transitions, materials science, or as a computational tool for interfacial flow studies or material design. Here, we focus on phase-field equations in CheMSs such as porous media. To the best of our knowledge, we present the first rigorous derivation of error estimates for fourth-order, upscaled, and nonlinear evolution equations. For CheMSs with heterogeneity ε, we obtain the convergence rate ε^{1/4}, which governs the error between the solution of the new upscaled formulation and the solution of the microscopic phase-field problem. This error behaviour has recently been validated computationally. Due to the wide range of application of phase-field equations, we expect this upscaled formulation to allow for new modelling, analytic, and computational perspectives for interfacial transport and phase transformations in CheMSs. This work was supported by EPSRC, UK, through Grant Nos. EP/H034587/1, EP/L027186/1, EP/L025159/1, EP/L020564/1, EP/K008595/1, and EP/P011713/1 and from ERC via Advanced Grant No. 247031.
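The error estimate stated in the abstract has the following schematic form; the precise norm and function spaces are those of the paper:

```latex
% Error between the microscopic phase-field solution u_eps and the
% solution u_0 of the upscaled formulation, for heterogeneity eps:
\| u_\varepsilon - u_0 \| \;\le\; C\,\varepsilon^{1/4},
\qquad \varepsilon \to 0,
% with C independent of eps.
```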
Sloshing dynamics on rotating helium dewar tank
NASA Technical Reports Server (NTRS)
Hung, R. J.
1993-01-01
The generalized mathematical formulation of sloshing dynamics for cryogenic superfluid helium II partially filling dewar containers, driven by both gravity-gradient and jitter accelerations, is investigated for scientific spacecraft that carry out spinning and/or slew motions to perform scientific observations during normal operation. An example is given for the Gravity Probe-B (GP-B) spacecraft. The jitter accelerations include slew motion, spinning motion, atmospheric drag on the spacecraft, and spacecraft attitude motions arising from machinery vibrations, thruster firing, pointing control, crew motion, etc. Explicit mathematical expressions for these forces acting on the spacecraft fluid systems are derived. The numerical computation of sloshing dynamics is based on a non-inertial, spacecraft-bound coordinate frame and solves time-dependent, three-dimensional formulations of partial differential equations subject to initial and boundary conditions. Explicit mathematical expressions of the boundary conditions covering the capillary force effect on the liquid-vapor interface in microgravity environments are also derived. The formulations of fluid moment and angular momentum fluctuations in fluid profiles induced by the sloshing dynamics, together with the fluid stress and moment fluctuations exerted on the spacecraft dewar containers, are derived. Results have been widely published in open journals.
Numerical studies of the surface tension effect of cryogenic liquid helium
NASA Technical Reports Server (NTRS)
Hung, R. J.
1994-01-01
The generalized mathematical formulation of sloshing dynamics for cryogenic superfluid helium II partially filling dewar containers, driven by both gravity-gradient and jitter accelerations, is investigated for scientific spacecraft that carry out spinning and/or slew motions to perform scientific observations during normal operation. An example is given for the Gravity Probe-B (GP-B) spacecraft. The jitter accelerations include slew motion, spinning motion, atmospheric drag on the spacecraft, and spacecraft attitude motions arising from machinery vibrations, thruster firing, pointing control, crew motion, etc. Explicit mathematical expressions for these forces acting on the spacecraft fluid systems are derived. The numerical computation of sloshing dynamics is based on a non-inertial, spacecraft-bound coordinate frame and solves time-dependent, three-dimensional formulations of partial differential equations subject to initial and boundary conditions. Explicit mathematical expressions of the boundary conditions covering the capillary force effect on the liquid-vapor interface in microgravity environments are also derived. The formulations of fluid moment and angular momentum fluctuations in fluid profiles induced by the sloshing dynamics, together with the fluid stress and moment fluctuations exerted on the spacecraft dewar containers, have been derived.
California and the "Common Core": Will There Be a New Debate about K-12 Standards?
ERIC Educational Resources Information Center
EdSource, 2010
2010-01-01
A growing chorus of state and federal policymakers, large foundations, and business leaders across the country are calling for states to adopt a common, rigorous body of college- and career-ready skills and knowledge in English and mathematics that all K-12 students will be expected to master by the time they graduate. This report looks at the…
ERIC Educational Resources Information Center
Kushman, Jim; Hanita, Makoto; Raphael, Jacqueline
2011-01-01
Students entering high school face many new academic challenges. One of the most important is their ability to read and understand more complex text in literature, mathematics, science, and social studies courses as they navigate through a rigorous high school curriculum. The Regional Educational Laboratory (REL) Northwest conducted a study to…
Mathematical modeling of the aerodynamic characteristics in flight dynamics
NASA Technical Reports Server (NTRS)
Tobak, M.; Chapman, G. T.; Schiff, L. B.
1984-01-01
Basic concepts involved in the mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers are reviewed. The original formulation of an aerodynamic response in terms of nonlinear functionals is shown to be compatible with a derivation based on the use of nonlinear functional expansions. Extensions of the analysis through its natural connection with ideas from bifurcation theory are indicated.
ERIC Educational Resources Information Center
Adani, Anthony; Eskay, Michael; Onu, Victoria
2012-01-01
This quasi-experimental study examined the effect of self-instruction strategy on the achievement in algebra of students with learning difficulty in mathematics. Two research questions and one null hypothesis were formulated to guide the study. The study adopted a non-randomized pre-test and post-test control group design with one experimental…
ERIC Educational Resources Information Center
Kondratieva, Margo; Winsløw, Carl
2018-01-01
We present a theoretical approach to the problem of the transition from Calculus to Analysis within the undergraduate mathematics curriculum. First, we formulate this problem using the anthropological theory of the didactic, in particular the notion of praxeology, along with a possible solution related to Klein's "Plan B": here,…
Equilibrium Fluid Interface Behavior Under Low- and Zero-Gravity Conditions. 2
NASA Technical Reports Server (NTRS)
Concus, Paul; Finn, Robert
1996-01-01
The mathematical basis for the forthcoming Angular Liquid Bridge investigation on board Mir is described. Our mathematical work is based on the classical Young-Laplace-Gauss formulation for an equilibrium free surface of liquid partly filling a container or otherwise in contact with solid support surfaces. The anticipated liquid behavior used in the apparatus design is also illustrated.
ERIC Educational Resources Information Center
Alordiah, Caroline Ochuko; Akpadaka, Grace; Oviogbodu, Christy Oritseweyimi
2015-01-01
The study investigated the influence of gender, school location, and socio-economic status (SES) on students' academic achievement in mathematics. The study was an ex-post factor design in which the variables were not manipulated nor controlled. Four research questions and three hypotheses were formulated to guide the study. The stratified random…
Suñé-Negre, Josep M; Pérez-Lozano, Pilar; Miñarro, Montserrat; Roig, Manel; Fuster, Roser; Hernández, Carmen; Ruhí, Ramon; García-Montoya, Encarna; Ticó, Josep R
2008-08-01
Application of the new SeDeM Method is proposed for the study of the galenic properties of excipients in terms of the applicability of direct-compression (DC) technology. Through experimental studies of the parameters of the SeDeM Method and their subsequent mathematical treatment and graphical expression (SeDeM Diagram), six different DC diluents were analysed to determine whether they were suitable for direct compression. Based on the properties of these diluents, a mathematical equation was established to identify the best DC diluent and the optimum amount to be used when defining a suitable formula for direct compression, depending on the SeDeM properties of the active pharmaceutical ingredient (API) to be used. The results obtained confirm that the SeDeM Method is an appropriate and effective tool for determining a viable formulation for tablets prepared by direct compression, and can thus be used as the basis for the relevant pharmaceutical development.
Numerical Modeling of Saturated Boiling in a Heated Tube
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Hartwig, Jason
2017-01-01
This paper describes a mathematical formulation and numerical solution of boiling in a heated tube. The mathematical formulation involves a discretization of the tube into a flow network consisting of fluid nodes and branches and a thermal network consisting of solid nodes and conductors. In the fluid network, the mass, momentum and energy conservation equations are solved and in the thermal network, the energy conservation equation of solids is solved. A pressure-based, finite-volume formulation has been used to solve the equations in the fluid network. The system of equations is solved by a hybrid numerical scheme which solves the mass and momentum conservation equations by a simultaneous Newton-Raphson method and the energy conservation equation by a successive substitution method. The fluid network and thermal network are coupled through heat transfer between the solid and fluid nodes which is computed by Chen's correlation of saturated boiling heat transfer. The computer model is developed using the Generalized Fluid System Simulation Program and the numerical predictions are compared with test data.
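The hybrid scheme described above can be illustrated with a deliberately toy problem: a Newton-Raphson solve of a nonlinear momentum balance, coupled through an outer loop to a successive-substitution update of the energy equation. The single-pipe model, the friction law, and all numbers below are made-up stand-ins, not GFSSP's actual equation set.

```python
# Toy hybrid scheme: Newton-Raphson for the momentum balance
# K(T) * m * |m| = dp, successive substitution for the energy balance
# T = T_in + Q / (m * cp), iterated to mutual convergence.

def solve(dp=1000.0, q_load=5000.0, t_in=300.0, cp=4180.0):
    m, t = 1.0, t_in              # initial guesses: flow rate, temperature
    for _ in range(100):          # outer coupling loop
        k = 800.0 * (1.0 + 0.002 * (t - t_in))   # T-dependent friction coeff.
        # Newton-Raphson on the momentum residual r(m) = k*m*|m| - dp
        for _ in range(50):
            r = k * m * abs(m) - dp
            dr = 2.0 * k * abs(m)                # dr/dm for m > 0
            m_new = m - r / dr
            if abs(m_new - m) < 1e-10:
                m = m_new
                break
            m = m_new
        # Successive substitution on the energy equation
        t_new = t_in + q_load / (m * cp)
        if abs(t_new - t) < 1e-8:
            return m, t_new
        t = t_new
    return m, t

m, t = solve()
```

The outer loop mirrors the fluid-network/thermal-network coupling: each pass re-evaluates the temperature-dependent coefficient, re-solves momentum, then updates the energy state.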
Multidimensional Methods for the Formulation of Biopharmaceuticals and Vaccines
Maddux, Nathaniel R.; Joshi, Sangeeta B.; Volkin, David B.; Ralston, John P.; Middaugh, C. Russell
2013-01-01
Determining and preserving the higher order structural integrity and conformational stability of proteins, plasmid DNA and macromolecular complexes such as viruses, virus-like particles and adjuvanted antigens is often a significant barrier to the successful stabilization and formulation of biopharmaceutical drugs and vaccines. These properties typically must be investigated with multiple lower resolution experimental methods, since each technique monitors only a narrow aspect of the overall conformational state of a macromolecular system. This review describes the use of empirical phase diagrams (EPDs) to combine large amounts of data from multiple high-throughput instruments and construct a map of a target macromolecule's physical state as a function of temperature, solvent conditions, and other stress variables. We present a tutorial on the mathematical methodology, an overview of some of the experimental methods typically used, and examples of some of the previous major formulation applications. We also explore novel applications of EPDs including potential new mathematical approaches as well as possible new biopharmaceutical applications such as analytical comparability, chemical stability, and protein dynamics. PMID:21647886
The Bean model in superconductivity: Variational formulation and numerical solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prigozhin, L.
The Bean critical-state model describes the penetration of magnetic field into type-II superconductors. Mathematically, this is a free boundary problem and its solution is of interest in applied superconductivity. We derive a variational formulation for the Bean model and use it to solve two-dimensional and axially symmetric critical-state problems numerically. 25 refs., 9 figs., 1 tab.
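Schematically, variational formulations of this type constrain the current density by the critical current and evolve the magnetic field through an evolutionary variational inequality; the exact function spaces, source terms, and signs are those of the report, so the following is only a sketch of the structure:

```latex
% Admissible set: fields whose curl (the current density) respects the
% critical current J_c; H evolves, for an applied field H_e, so that
K \;=\; \{\varphi \,:\, |\nabla\times\varphi| \le J_c \ \text{a.e.}\},
\qquad H(t) \in K,
\qquad
\big(\mu_0\,\partial_t (H + H_e),\; \varphi - H\big) \;\ge\; 0
\quad \forall\, \varphi \in K.
```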
A mathematical model for interpreting in vitro rhGH release from laminar implants.
Santoveña, A; García, J T; Oliva, A; Llabrés, M; Fariña, J B
2006-02-17
Recombinant human growth hormone (rhGH), used mainly for the treatment of growth hormone deficiency in children, requires daily subcutaneous injections. The use of controlled release formulations with appropriate rhGH release kinetics reduces the frequency of medication, improving patient compliance and quality of life. Biodegradable implants are a valid alternative, offering the feasibility of a regular release rate after administering a single dose, though they have the slight disadvantage of requiring a very minor surgical operation. Three laminar implant formulations (F(1), F(2) and F(3)) were produced by different manufacture procedures using solvent-casting techniques with the same poly(D,L-lactic-co-glycolic acid) (PLGA) polymer (Mw=48 kDa). An in vitro correlation between polymer matrix degradation and drug release rate from these formulations was found, and a mathematical model was developed to interpret it. This model was applied to each formulation. The results obtained were explained in terms of manufacture parameters with the aim of elucidating whether drug release occurs only by diffusion or erosion, or by a combination of both mechanisms. Controlling the manufacture method and the resultant changes in polymer structure facilitates a suitable rhGH release profile for different rhGH deficiency treatments.
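One common way to attribute release to diffusion versus erosion/relaxation is a Peppas-Sahlin-type superposition of a square-root-in-time term and a linear-in-time term. This is shown only to illustrate the kind of model discussed above, not the one the authors derive; the rate constants k1 and k2 are hypothetical.

```python
import math

# Peppas-Sahlin-type release sketch: cumulative fraction released is a
# diffusion term k1*sqrt(t) plus an erosion/relaxation term k2*t,
# capped at complete release. k1, k2 are illustrative assumptions.

def released_fraction(t_days, k1=0.08, k2=0.01):
    """Cumulative fraction of drug released at time t (days), in [0, 1]."""
    f = k1 * math.sqrt(t_days) + k2 * t_days
    return min(f, 1.0)

def diffusion_share(t_days, k1=0.08, k2=0.01):
    """Share of the release attributable to the diffusion term at time t."""
    d = k1 * math.sqrt(t_days)
    e = k2 * t_days
    return d / (d + e) if (d + e) > 0 else 0.0
```

At early times the sqrt(t) diffusion term dominates; as the matrix degrades, the linear erosion term takes over, which is the qualitative competition the abstract's model aims to disentangle.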
NASA Astrophysics Data System (ADS)
Quinn, J. D.; Reed, P. M.; Giuliani, M.; Castelletti, A.
2017-08-01
Managing water resources systems requires coordinated operation of system infrastructure to mitigate the impacts of hydrologic extremes while balancing conflicting multisectoral demands. Traditionally, recommended management strategies are derived by optimizing system operations under a single problem framing that is assumed to accurately represent the system objectives, tacitly ignoring the myriad of effects that could arise from simplifications and mathematical assumptions made when formulating the problem. This study illustrates the benefits of a rival framings framework in which analysts instead interrogate multiple competing hypotheses of how complex water management problems should be formulated. Analyzing rival framings helps discover unintended consequences resulting from inherent biases of alternative problem formulations. We illustrate this on the monsoonal Red River basin in Vietnam by optimizing operations of the system's four largest reservoirs under several different multiobjective problem framings. In each rival framing, we specify different quantitative representations of the system's objectives related to hydropower production, agricultural water supply, and flood protection of the capital city of Hanoi. We find that some formulations result in counterintuitive behavior. In particular, policies designed to minimize expected flood damages inadvertently increase the risk of catastrophic flood events in favor of hydropower production, while min-max objectives commonly used in robust optimization provide poor representations of system tradeoffs due to their instability. This study highlights the importance of carefully formulating and evaluating alternative mathematical abstractions of stakeholder objectives describing the multisectoral water demands and risks associated with hydrologic extremes.
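The framing sensitivity described above is easy to demonstrate in miniature. The sketch below uses plain Python with made-up flood-damage distributions (not the Red River model) to show how an expected-value objective and a min-max objective can rank the same two policies in opposite order:

```python
import random

random.seed(1)

# Hypothetical flood-damage outcomes (arbitrary units) for two release policies
# under 1000 sampled hydrologic scenarios; these numbers are invented for
# illustration, not taken from the study.
policy_a = [random.gauss(10, 2) for _ in range(1000)]            # steady, moderate damages
policy_b = [random.gauss(8, 1) for _ in range(990)] + [60] * 10  # lower on average, rare disasters

mean_a, mean_b = sum(policy_a) / len(policy_a), sum(policy_b) / len(policy_b)
worst_a, worst_b = max(policy_a), max(policy_b)

# The expected-damage framing prefers policy B; the min-max framing prefers A,
# mirroring how minimizing expected damages can hide catastrophic tail risk.
print(f"expected damages: A={mean_a:.1f}  B={mean_b:.1f}")
print(f"worst case:       A={worst_a:.1f}  B={worst_b:.1f}")
```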
Rigorous diffraction analysis using geometrical theory of diffraction for future mask technology
NASA Astrophysics Data System (ADS)
Chua, Gek S.; Tay, Cho J.; Quan, Chenggen; Lin, Qunying
2004-05-01
Advanced lithographic techniques such as phase shift masks (PSM) and optical proximity correction (OPC) result in more complex mask design and technology. In contrast to binary masks, which have only transparent and nontransparent regions, phase shift masks also include transparent features with a different optical thickness and hence a modified phase of the transmitted light. PSM are well known to show prominent diffraction effects, which cannot be described by the assumption of an infinitely thin mask (Kirchhoff approach) that is used in many commercial photolithography simulators. A correct prediction of sidelobe printability, process windows and linearity of OPC masks requires the application of rigorous diffraction theory. The problem of aerial image intensity imbalance through focus with alternating phase shift masks (altPSMs) is analyzed and compared between a time-domain finite-difference (TDFD) algorithm (TEMPEST) and the geometrical theory of diffraction (GTD). Using GTD, with the solutions to the canonical problems, we obtain a relationship between an edge on the mask and the disturbance in image space. The main interest is to develop useful formulations that can be readily applied to solve rigorous diffraction problems for future mask technology. Analysis of rigorous diffraction effects for altPSMs using the GTD approach is discussed.
Mathematic modeling of the Earth's surface and the process of remote sensing
NASA Technical Reports Server (NTRS)
Balter, B. M.
1979-01-01
It is shown that real data from remote sensing of the Earth from outer space are not best suited to the search for optimal procedures with which to process such data. To work out the procedures, it was proposed that data synthesized with the help of mathematical modeling be used. A criterion for similarity to reality was formulated. The basic principles for constructing methods for modeling the data from remote sensing are recommended. A concrete method is formulated for modeling a complete cycle of radiation transformations in remote sensing. A computer program is described which realizes the proposed method. Some results from calculations are presented which show that the method satisfies the requirements imposed on it.
Aeroelastic analysis for propellers - mathematical formulations and program user's manual
NASA Technical Reports Server (NTRS)
Bielawa, R. L.; Johnson, S. A.; Chi, R. M.; Gangwani, S. T.
1983-01-01
Mathematical development is presented for a specialized propeller dedicated version of the G400 rotor aeroelastic analysis. The G400PROP analysis simulates aeroelastic characteristics particular to propellers such as structural sweep, aerodynamic sweep and high subsonic unsteady airloads (both stalled and unstalled). Formulations are presented for these expanded propeller related methodologies. Results of limited application of the analysis to realistic blade configurations and operating conditions which include stable and unstable stall flutter test conditions are given. Sections included for enhanced program user efficiency and expanded utilization include descriptions of: (1) the structuring of the G400PROP FORTRAN coding; (2) the required input data; and (3) the output results. General information to facilitate operation and improve efficiency is also provided.
NASA Technical Reports Server (NTRS)
Sharma, Naveen
1992-01-01
In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
NASA Astrophysics Data System (ADS)
LeBeau, Brandon; Harwell, Michael; Monson, Debra; Dupuis, Danielle; Medhanie, Amanuel; Post, Thomas R.
2012-04-01
Background: The importance of increasing the number of US college students completing degrees in science, technology, engineering or mathematics (STEM) has prompted calls for research to provide a better understanding of factors related to student participation in these majors, including the impact of a student's high-school mathematics curriculum. Purpose: This study examines the relationship between various student and high-school characteristics and completion of a STEM major in college. Of specific interest is the influence of a student's high-school mathematics curriculum on the completion of a STEM major in college. Sample: The sample consisted of approximately 3500 students from 229 high schools. Students were predominantly Caucasian (80%), with slightly more males than females (52% vs 48%). Design and method: A quasi-experimental design with archival data was used for students who enrolled in, and graduated from, a post-secondary institution in the upper Midwest. To be included in the sample, students needed to have completed at least three years of high-school mathematics. A generalized linear mixed model was used with students nested within high schools. The data were cross-sectional. Results: High-school predictors were not found to have a significant impact on the completion of a STEM major. Significant student-level predictors included ACT mathematics score, gender and high-school mathematics GPA. Conclusions: The results provide evidence that on average students are equally prepared for the rigorous mathematics coursework regardless of the high-school mathematics curriculum they completed.
Efficiency and formalism of quantum games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.F.; Johnson, Neil F.
We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.
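As a minimal classical baseline for the rigorous game formulation mentioned above, the sketch below computes the minimax value of a zero-sum game (matching pennies) by a coarse search over mixed strategies; the quantum generalization enlarges exactly this strategy space:

```python
# Minimax value of a classical zero-sum game (matching pennies), found by a
# coarse grid search over mixed strategies. This illustrates the classical
# strategy set that quantum games strictly extend; it is not the paper's
# quantum formalism.
payoff = [[1, -1], [-1, 1]]  # row player's payoff matrix

def expected(p, q):
    # p, q: probabilities of each player's first pure strategy
    return sum(payoff[i][j] * pi * qj
               for i, pi in enumerate((p, 1 - p))
               for j, qj in enumerate((q, 1 - q)))

grid = [k / 100 for k in range(101)]
# The row player maximizes the worst case over the column player's responses.
value, p_star = max((min(expected(p, q) for q in grid), p) for p in grid)
print(p_star, round(value, 6))  # the equilibrium mixes both strategies equally (value 0)
```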
ERIC Educational Resources Information Center
Yost, Megan R.; Smith, Laura A.
2012-01-01
Clinicians rigorously study diseases and disorders so that they can formulate the best set of criteria for diagnosis. However, it is often the case, particularly on a college campus, that a friend would notice changes in physical or mental wellness long before a doctor or psychologist would. Because of this, research on the accuracy of lay…
ERIC Educational Resources Information Center
O'Farrell, Timothy J.; Cutter, Henry S. G.
After describing a social learning formulation of the male alcoholic's marriage, this paper reviews the few studies of behavioral marital therapy (BMT) for alcoholics and their wives. Although none of these studies are as rigorous as one might wish and many of them are merely case studies, a review of the literature shows that behavioral marital…
Mathematical modeling of urea transport in the kidney.
Layton, Anita T
2014-01-01
Mathematical modeling techniques have been useful in providing insights into biological systems, including the kidney. This article considers some of the mathematical models that concern urea transport in the kidney. Modeling simulations have been conducted to investigate, in the context of urea cycling and urine concentration, the effects of hypothetical active urea secretion into pars recta. Simulation results suggest that active urea secretion induces a "urea-selective" improvement in urine concentrating ability. Mathematical models have also been built to study the implications of the highly structured organization of tubules and vessels in the renal medulla on urea sequestration and cycling. The goal of this article is to show how physiological problems can be formulated and studied mathematically, and how such models may provide insights into renal functions.
Formulation of image quality prediction criteria for the Viking lander camera
NASA Technical Reports Server (NTRS)
Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.
1973-01-01
Image quality criteria are defined and mathematically formulated for the prediction computer program which is to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with the surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal to encompass, but not exceed, the range of the scene radiance distribution within a single, commandable camera dynamic range setting is also considered.
Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces
NASA Astrophysics Data System (ADS)
Lang, Hans-Dieter; Sarris, Costas D.
2017-10-01
This paper presents a rigorous optimization technique for wireless power transfer (WPT) systems enhanced by passive elements, ranging from simple reflectors and intermediate relays all the way to general electromagnetic guiding and focusing structures, such as metasurfaces and metamaterials. At its core is a convex semidefinite relaxation of the otherwise nonconvex optimization problem, whose tightness and optimality can be confirmed by a simple test of its solutions. The resulting method is rigorous, versatile, and general: it does not rely on any assumptions. As shown in various examples, it is able to efficiently and reliably optimize such WPT systems in order to find their physical limitations on performance and optimal operating parameters, and to inspect their working principles, even for a large number of active transmitters and passive elements.
Many-body formulation of carriers capture time in quantum dots applicable in device simulation codes
NASA Astrophysics Data System (ADS)
Vallone, Marco
2010-03-01
We present an application of the Green's function formalism to calculate, in a simplified but rigorous way, electron and hole capture times in quantum dots in closed form as functions of carrier density, level confinement potential, and temperature. Carrier-carrier (Auger) scattering and single-LO-phonon emission are both addressed, accounting for dynamic effects of the potential screening in the single plasmon pole approximation of the dielectric function. Regarding the LO-phonon interaction, the formulation evidences the role of the dynamic screening from wetting-layer carriers in comparison with its static limit, describes the interplay between screening and Fermi band filling, and offers simple expressions for the capture time, suitable for implementation in modeling codes.
Rost, Christina M.; Sachet, Edward; Borman, Trent; Moballegh, Ali; Dickey, Elizabeth C.; Hou, Dong; Jones, Jacob L.; Curtarolo, Stefano; Maria, Jon-Paul
2015-01-01
Configurational disorder can be compositionally engineered into mixed oxides by populating a single sublattice with many distinct cations. Such formulations promote novel, entropy-stabilized forms of crystalline matter in which metal cations are incorporated in new ways. Here, through rigorous experiments, a simple thermodynamic model, and a five-component oxide formulation, we demonstrate beyond reasonable doubt that entropy predominates in the thermodynamic landscape and drives a reversible solid-state transformation between a multiphase and a single-phase state. In the latter, cation distributions are proven to be random and homogeneous. The findings validate the hypothesis that deliberate configurational disorder provides an orthogonal strategy to imagine and discover new phases of crystalline matter and untapped opportunities for property engineering. PMID:26415623
Putting problem formulation at the forefront of GMO risk analysis.
Tepfer, Mark; Racovita, Monica; Craig, Wendy
2013-01-01
When applying risk assessment and the broader process of risk analysis to decisions regarding the dissemination of genetically modified organisms (GMOs), the process has a tendency to become remarkably complex. Further, as greater numbers of countries consider authorising the large-scale dissemination of GMOs, and as GMOs with more complex traits reach late stages of development, there has been increasing concern about the burden posed by the complexity of risk analysis. We present here an improved approach for GMO risk analysis that gives a central role to problem formulation. Further, the risk analysis strategy has been clarified and simplified in order to make rigorously scientific risk assessment and risk analysis more broadly accessible to diverse stakeholder groups.
Shivakumar, Hagalavadi Nanjappa; Patel, Pragnesh Bharat; Desai, Bapusaheb Gangadhar; Ashok, Purnima; Arulmozhi, Sinnathambi
2007-09-01
A 3² factorial design was employed to produce glipizide lipospheres by the emulsification phase separation technique using paraffin wax and stearic acid as retardants. The effect of critical formulation variables, namely levels of paraffin wax (X1) and proportion of stearic acid in the wax (X2) on geometric mean diameter (dg), percent encapsulation efficiency (% EE), release at the end of 12 h (rel12) and time taken for 50% of drug release (t50), were evaluated using the F-test. Mathematical models containing only the significant terms were generated for each response parameter using multiple linear regression analysis (MLRA) and analysis of variance (ANOVA). Both formulation variables studied exerted a significant influence (p < 0.05) on the response parameters. Numerical optimization using the desirability approach was employed to develop an optimized formulation by setting constraints on the dependent and independent variables. The experimental values of dg, % EE, rel12 and t50 for the optimized formulation were found to be 57.54 +/- 1.38 μm, 86.28 +/- 1.32%, 77.23 +/- 2.78% and 5.60 +/- 0.32 h, respectively, which were in close agreement with those predicted by the mathematical models. The drug release from lipospheres followed first-order kinetics and was characterized by the Higuchi diffusion model. The optimized liposphere formulation developed was found to produce sustained anti-diabetic activity following oral administration in rats.
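The multiple linear regression step described in the study can be illustrated on a coded 3² design. The responses below are invented for this sketch, not the glipizide data:

```python
import numpy as np

# Illustrative 3^2 factorial fit with factor levels coded -1, 0, +1.
# The response values are made up (true surface 50 + 8*x1 - 5*x2 + 2*x1*x2
# plus small "experimental" deviations), not data from the study.
levels = [-1, 0, 1]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()
y = (50 + 8 * x1 - 5 * x2 + 2 * x1 * x2
     + np.array([0.3, -0.2, 0.1, 0.0, 0.2, -0.1, 0.1, -0.3, 0.2]))

# Multiple linear regression: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))  # recovers roughly [50, 8, -5, 2]
```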
A transformative model for undergraduate quantitative biology education.
Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.
ERIC Educational Resources Information Center
Guner, Necdet
2013-01-01
This study examines and classifies the metaphors that twelfth grade students formulated to describe the concept of "learning mathematics". The sample of the study consists of 669 twelfth grade students (317 female, 352 male) of two Anatolian and two vocational high schools located in the city center of Denizli. The following questions…
Developing a Theoretical Framework for Classifying Levels of Context Use for Mathematical Problems
ERIC Educational Resources Information Center
Almuna Salgado, Felipe
2016-01-01
This paper aims to revisit and clarify the term problem context and to develop a theoretical classification of the construct of levels of context use (LCU) to analyse how the context of a problem is used to formulate a problem in mathematical terms and to interpret the answer in relation to the context of a given problem. Two criteria and six…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodruff, David; Hackebeil, Gabe; Laird, Carl Damon
Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This capability is commonly associated with algebraic modeling languages (AMLs), which support the description and analysis of mathematical models with a high-level language. Although most AMLs are implemented in custom modeling languages, Pyomo's modeling objects are embedded within Python, a full-featured high-level programming language that contains a rich set of supporting libraries.
ERIC Educational Resources Information Center
Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo
2012-01-01
A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
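For context, classic Horn-style parallel analysis (the heuristic baseline the authors propose to revise, not their revision itself) can be sketched as follows, with synthetic data containing one real factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Horn's classic parallel analysis: retain components whose observed
# eigenvalues exceed the mean eigenvalues of comparably sized random data.
# The data set below is synthetic: three indicators of one latent factor
# plus three pure-noise variables.
n, p = 300, 6
latent = rng.normal(size=(n, 1))
data = np.hstack([latent + 0.5 * rng.normal(size=(n, 1)) for _ in range(3)]
                 + [rng.normal(size=(n, 1)) for _ in range(3)])

obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
rand_eig = np.mean([np.sort(np.linalg.eigvalsh(
    np.corrcoef(rng.normal(size=(n, p)), rowvar=False)))[::-1]
    for _ in range(100)], axis=0)

n_factors = int(np.sum(obs_eig > rand_eig))
print(n_factors)  # count of eigenvalues exceeding the random benchmark
```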
Statistical hydrodynamics and related problems in spaces of probability measures
NASA Astrophysics Data System (ADS)
Dostoglou, Stamatios
2017-11-01
A rigorous theory of statistical solutions of the Navier-Stokes equations, suitable for exploring Kolmogorov's ideas, has been developed by M.I. Vishik and A.V. Fursikov, culminating in their monograph "Mathematical problems of Statistical Hydromechanics." We review some progress made in recent years following this approach, with emphasis on problems concerning the correlation of velocities and corresponding questions in the space of probability measures on Hilbert spaces.
ACM TOMS replicated computational results initiative
Heroux, Michael Allen
2015-06-03
The scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as the ACM Transactions on Mathematical Software (TOMS) use this process to rigorously promote the clarity and completeness of content and the citation of prior work. At the same time, it is unusual to independently confirm computational results.
Dóka, Éva; Lente, Gábor
2017-04-13
This work presents a rigorous mathematical study of the effect of unavoidable inhomogeneities in laser flash photolysis experiments. There are two different kinds of inhomogeneities: the first arises from diffusion, whereas the second has geometric origins (the shapes of the excitation and detection light beams). Both of these are taken into account in our reported model, which gives rise to a set of reaction-diffusion type partial differential equations. These equations are solved by a specially developed finite volume method. As an example, the aqueous reaction between the sulfate ion radical and the iodide ion is used, for which sufficiently detailed experimental data are available from an earlier publication. The results showed that diffusion itself is in general too slow to influence the kinetic curves on the usual time scales of laser flash photolysis experiments. However, the use of the measured absorbances (e.g., to calculate the molar absorption coefficients of transient species) requires very detailed mathematical consideration and full knowledge of the geometrical shapes of the excitation laser beam and the separate detection light beam. It is also noted that the usual pseudo-first-order approach to evaluating the kinetic traces can be used successfully even if the usual large-excess condition is not rigorously met locally in the reaction cell.
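The finite volume idea underlying the reported model can be illustrated in one dimension. This sketch diffuses an initial "laser spot" concentration profile with zero-flux boundaries; it is far simpler than the paper's reaction-diffusion system, and all numbers are illustrative:

```python
import numpy as np

# Schematic 1-D finite-volume diffusion: cell averages are updated from
# Fickian fluxes across cell faces. Parameters are illustrative only.
n, L, D, dt = 100, 1.0, 1e-3, 0.01   # cells, domain length, diffusivity, step
dx = L / n
c = np.zeros(n)
c[45:55] = 1.0                        # initial transient-species "laser spot"
mass0 = c.sum() * dx                  # total amount in the closed cell

for _ in range(200):
    F = -D * np.diff(c) / dx                 # fluxes at the n-1 interior faces
    F = np.concatenate(([0.0], F, [0.0]))    # zero-flux (closed) boundaries
    c = c - dt / dx * np.diff(F)             # conservative update of each cell

# The profile spreads (peak drops below 1) while total mass is conserved.
print(round(c.max(), 3), round(c.sum() * dx, 6))
```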
Cartan gravity, matter fields, and the gauge principle
NASA Astrophysics Data System (ADS)
Westman, Hans F.; Zlosnik, Tom G.
2013-07-01
Gravity is commonly thought of as one of the four force fields in nature. However, in standard formulations its mathematical structure is rather different from the Yang-Mills fields of particle physics that govern the electromagnetic, weak, and strong interactions. This paper explores this dissonance with particular focus on how gravity couples to matter from the perspective of the Cartan-geometric formulation of gravity. There the gravitational field is represented by a pair of variables: (1) a 'contact vector' VA which is geometrically visualized as the contact point between the spacetime manifold and a model spacetime being 'rolled' on top of it, and (2) a gauge connection AμAB, here taken to be valued in the Lie algebra of SO(2,3) or SO(1,4), which mathematically determines how much the model spacetime is rotated when rolled. By insisting on two principles, the gauge principle and polynomial simplicity, we shall show how one can reformulate matter field actions in a way that is harmonious with Cartan's geometric construction. This yields a formulation of all matter fields in terms of first order partial differential equations. We show in detail how the standard second order formulation can be recovered. In particular, the Hodge dual, which characterizes the structure of bosonic field equations, pops up automatically. Furthermore, the energy-momentum and spin-density three-forms are naturally combined into a single object here denoted the spin-energy-momentum three-form. Finally, we highlight a peculiarity in the mathematical structure of our first-order formulation of Yang-Mills fields. This suggests a way to unify a U(1) gauge field with gravity into a SO(1,5)-valued gauge field using a natural generalization of Cartan geometry in which the larger symmetry group is spontaneously broken down to SO(1,3)×U(1). The coupling of this unified theory to matter fields and possible extensions to non-Abelian gauge fields are left as open questions.
A Hilly path through the thermodynamics and statistical mechanics of protein solutions.
Wills, Peter R
2016-12-01
The opus of Don Winzor in the fields of physical and analytical biochemistry is a major component of that certain antipodean approach to this broad area of research that blossomed in the second half of the twentieth century. The need to formulate problems in terms of thermodynamic nonideality posed the challenge of describing a clear route from molecular interactions to the parameters that biochemists routinely measure. Mapping out this route required delving into the statistical mechanics of solutions of macromolecules, and at every turn mathematically complex, rigorous, general results that had been derived previously, often by Terrell Hill, came to the fore. Central to this work were the definition of the "thermodynamic activity", the pivotal position of the polynomial expansion of the osmotic pressure in terms of molar concentration and the relationship of virial coefficients to details of the forces between limited-size groups of interacting molecules. All of this was richly exploited in the task of taking account of excluded volume and electrostatic interactions, especially in the use of sedimentation equilibrium to determine values of constants for molecular association reactions. Such an approach has proved relevant to the study of molecular interactions generally, even those between the main macromolecular solute and components of the solvent, by using techniques such as exclusion and affinity chromatography as well as light scattering.
Oscillations in a simple climate-vegetation model
NASA Astrophysics Data System (ADS)
Rombouts, J.; Ghil, M.
2015-05-01
We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.
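The flavor of such a two-variable climate-vegetation model can be sketched with forward-Euler integration; the functional forms and constants below are illustrative inventions in the spirit of the paper, not the authors' equations:

```python
# A toy energy-balance/vegetation model: temperature T (C) and vegetation
# cover V (fraction) are coupled through albedo and temperature-dependent
# growth. All coefficients are made up for illustration.
def step(T, V, dt=0.01):
    albedo = 0.5 - 0.2 * V                              # vegetation darkens the planet
    growth = max(0.0, 1.0 - ((T - 20.0) / 15.0) ** 2)   # growth peaks near 20 C
    dT = 12.0 * (1.0 - albedo) - 0.3 * T                # absorbed sunlight vs. linear cooling
    dV = V * (growth * (1.0 - V) - 0.2)                 # logistic growth minus death rate
    return T + dt * dT, V + dt * dV

T, V = 10.0, 0.1
for _ in range(20000):                                  # integrate to t = 200
    T, V = step(T, V)
print(round(T, 2), round(V, 2))  # settles onto a warm, vegetated equilibrium
```

With a higher death rate the vegetated equilibrium disappears, mimicking the bistable/oscillatory regimes the paper analyzes.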
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, CFD researchers avoid high-aspect-ratio triangular/tetrahedral meshes in the vicinity of a solid wall, as such meshes are known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries the high-aspect-ratio triangular/tetrahedral elements near a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy is beneficial from a mesh-generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where simplex elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in using such high-aspect-ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, numerical simulations of practical viscous problems involving high-Reynolds-number flows showcase how the gradient-evaluation procedures of the CESE framework can be used to produce accurate and stable results on such high-aspect-ratio simplex meshes.
Meagher, Alison K.; Forrest, Alan; Dalhoff, Axel; Stass, Heino; Schentag, Jerome J.
2004-01-01
The pharmacokinetics of an extended-release (XR) formulation of ciprofloxacin has been compared to that of the immediate-release (IR) product in healthy volunteers. The only significant difference in pharmacokinetic parameters between the two formulations was seen in the rate constant of absorption, which was approximately 50% greater with the IR formulation. The geometric mean plasma ciprofloxacin concentrations were applied to an in vitro pharmacokinetic-pharmacodynamic model exposing three different clinical strains of Escherichia coli (MICs, 0.03, 0.5, and 2.0 mg/liter) to 24 h of simulated concentrations in plasma. A novel mathematical model was derived to describe the time course of bacterial CFU, including capacity-limited replication and first-order rate of bacterial clearance, and to model the effects of ciprofloxacin concentrations on these processes. A “mixture model” was employed which allowed as many as three bacterial subpopulations to describe the total bacterial load at any moment. Comparing the two formulations at equivalent daily doses, the rates and extents of bacterial killing were similar with the IR and XR formulations at MICs of 0.03 and 2.0 mg/liter. At an MIC of 0.5 mg/liter, however, the 1,000-mg/day XR formulation showed a moderate advantage in antibacterial effect: the area under the CFU-time curve was 45% higher for the IR regimen; the nadir log CFU and 24-h log CFU values for the IR regimen were 3.75 and 2.49, respectively; and those for XR were 4.54 and 3.13, respectively. The mathematical model explained the differences in bacterial killing rate for two regimens with identical AUC/MIC ratios. PMID:15155200
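The fitted mixture model itself is not given in the abstract; the following single-population sketch shows the basic structure such models share: capacity-limited (logistic) replication minus a kill term driven by a simulated concentration profile. The PK and PD parameter values below are illustrative assumptions, not the study's estimates.

```python
import math

def conc(t, dose=500.0, ka=2.0, ke=0.3, vd=100.0):
    """One-compartment oral pharmacokinetics (illustrative parameters, mg/L)."""
    return dose * ka / (vd * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def simulate(hours=24.0, dt=0.001, n0=1e6, nmax=1e9, kg=1.5, kmax=4.0, ec50=0.5):
    """Euler integration of capacity-limited bacterial growth with a
    concentration-driven kill term (a single-population sketch, not the
    paper's three-subpopulation mixture model)."""
    n, nadir, t = n0, n0, 0.0
    for _ in range(int(hours / dt)):
        c = conc(t)
        dn = kg * n * (1.0 - n / nmax) - kmax * c / (c + ec50) * n
        n = max(n + dt * dn, 1.0)      # floor of one CFU
        nadir = min(nadir, n)
        t += dt
    return nadir, n
```

The run reproduces the qualitative CFU-time shape discussed above: a kill phase while concentrations exceed the effective threshold, a nadir, then regrowth as the drug is cleared.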
Mathematical models for principles of gyroscope theory
NASA Astrophysics Data System (ADS)
Usubamatov, Ryspek
2017-01-01
Gyroscope devices are primary units for navigation and control systems with wide application in engineering. The main property of a gyroscope device is that it maintains the axis of a spinning rotor. This peculiarity is described in terms of gyroscope effects, for which the known mathematical models have been formulated on the basis of the law of kinetic energy conservation and the change in angular momentum. The gyroscope theory is represented by numerous publications whose mathematical models do not match the actual torques and motions in these devices. The nature of gyroscope effects is more complex than represented in known publications. Recent investigations in this area have demonstrated that up to eleven internal torques can act on a gyroscope simultaneously and interdependently about two axes. These torques are generated by the spinning rotor's mass elements and by the gyroscope's center of mass under the action of several inertial forces. The change in angular momentum does not play the primary role in gyroscope motions. An external load generates several internal torques whose directions may differ, which changes the angular velocities of the gyroscope's motions about two axes. The mathematical models formulated for these internal torques represent the fundamental principles of gyroscope theory. In detail, the gyroscope experiences a resistance torque generated by the centrifugal and Coriolis forces of the spinning rotor, and a precession torque generated by the common inertial forces and the change in angular momentum. The new mathematical models for the torques and motions of the gyroscope were confirmed on problems that had previously resisted solution; the models were tested in practice, and the results validate the theoretical approach.
Two Novel Methods and Multi-Mode Periodic Solutions for the Fermi-Pasta-Ulam Model
NASA Astrophysics Data System (ADS)
Arioli, Gianni; Koch, Hans; Terracini, Susanna
2005-04-01
We introduce two novel methods for studying periodic solutions of the FPU β-model, both numerically and rigorously. One is a variational approach, based on the dual formulation of the problem; the other involves computer-assisted proofs. These methods are used, for example, to construct a new type of solution whose energy is spread among several modes associated with closely spaced resonances.
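As a purely numerical companion to the rigorous methods described above, a direct simulation of the FPU β-chain shows how energy placed in the lowest linear mode evolves under the quartic coupling; the chain size, coupling β, and time step below are arbitrary illustrative choices.

```python
import math

def accel(q, beta=0.1):
    """Accelerations of the FPU beta-chain with fixed ends."""
    ext = [0.0] + q + [0.0]
    a = []
    for i in range(1, len(q) + 1):
        dr = ext[i + 1] - ext[i]          # right-neighbor stretch
        dl = ext[i] - ext[i - 1]          # left-neighbor stretch
        a.append((dr - dl) + beta * (dr ** 3 - dl ** 3))
    return a

def energy(q, p, beta=0.1):
    """Total energy: kinetic plus quadratic and quartic bond terms."""
    ext = [0.0] + q + [0.0]
    e = sum(0.5 * pi ** 2 for pi in p)
    for i in range(len(q) + 1):
        d = ext[i + 1] - ext[i]
        e += 0.5 * d ** 2 + 0.25 * beta * d ** 4
    return e

def run(n=16, steps=10000, dt=0.01, beta=0.1):
    """Velocity-Verlet integration starting in the lowest linear mode."""
    q = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
    p = [0.0] * n
    e0 = energy(q, p, beta)
    a = accel(q, beta)
    for _ in range(steps):
        p = [pi + 0.5 * dt * ai for pi, ai in zip(p, a)]
        q = [qi + dt * pi for qi, pi in zip(q, p)]
        a = accel(q, beta)
        p = [pi + 0.5 * dt * ai for pi, ai in zip(p, a)]
    return e0, energy(q, p, beta)
```

The symplectic Verlet scheme keeps the total energy nearly constant over the run, which is the basic sanity check for any numerical study of this Hamiltonian system.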
Gravitational Physics: the birth of a new era
NASA Astrophysics Data System (ADS)
Sakellariadou, Mairi
2017-11-01
We live in the golden age of cosmology, and the era of gravitational astronomy has finally begun. Still, fundamental puzzles remain. Standard cosmology is formulated within the framework of Einstein's General Theory of Relativity. Nevertheless, General Relativity is not adequate to explain the earliest stages of cosmic existence and cannot provide an explanation for the Big Bang itself. Modern early-universe cosmology is in need of a rigorous underpinning in Quantum Gravity.
Identifying the mathematics middle year students use as they address a community issue
NASA Astrophysics Data System (ADS)
Marshman, Margaret
2017-03-01
Middle-year students often do not see the mathematics in the real world, whereas the Australian Curriculum: Mathematics aims for students to be "confident and creative users and communicators of mathematics" (Australian Curriculum Assessment and Reporting Authority [ACARA] 2012). Using authentic, real mathematics tasks can address this situation. This paper is an account of how, working within a Knowledge Producing Schools framework, a group of middle-year students used mathematics and technology to address a real community issue: the lack of a safe space for teenagers. Data were collected for this case study via journal observations and reflections, semi-structured interviews, samples of the students' work, and videos of students working. The data were analysed by identifying the mathematics the students used in determining the function and location of the space, with a focus on problem negotiation, formulation, and solving through the statistical investigation cycle. The paper identifies the mathematics and statistics these students used as they addressed a real problem in their local community.
Comparison of two gas chromatograph models and analysis of binary data
NASA Technical Reports Server (NTRS)
Keba, P. S.; Woodrow, P. T.
1972-01-01
The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary system data. The predictions of the two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model, exhibit the same weakness: an inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single-component behaviors is a first-order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.
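The superposition check described above can be sketched numerically by summing model single-component responses; the Gaussian peak shape and retention parameters here are assumptions for illustration, not the report's absorption models.

```python
import math

def peak(t, t_r, sigma, area=1.0):
    """Gaussian model of a single-component elution peak
    (retention time t_r, standard deviation sigma)."""
    return area / (sigma * math.sqrt(2.0 * math.pi)) * \
        math.exp(-0.5 * ((t - t_r) / sigma) ** 2)

def binary_superposition(t, peaks):
    """First-order prediction of a binary chromatogram as the sum of
    predicted single-component behaviors; the composition effects noted
    in the text are exactly what this linear sketch omits."""
    return sum(peak(t, *p) for p in peaks)
```

For two components with retention times 5 and 8, the predicted binary signal at any time is just the sum of the two individual peaks.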
Statistical Analysis of Protein Ensembles
NASA Astrophysics Data System (ADS)
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially since the vast majority of currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics that studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules, employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the sets of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
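For intuition, the 0-dimensional part of such a barcode can be computed with nothing more than single-linkage merging: each point starts its own component (bar), and a bar dies when its component merges into an older one. This is a toy construction for illustration only; the paper's pipeline uses full computational-topology algorithms.

```python
import itertools
import math

def barcode0(points):
    """0-dimensional persistence barcode of a point cloud: one bar per
    connected component under the Vietoris-Rips filtration; a bar dies
    at the distance where its component merges into an older one."""
    n = len(points)
    edges = sorted(
        (math.dist(p, q), i, j)
        for (i, p), (j, q) in itertools.combinations(enumerate(points), 2))
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[max(ri, rj)] = min(ri, rj)
            bars.append((0.0, d))           # a component born at 0 dies at d
    bars.append((0.0, float("inf")))        # one component persists forever
    return bars
```

On two well-separated pairs of points, the barcode shows two short bars (within-cluster merges), one long bar (the clusters joining), and one infinite bar.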
Spatial-Operator Algebra For Robotic Manipulators
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Milman, Mark H.
1991-01-01
Report discusses spatial-operator algebra developed in recent studies of mathematical modeling, control, and design of trajectories of robotic manipulators. Provides succinct representation of mathematically complicated interactions among multiple joints and links of manipulator, thereby relieving analyst of most of tedium of detailed algebraic manipulations. Presents analytical formulation of spatial-operator algebra, describes some specific applications, summarizes current research, and discusses implementation of spatial-operator algebra in the Ada programming language.
ERIC Educational Resources Information Center
Dodd, Carol Ann
This study explores a technique for evaluating teacher education programs in terms of teaching competencies, as applied to the Indiana University Mathematics Methods Program (MMP). The evaluation procedures formulated for the study include a process-product design in combination with a modification of Popham's performance test paradigm and Gage's…
NASA Astrophysics Data System (ADS)
Wardono; Mariani, S.; Hendikawati, P.; Ikayani
2017-04-01
The mathematizing process (MP) is the process of modeling a phenomenon mathematically or establishing the concept of a phenomenon. There are two kinds of mathematizing: horizontal (MH) and vertical (MV). MH turns contextual problems into mathematical problems, while MV is the formulation of the problem into various mathematical solutions using appropriate rules. Mathematics literacy (ML) is the ability to formulate, apply, and interpret mathematics in various contexts, including the capacity to reason mathematically and to use concepts, procedures, and facts to describe, explain, or predict phenomena. If junior high school students are continuously conditioned to conduct mathematizing activities in RCP (RME-Card Problem) learning, their ML as measured by PISA can improve. The purposes of this research are to determine whether the MP of grade VIII students on the ML content of shape and space, covering cubes and beams, is better with RCP learning than with scientific learning; whether the improvement of grade VIII students' MP on cubes and beams is better with RCP learning than with scientific learning in terms of reflective and impulsive cognitive styles; and to describe the MP of grade VIII students under the RCP approach in terms of reflective and impulsive cognitive styles. This research uses the concurrent embedded mixed-methods model. The population in this study is class VIII of SMPN 1 Batang, with a sample of two classes. Data were collected through observation, interviews, and tests, and were analyzed with a one-sided test of mean differences together with qualitative descriptive analysis. The results demonstrate that students' MP with RCP learning is better than with scientific learning, and that the improvement of MP with RCP learning is better than with scientific learning in terms of reflective and impulsive cognitive styles.
Subjects in the top, middle, and bottom reflective groups met all the MH indicators; the top and middle reflective groups met all the MV indicators, while the bottom reflective group met only some. The top and middle impulsive groups met all the MH indicators, while the bottom impulsive group met only some; the top impulsive group met all the MV indicators, while the middle and bottom impulsive groups met only some.
pyomocontrib_simplemodel v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William
2017-03-02
Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This library extends the API of Pyomo to include a simple modeling representation: a list of objectives and constraints.
Gentis, Nicolaos D; Betz, Gabriele
2012-02-01
The purpose of this work was to investigate and evaluate the powder compressibility of binary mixtures containing a well-compressible compound (microcrystalline cellulose) and a brittle active drug (paracetamol and mefenamic acid), and how that compressibility evolves as the drug load increases. The drug concentration ranged from 0% to 100% (m/m) in 10% intervals. The powder formulations were compacted to several relative densities with the Zwick material tester. The compaction force and tensile strength were fitted to several mathematical models that yield representative factors for powder compressibility. The factors k and C (Heckel and modified Heckel equations) showed a mostly nonlinear correlation with increasing drug load, with the biggest drop in both factors occurring over particular regions of the drug-load range. This outcome is important because, in binary mixtures, the drug-load regions with the sharpest changeover in the plotted factors may hint at an existing percolation threshold. The susceptibility value (Leuenberger equation) showed varying values for each formulation, without the expected decrease at higher drug loads. The outcomes of this study highlight the main challenges of good formulation design. We therefore conclude that such mathematical plots are essential for a scientific evaluation and prediction of the powder compaction process. Copyright © 2011 Wiley Periodicals, Inc.
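The Heckel analysis mentioned above linearizes compaction data as ln(1/(1-D)) = k·P + A, where D is the relative density and P the compaction pressure; the slope k is the compressibility factor discussed in the text. A minimal least-squares fit (the data in the usage check are synthetic, not the study's measurements):

```python
import math

def heckel_fit(pressures, rel_densities):
    """Ordinary least-squares fit of the Heckel equation
    ln(1/(1-D)) = k*P + A; returns the slope k and intercept A."""
    x = pressures
    y = [math.log(1.0 / (1.0 - d)) for d in rel_densities]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    k = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return k, my - k * mx
```

Given densification data generated exactly from known k and A, the fit recovers both parameters, which is the basic check before applying it to real compaction curves.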
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system, based on new fuzzy multiobjective mathematical programming formulations, are developed and evaluated in this study. The models (i) use mixed-integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit-commitment formulation along with water-quality constraints used to evaluate downstream impairment of the reservoir. The Reardon method, used in the solution of genetic-algorithm optimization problems, forms the basis for a new fuzzy multiobjective hydropower system optimization model built on Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations so as to avoid the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon-method formulations, and (iii) avoid the local optima obtained with traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
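The Reardon-type membership functions themselves are not specified in the abstract; the sketch below shows the generic max-min fuzzy compromise that such formulations build on, with linear memberships between assumed worst and best objective values (all numbers and objective functions are illustrative, not the paper's Reardon variant).

```python
def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst objective value,
    1 at the best, clipped to [0, 1]."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def max_min_release(releases, energy_of, quality_of,
                    energy_bounds, quality_bounds):
    """Max-min fuzzy compromise over candidate releases: pick the
    release maximizing the smallest objective satisfaction."""
    best = None
    for r in releases:
        mu = min(membership(energy_of(r), *energy_bounds),
                 membership(quality_of(r), *quality_bounds))
        if best is None or mu > best[1]:
            best = (r, mu)
    return best
```

With energy improving and water quality degrading linearly in the release, the compromise lands at the midpoint, balancing the two conflicting goals.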
The challenge of computer mathematics.
Barendregt, Henk; Wiedijk, Freek
2005-10-15
Progress in the foundations of mathematics has made it possible to formulate all thinkable mathematical concepts, algorithms and proofs in one language and in an impeccable way. This is not in spite of, but partially based on the famous results of Gödel and Turing. In this way statements are about mathematical objects and algorithms, proofs show the correctness of statements and computations, and computations are dealing with objects and proofs. Interactive computer systems for a full integration of defining, computing and proving are based on this. The human defines concepts, constructs algorithms and provides proofs, while the machine checks that the definitions are well formed and the proofs and computations are correct. Results formalized so far demonstrate the feasibility of this 'computer mathematics'. Also there are very good applications. The challenge is to make the systems more mathematician-friendly, by building libraries and tools. The eventual goal is to help humans to learn, develop, communicate, referee and apply mathematics.
ERIC Educational Resources Information Center
Neri, Rebecca; Lozano, Maritza; Chang, Sandy; Herman, Joan
2016-01-01
New college and career ready standards (CCRS) have established more rigorous expectations of learning for all learners, including English learner (EL) students, than what was expected in previous standards. A common feature in these new content-area standards, such as the Common Core State Standards in English language arts and mathematics and the…
Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.
1986-09-01
respectively. Here h is a parameter which is usually related to the size of the grid associated with the finite element partitioning of Q. Then one... grid and of not at least performing serious mesh refinement studies. It also points out the usefulness of rigorous results concerning the stability... overconstrained the approximate velocity field. However, by employing different grids for the pressure and velocity fields, the linear-constant
Advanced Extremely High Frequency Satellite (AEHF)
2015-12-01
control their tactical and strategic forces at all levels of conflict up to and including general nuclear war, and it supports the attainment of... Confidence Level of cost estimate for current APB: 50%. The ICE that supports the AEHF SV 1-4, like all life-cycle cost... mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in methods used in building
2015-12-01
system level testing. The WGS-6 financial data is not reported in this SAR because funding is provided by Australia in exchange for access to a... Confidence Level of cost estimate for current APB: 50%. The ICE to support the WGS Milestone C decision... to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in
2007-02-28
31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and..., 1767-1782, 2006. ... rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies
All biology is computational biology.
Markowetz, Florian
2017-03-01
Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life, it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science.
Influence of gas compressibility on a burning accident in a mining passage
NASA Astrophysics Data System (ADS)
Demir, Sinan; Calavay, Anish Raman; Akkerman, V'yacheslav
2018-03-01
A recent predictive scenario of a methane/air/coal dust fire in a mining passage is extended by incorporating the effect of gas compressibility into the analysis. The compressible and incompressible formulations are compared, qualitatively and quantitatively, in both the two-dimensional planar and cylindrical-axisymmetric geometries, and a detailed parametric study accounting for coal-dust combustion is performed. It is shown that gas compression moderates flame acceleration, and its impact depends on the type of the fuel, its various thermal-chemical parameters as well as on the geometry of the problem. While the effect of gas compression is relatively minor for the lean and rich flames, providing 5-25% reduction in the burning velocity and thereby justifying the incompressible formulation in that case, such a reduction appears significant, up to 70% for near-stoichiometric methane-air combustion, and therefore it should be incorporated into a rigorous formulation. It is demonstrated that the flame tip velocity remains noticeably subsonic in all the cases considered, which is opposite to the prediction of the incompressible formulation, but qualitatively agrees with the experimental predictions from the literature.
Spillover, nonlinearity, and flexible structures
NASA Technical Reports Server (NTRS)
Bass, Robert W.; Zes, Dean
1991-01-01
Many systems whose evolution in time is governed by Partial Differential Equations (PDEs) are linearized around a known equilibrium before Computer Aided Control Engineering (CACE) is considered. In this case, there are infinitely many independent vibrational modes, and it is intuitively evident on physical grounds that infinitely many actuators would be needed in order to control all modes. A more precise, general formulation of this grave difficulty (the spillover problem) is due to A.V. Balakrishnan. A possible route to circumvention of this difficulty lies in leaving the PDE in its original nonlinear form and adding the essentially finite-dimensional control action prior to linearization. One possibly applicable technique is the rigorous Liapunov-Schmidt reduction of singular infinite-dimensional implicit function problems to finite-dimensional implicit function problems. Omitting details of Banach-space rigor, the formalities of this approach are given.
Increased scientific rigor will improve reliability of research and effectiveness of management
Sells, Sarah N.; Bassing, Sarah B.; Barker, Kristin J.; Forshee, Shannon C.; Keever, Allison; Goerz, James W.; Mitchell, Michael S.
2018-01-01
Rigorous science that produces reliable knowledge is critical to wildlife management because it increases accurate understanding of the natural world and informs management decisions effectively. Application of a rigorous scientific method based on hypothesis testing minimizes unreliable knowledge produced by research. To evaluate the prevalence of scientific rigor in wildlife research, we examined 24 issues of the Journal of Wildlife Management from August 2013 through July 2016. We found 43.9% of studies did not state or imply a priori hypotheses, which are necessary to produce reliable knowledge. We posit that this is due, at least in part, to a lack of common understanding of what rigorous science entails, how it produces more reliable knowledge than other forms of interpreting observations, and how research should be designed to maximize inferential strength and usefulness of application. Current primary literature does not provide succinct explanations of the logic behind a rigorous scientific method or readily applicable guidance for employing it, particularly in wildlife biology; we therefore synthesized an overview of the history, philosophy, and logic that define scientific rigor for biological studies. A rigorous scientific method includes 1) generating a research question from theory and prior observations, 2) developing hypotheses (i.e., plausible biological answers to the question), 3) formulating predictions (i.e., facts that must be true if the hypothesis is true), 4) designing and implementing research to collect data potentially consistent with predictions, 5) evaluating whether predictions are consistent with collected data, and 6) drawing inferences based on the evaluation. Explicitly testing a priori hypotheses reduces overall uncertainty by reducing the number of plausible biological explanations to only those that are logically well supported. 
Such research also draws inferences that are robust to idiosyncratic observations and unavoidable human biases. Offering only post hoc interpretations of statistical patterns (i.e., a posteriori hypotheses) adds to uncertainty because it increases the number of plausible biological explanations without determining which have the greatest support. Further, post hoc interpretations are strongly subject to human biases. Testing hypotheses maximizes the credibility of research findings, makes the strongest contributions to theory and management, and improves reproducibility of research. Management decisions based on rigorous research are most likely to result in effective conservation of wildlife resources.
Mathematical modelling in developmental biology.
Vasieva, Olga; Rasolonjanahary, Manan'Iarivo; Vasiev, Bakhtier
2013-06-01
In recent decades, molecular and cellular biology has benefited from numerous fascinating developments in experimental technique, generating an overwhelming amount of data on various biological objects and processes. This, in turn, has led biologists to look for appropriate tools to facilitate systematic analysis of data. Thus, the need for mathematical techniques, which can be used to aid the classification and understanding of this ever-growing body of experimental data, is more profound now than ever before. Mathematical modelling is becoming increasingly integrated into biological studies in general and into developmental biology particularly. This review outlines some achievements of mathematics as applied to developmental biology and demonstrates the mathematical formulation of basic principles driving morphogenesis. We begin by describing a mathematical formalism used to analyse the formation and scaling of morphogen gradients. Then we address a problem of interplay between the dynamics of morphogen gradients and movement of cells, referring to mathematical models of gastrulation in the chick embryo. In the last section, we give an overview of various mathematical models used in the study of the developmental cycle of Dictyostelium discoideum, which is probably the best example of successful mathematical modelling in developmental biology.
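The formation-and-scaling formalism mentioned above typically starts from diffusion with linear degradation, D·C''(x) = k·C(x), whose steady state is an exponential gradient with decay length λ = sqrt(D/k). The sketch below compares that analytic profile with a finite-difference solve; the parameter values are arbitrary illustrations.

```python
import math

def gradient(x, c0=1.0, diff=1.0, deg=0.04):
    """Analytic steady-state morphogen profile C(x) = C0 * exp(-x/lambda)
    for diffusion with linear degradation; lambda = sqrt(D/k)."""
    lam = math.sqrt(diff / deg)
    return c0 * math.exp(-x / lam)

def solve_fd(n=200, length=50.0, c0=1.0, diff=1.0, deg=0.04):
    """Finite-difference solve of D*C'' = k*C with C(0)=c0, C(L)=0,
    using the Thomas (tridiagonal) algorithm on n interior nodes."""
    h = length / (n + 1)
    a = [diff / h ** 2] * n              # sub-diagonal
    b = [-2.0 * diff / h ** 2 - deg] * n  # diagonal
    c = [diff / h ** 2] * n              # super-diagonal
    d = [0.0] * n
    d[0] -= diff / h ** 2 * c0           # fold in the left boundary value
    for i in range(1, n):                # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n                        # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

Interior node i sits at x = (i+1)·h, so the numerical profile can be checked pointwise against the exponential solution.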
Orbital State Uncertainty Realism
NASA Astrophysics Data System (ADS)
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions, including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments such as air and missile defense make extensive use of Gaussian and local-linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient; SSA also requires "uncertainty realism": the proper characterization of the state, the covariance, and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants, which gives the level sets a distinctive "banana" or "boomerang" shape, and degenerates to a Gaussian in a suitable limit.
Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten times as long as the latter. The filter correction step also furnishes a statistically rigorous prediction error, which appears in the likelihood ratios for scoring the association of one report or observation to another. Thus, the new filter can be used to support multi-target tracking within a general multiple-hypothesis tracking framework. Additionally, the new distribution admits a distance metric which extends the classical Mahalanobis distance (chi^2 statistic). This metric provides a test for statistical significance and facilitates single-frame data association methods, with the potential to easily extend the covariance-based track association algorithm of Hill, Sabol, and Alfriend. The filtering, data fusion, and association methods using the new class of orbital state PDFs are shown to be mathematically tractable and operationally viable.
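The classical statistic that the new metric extends is the squared Mahalanobis distance of an observation residual under the state covariance, gated against a chi-square threshold. A minimal 2-D version is sketched below; the 5.99 gate is the standard 95% point of chi^2 with two degrees of freedom, and the closed-form inverse is specific to the 2x2 case.

```python
def mahalanobis2(residual, cov):
    """Squared Mahalanobis distance r^T P^{-1} r for a 2-D residual r
    with 2x2 covariance P, inverted in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    rx, ry = residual
    ix = (d * rx - b * ry) / det      # first component of P^{-1} r
    iy = (-c * rx + a * ry) / det     # second component of P^{-1} r
    return rx * ix + ry * iy

def gate(residual, cov, threshold=5.99):
    """Accept an association when the statistic falls inside the
    chi-square gate (5.99 ~ 95% point of chi^2 with 2 dof)."""
    return mahalanobis2(residual, cov) < threshold
```

In a multiple-hypothesis tracker this statistic feeds the likelihood ratios scoring report-to-track association, which is the role the extended metric plays for the non-Gaussian PDFs above.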
NASA Astrophysics Data System (ADS)
Krüger, Thomas
2006-05-01
The possibility of teleportation is surely the most interesting consequence of quantum non-separability. So far, however, teleportation schemes have been formulated using state vectors and considering individual entities only. In the present article, the feasibility of teleportation is examined on the basis of the rigorous ensemble interpretation of quantum mechanics (not to be confused with a mere treatment of noisy EPR pairs), leading to results which are unexpected from the usual point of view.
Identification and feedback control in structures with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Wang, Y.
1992-01-01
In this lecture we give fundamental well-posedness results for a variational formulation of a class of damped second order partial differential equations with unbounded input or control coefficients. Included as special cases in this class are structures with piezoceramic actuators. We consider approximation techniques leading to computational methods in the context of both parameter estimation and feedback control problems for these systems. Rigorous convergence results for parameter estimates and feedback gains are discussed.
Mathematical modeling of fluxgate magnetic gradiometers
NASA Astrophysics Data System (ADS)
Milovzorov, D. G.; Yasoveev, V. Kh.
2017-07-01
Issues of designing fluxgate magnetic gradiometers are considered. The areas of application of fluxgate magnetic gradiometers are determined. The structure and layout of a two-component fluxgate magnetic gradiometer are presented. It is assumed that the fluxgates are strictly coaxial in the gradiometer body. Elements of the classical approach to the mathematical modeling of the spatial arrangement of solids are considered. The bases of the gradiometer body and their transformations during spatial displacement of the gradiometer are given. The problems of mathematical modeling of gradiometers are formulated, basic mathematical models of a two-component fluxgate gradiometer are developed, and the mathematical models are analyzed. A computer experiment was performed. Difference signals from the gradiometer fluxgates for the vertical and horizontal position of the gradiometer body are shown graphically as functions of the magnitude and direction of the geomagnetic field strength vector.
Experiments with Corn To Demonstrate Plant Growth and Development.
ERIC Educational Resources Information Center
Haldeman, Janice H.; Gray, Margarit S.
2000-01-01
Explores using corn seeds to demonstrate plant growth and development. This experiment allows students to formulate hypotheses, observe and record information, and practice mathematics. Presents background information, materials, procedures, and observations. (SAH)
Mathematical modeling of tomographic scanning of cylindrically shaped test objects
NASA Astrophysics Data System (ADS)
Kapranov, B. I.; Vavilova, G. V.; Volchkova, A. V.; Kuznetsova, I. S.
2018-05-01
The paper formulates mathematical relationships that describe the length of the radiation absorption path in the test object for the first-generation tomographic scanning scheme. A cylindrically shaped test object containing an arbitrary number of standard circular irregularities is used to perform the mathematical modeling. The obtained mathematical relationships are corrected for the chemical composition and density of the test object material. Equations are derived to calculate the resulting attenuation of radiation from the cobalt-60 isotope when passing through the test object. An algorithm to calculate the radiation flux intensity is provided. The presented graphs describe the dependence of the change in the γ-quantum flux intensity on the change in the radiation source position and the scanning angle of the test object.
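The attenuation calculation described can be sketched with circle-chord geometry and the Beer-Lambert law. The dimensions and attenuation coefficients below are assumptions for illustration, not data from the paper:

```python
import math

def chord_length(radius, offset):
    """Length of a straight ray's chord through a circle of radius `radius`,
    where `offset` is the ray's perpendicular distance from the centre."""
    if abs(offset) >= radius:
        return 0.0
    return 2.0 * math.sqrt(radius ** 2 - offset ** 2)

def transmitted_intensity(i0, segments):
    """Beer-Lambert attenuation along a ray: `segments` is a list of
    (mu, length) pairs for each material the ray crosses."""
    return i0 * math.exp(-sum(mu * length for mu, length in segments))

# Illustrative numbers only: a 5 cm radius cylinder crossed off-centre,
# containing a 1 cm radius air-filled irregularity; mu values assumed.
L_obj = chord_length(5.0, 3.0)    # chord through the cylinder: 8.0 cm
L_flaw = chord_length(1.0, 0.5)   # chord through the irregularity
mu_matrix, mu_air = 0.42, 0.0     # linear attenuation coefficients, 1/cm
I = transmitted_intensity(1.0, [(mu_matrix, L_obj - L_flaw),
                                (mu_air, L_flaw)])
```

Sweeping the ray offset and the scanning angle over such chord sums is exactly what produces the intensity curves the abstract refers to.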
NASA Technical Reports Server (NTRS)
Hung, R. J.
1994-01-01
The generalized mathematical formulation of the sloshing dynamics of cryogenic superfluid helium II partially filling dewar containers, driven by the gravity-gradient and jitter accelerations associated with slew motion during normal spacecraft operation for scientific observation, is investigated. An example is given with the Advanced X-Ray Astrophysics Facility-Spectroscopy (AXAF-S), whose slew motion is responsible for the sloshing dynamics. The jitter accelerations include slew motion, spinning motion, atmospheric drag on the spacecraft, and spacecraft attitude motions arising from machinery vibrations, thruster firing, pointing control of the spacecraft, crew motion, etc. Explicit mathematical expressions covering these forces acting on the spacecraft fluid systems are derived. The numerical computation of the sloshing dynamics is based on non-inertial-frame, spacecraft-bound coordinates and solves time-dependent, three-dimensional formulations of partial differential equations subject to initial and boundary conditions. Explicit mathematical expressions for the boundary conditions covering the capillary-force effect on the liquid-vapor interface in microgravity environments are also derived. Formulations of the fluid-moment and angular-moment fluctuations in fluid profiles induced by the sloshing dynamics, together with the fluid stress and moment fluctuations exerted on the spacecraft dewar containers, have also been derived. Examples are given for cases applicable to AXAF-S sloshing dynamics associated with slew motion.
Dynamics of local grid manipulations for internal flow problems
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.; Snyder, Aaron; Choo, Yung K.
1991-01-01
The control point method of algebraic grid generation is briefly reviewed. The review proceeds from the general statement of the method in 2-D unencumbered by detailed mathematical formulation. The method is supported by an introspective discussion which provides the basis for confidence in the approach. The more complex 3-D formulation is then presented as a natural generalization. Application of the method is carried out through 2-D examples which demonstrate the technique.
War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?
Rzhetsky, Andrey; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276
Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.
Gomez, Christophe; Hartung, Niklas
2018-01-01
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
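The Poisson-process framework for emission events can be sketched by simulating random emission times via thinning. The intensity function below, standing in for an emission rate that grows with the primary tumour, is a hypothetical choice for illustration, not a model from the chapter:

```python
import math
import random

def emission_times(rate, horizon, rng):
    """Homogeneous Poisson process: event times on [0, horizon] with
    constant rate (expected events per unit time)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def thinned_emission_times(rate_fn, rate_max, horizon, rng):
    """Nonhomogeneous Poisson process via thinning; requires
    rate_fn(t) <= rate_max on [0, horizon]."""
    return [t for t in emission_times(rate_max, horizon, rng)
            if rng.random() < rate_fn(t) / rate_max]

rng = random.Random(0)
# Hypothetical emission intensity, saturating as the tumour grows.
rate = lambda t: 0.5 * (1.0 - math.exp(-0.3 * t))
times = thinned_emission_times(rate, 0.5, 20.0, rng)
```

Secondary emission, as in the chapter's extension, would amount to attaching a further thinned process to each emitted metastasis.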
NASA Astrophysics Data System (ADS)
Holmes, Mark H.
2006-10-01
To help students grasp the intimate connections that exist between mathematics and its applications in other disciplines a library of interactive learning modules was developed. This library covers the mathematical areas normally studied by undergraduate students and is used in science courses at all levels. Moreover, the library is designed not just to provide critical connections across disciplines but to also provide longitudinal subject reinforcement as students progress in their studies. In the process of developing the modules a complete editing and publishing system was constructed that is optimized for automated maintenance and upgradeability of materials. The result is a single integrated production system for web-based educational materials. Included in this is a rigorous assessment program, involving both internal and external evaluations of each module. As will be seen, the formative evaluation obtained during the development of the library resulted in the modules successfully bridging multiple disciplines and breaking down the disciplinary barriers commonly found in their math and non-math courses.
Survey of computer programs for prediction of crash response and of its experimental validation
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1976-01-01
The author seeks to critically assess the potential of the mathematical and hybrid simulators that predict the post-impact response of transportation vehicles. A strictly rigorous numerical analysis of a phenomenon as complex as a crash may leave much to be desired with regard to the fidelity of the mathematical simulation. Hybrid simulations, on the other hand, which exploit experimentally observed features of deformations, appear to hold considerable promise. MARC, ANSYS, NONSAP, DYCAST, ACTION, WHAM II, and KRASH are among the simulators examined for their capabilities in predicting the post-impact response of vehicles. A review of these simulators reveals that considerably more analysis capability is desirable than is currently available. NASA's crashworthiness testing program, in conjunction with similar programs of various other agencies, besides generating a large data base, will be equally useful in validating new mathematical concepts of nonlinear analysis and in successfully extending other techniques in crashworthiness.
The conceptual basis of mathematics in cardiology: (II). Calculus and differential equations.
Bates, Jason H T; Sobel, Burton E
2003-04-01
This is the second in a series of four articles developed for the readers of Coronary Artery Disease. Without language ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact, often neglects empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas. This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. 
Each article concludes with one or more examples illustrating application of the concepts covered to cardiovascular medicine and biology.
Bates, Jason H T; Sobel, Burton E
2003-05-01
This is the third in a series of four articles developed for the readers of Coronary Artery Disease. Without language ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact, often neglects empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas. This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. 
Each article concludes with one or more examples illustrating application of the concepts covered to cardiovascular medicine and biology.
The conceptual basis of mathematics in cardiology IV: statistics and model fitting.
Bates, Jason H T; Sobel, Burton E
2003-06-01
This is the fourth in a series of four articles developed for the readers of Coronary Artery Disease. Without language ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact, often neglects empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas. This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. 
Each article concludes with one or more examples illustrating application of the concepts covered to cardiovascular medicine and biology.
The conceptual basis of mathematics in cardiology: (I) algebra, functions and graphs.
Bates, Jason H T; Sobel, Burton E
2003-02-01
This is the first in a series of four articles developed for the readers of Coronary Artery Disease. Without language ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact, often neglects empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas. This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease, abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. 
Each article concludes with one or more examples illustrating application of the concepts covered to cardiovascular medicine and biology.
Adams, Peter; Goos, Merrilyn
2010-01-01
Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate program. Inspired by the National Research Council's BIO2010 report, a new interdisciplinary first-year course (SCIE1000) was created, incorporating mathematics and computer programming in the context of modern science. In this study, the perceptions of biological science students enrolled in SCIE1000 in 2008 and 2009 are measured. Analysis indicates that, as a result of taking SCIE1000, biological science students gained a positive appreciation of the importance of mathematics in their discipline. However, the data revealed that SCIE1000 did not contribute positively to gains in appreciation for computing and only slightly influenced students' motivation to enroll in upper-level quantitative-based courses. Further comparisons between 2008 and 2009 demonstrated the positive effect of using genuine, real-world contexts to enhance student perceptions toward the relevance of mathematics. The results support the recommendation from BIO2010 that mathematics should be introduced to biology students in first-year courses using real-world examples, while challenging the benefits of introducing programming in first-year courses. PMID:20810961
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
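The expected-value formulation mentioned above replaces the random data of the stochastic linear complementarity problem (LCP) by its mean and solves the resulting deterministic LCP. The sketch below uses projected Gauss-Seidel on illustrative 2x2 samples; it is not the authors' FIT market model, and the numbers are assumptions:

```python
import numpy as np

def lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for LCP(M, q): find x >= 0 with
    M x + q >= 0 and x . (M x + q) = 0 (M assumed positive definite)."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual of row i excluding the diagonal contribution
            r = q[i] + M[i] @ x - M[i, i] * x[i]
            x[i] = max(0.0, -r / M[i, i])
    return x

# Expected-value formulation: average the sampled (M, q) pairs, then
# solve the deterministic LCP once. Samples are illustrative only.
samples = [
    (np.array([[2.0, 0.5], [0.5, 2.0]]), np.array([-1.0, 1.0])),
    (np.array([[2.2, 0.3], [0.3, 1.8]]), np.array([-1.2, 0.8])),
]
M_bar = sum(M for M, _ in samples) / len(samples)
q_bar = sum(q for _, q in samples) / len(samples)
x = lcp_pgs(M_bar, q_bar)
w = M_bar @ x + q_bar   # complementary slack variable
```

The expected residual minimization alternative would instead minimize the averaged complementarity residual over the samples rather than averaging the data first.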
A comparison of Fick and Maxwell-Stefan diffusion formulations in PEMFC gas diffusion layers
NASA Astrophysics Data System (ADS)
Lindstrom, Michael; Wetton, Brian
2017-01-01
This paper explores the mathematical formulations of Fick and Maxwell-Stefan diffusion in the context of polymer electrolyte membrane fuel cell cathode gas diffusion layers. The simple Fick law with a diagonal diffusion matrix is an approximation of Maxwell-Stefan. Formulations of diffusion combined with mass-averaged Darcy flow are considered for three component gases. For this application, the formulations can be compared computationally in a simple, one dimensional setting. Despite the models' seemingly different structure, it is observed that the predictions of the formulations are very similar on the cathode when air is used as oxidant. The two formulations give quite different results when the Nitrogen in the air oxidant is replaced by helium (this is often done as a diagnostic for fuel cells designs). The two formulations also give quite different results for the anode with a dilute Hydrogen stream. These results give direction to when Maxwell-Stefan diffusion, which is more complicated to implement computationally in many codes, should be used in fuel cell simulations.
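The relation between the two formulations can be sketched by building the multicomponent Fick matrix from Maxwell-Stefan binary diffusivities for a three-component gas; the diagonal-Fick approximation then amounts to discarding the off-diagonal coupling. The composition and diffusivity values below are assumptions for illustration, not fitted fuel cell data:

```python
import numpy as np

def ms_fick_matrix(x, D):
    """Multicomponent Fick matrix from Maxwell-Stefan binary diffusivities.
    x: mole fractions (length n); D: symmetric n x n binary D_ij matrix.
    Returns the (n-1) x (n-1) matrix [D] = B^{-1}, species n eliminated."""
    n = len(x)
    B = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        B[i, i] = x[i] / D[i, n - 1] + sum(
            x[k] / D[i, k] for k in range(n) if k != i)
        for j in range(n - 1):
            if j != i:
                B[i, j] = -x[i] * (1.0 / D[i, j] - 1.0 / D[i, n - 1])
    return np.linalg.inv(B)

# Illustrative cathode-like ternary mixture (O2, H2O vapour, N2);
# binary diffusivities in cm^2/s are assumed values.
x = np.array([0.15, 0.15, 0.70])
D = np.array([[0.00, 0.28, 0.22],
              [0.28, 0.00, 0.26],
              [0.22, 0.26, 0.00]])
Dfick = ms_fick_matrix(x, D)
# Off-diagonal entries measure the interspecies coupling that the
# diagonal Fick approximation drops.
coupling = abs(Dfick[0, 1]) + abs(Dfick[1, 0])
```

With air, the binary diffusivities are similar and the coupling is small, which is consistent with the paper's observation that the formulations nearly agree; replacing nitrogen by helium spreads the diffusivities and enlarges it.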
Sala, Giovanni; Gobet, Fernand
2017-12-01
It has been proposed that playing chess enables children to improve their ability in mathematics. These claims have been recently evaluated in a meta-analysis (Sala & Gobet, 2016, Educational Research Review, 18, 46-57), which indicated a significant effect in favor of the groups playing chess. However, the meta-analysis also showed that most of the reviewed studies used a poor experimental design (in particular, they lacked an active control group). We ran two experiments that used a three-group design including both an active and a passive control group, with a focus on mathematical ability. In the first experiment (N = 233), a group of third and fourth graders was taught chess for 25 hours and tested on mathematical problem-solving tasks. Participants also filled in a questionnaire assessing their meta-cognitive ability for mathematics problems. The group playing chess was compared to an active control group (playing checkers) and a passive control group. The three groups showed no statistically significant difference in mathematical problem-solving or metacognitive abilities in the posttest. The second experiment (N = 52) broadly used the same design, but the Oriental game of Go replaced checkers in the active control group. While the chess-treated group and the passive control group slightly outperformed the active control group with mathematical problem solving, the differences were not statistically significant. No differences were found with respect to metacognitive ability. These results suggest that the effects (if any) of chess instruction, when rigorously tested, are modest and that such interventions should not replace the traditional curriculum in mathematics.
On modelling three-dimensional piezoelectric smart structures with boundary spectral element method
NASA Astrophysics Data System (ADS)
Zou, Fangxin; Aliabadi, M. H.
2017-05-01
The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to formulate the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expenses has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.
2010-10-18
August 2010 ... was building the right game – World of Warcraft has 30% women (according to womengamers.com). Conclusion: we don't really understand why... Report of the National Academies on Informal Learning – infancy to late adulthood: learn about the world and develop important skills for science... Education With Rigor and Vigor – excitement, interest, and motivation to learn about phenomena in the natural and physical world; generate
A Center of Excellence in the Mathematical Sciences - at Cornell University
1992-03-01
of my recent efforts go in two directions. 1. Cellular Automata. The Greenberg-Hastings model is a simple system that models the behavior of an... Greenberg-Hastings Model. We also obtained results concerning the crucial value for a threshold voter model. This resulted in the papers "Some Rigorous...Results for the Greenberg-Hastings Model" and "Fixation Results for Threshold Voter Systems." Together with Scot Adams, I wrote "An Application of the
Are computational models of any use to psychiatry?
Huys, Quentin J M; Moutoussis, Michael; Williams, Jonathan
2011-08-01
Mathematically rigorous descriptions of key hypotheses and theories are becoming more common in neuroscience and are beginning to be applied to psychiatry. In this article two fictional characters, Dr. Strong and Mr. Micawber, debate the use of such computational models (CMs) in psychiatry. We present four fundamental challenges to the use of CMs in psychiatry: (a) the applicability of mathematical approaches to core concepts in psychiatry such as subjective experiences, conflict and suffering; (b) whether psychiatry is mature enough to allow informative modelling; (c) whether theoretical techniques are powerful enough to approach psychiatric problems; and (d) the issue of communicating clinical concepts to theoreticians and vice versa. We argue that CMs have yet to influence psychiatric practice, but that they help psychiatric research in two fundamental ways: (a) to build better theories integrating psychiatry with neuroscience; and (b) to enforce explicit, global and efficient testing of hypotheses through more powerful analytical methods. CMs allow the complexity of a hypothesis to be rigorously weighed against the complexity of the data. The paper concludes with a discussion of the path ahead. It points to stumbling blocks, like the poor communication between theoretical and medical communities. But it also identifies areas in which the contributions of CMs will likely be pivotal, like an understanding of social influences in psychiatry, and of the co-morbidity structure of psychiatric diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.
A rigorous computational approach to linear response
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Galatolo, Stefano; Nisoli, Isaia; Niu, Xiaolong
2018-03-01
We present a general setting in which the formula describing the linear response of the physical measure of a perturbed system can be obtained. In this general setting we obtain an algorithm to rigorously compute the linear response. We apply our results to expanding circle maps. In particular, we present examples where we compute, up to a pre-specified error in the L∞ -norm, the response of expanding circle maps under stochastic and deterministic perturbations. Moreover, we present an example where we compute, up to a pre-specified error in the L 1-norm, the response of the intermittent family at the boundary; i.e. when the unperturbed system is the doubling map. This work was mainly conducted during a visit of SG to Loughborough University. WB and SG would like to thank The Leverhulme Trust for supporting mutual research visits through the Network Grant IN-2014-021. SG thanks the Department of Mathematical Sciences at Loughborough University for hospitality. WB thanks Dipartimento di Matematica, Universita di Pisa. The research of SG and IN is partially supported by EU Marie-Curie IRSES ‘Brazilian-European partnership in Dynamical Systems’ (FP7-PEOPLE-2012-IRSES 318999 BREUDS). IN was partially supported by CNPq and FAPERJ. IN would like to thank the Department of Mathematics at Uppsala University and the support of the KAW grant 2013.0315.
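The linear response formula referred to above can be stated in a standard textbook form for a smooth one-parameter family of expanding maps; this is a generic version under the usual spectral-gap assumption, not necessarily the exact operator-theoretic setting of the paper. Let $T_\delta$ be the perturbed maps with transfer operators $L_\delta$ and invariant densities $h_\delta$, so $L_\delta h_\delta = h_\delta$. Writing $\dot{L} = \frac{d}{d\delta} L_\delta \big|_{\delta=0}$, the response of the invariant density is

$$
\dot{h} \;:=\; \lim_{\delta \to 0} \frac{h_\delta - h_0}{\delta}
\;=\; (I - L_0)^{-1}\, \dot{L}\, h_0 ,
$$

where $(I - L_0)^{-1}$ is well defined on the subspace of zero-average densities, on which $L_0$ has spectral radius strictly less than one. The response of an observable $\varphi$ then follows as $\frac{d}{d\delta} \int \varphi\, h_\delta \, dm \big|_{\delta=0} = \int \varphi\, \dot{h}\, dm$. A rigorous computation amounts to bounding the discretization of $(I - L_0)^{-1}\dot{L} h_0$ with certified error in the chosen norm.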
Weiland, Christina
2016-11-01
Theory and empirical work suggest inclusion preschool improves the school readiness of young children with special needs, but only 2 studies of the model have used rigorous designs that could identify causality. The present study examined the impacts of the Boston Public prekindergarten program-which combined proven language, literacy, and mathematics curricula with coaching-on the language, literacy, mathematics, executive function, and emotional skills of young children with special needs (N = 242). Children with special needs benefitted from the program in all examined domains. Effects were on par with or surpassed those of their typically developing peers. Results are discussed in the context of their relevance for policy, practice, and theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mathematical models for Isoptera (Insecta) mound growth.
Buschini, M L T; Abuabara, M A P; Petrere, Miguel
2008-08-01
In this research we proposed two mathematical models for Isoptera mound growth derived from the Von Bertalanffy growth curve, one appropriate for Nasutitermes coxipoensis and a more general formulation. The mean height and the mean diameter of ten small colonies were measured each month for twelve months, from April 1995 to April 1996. From these data, the monthly volumes were calculated for each colony; the growth in height and in volume was then estimated and the proposed models were fitted.
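The Von Bertalanffy curve underlying both models can be sketched as follows. The parameter values are hypothetical, not the fitted values from the study, and the conical height-to-volume relation is only an assumed illustration of how a volume model can be derived from height and diameter measurements.

```python
import numpy as np

def von_bertalanffy(t, h_inf, k, t0=0.0):
    """Von Bertalanffy growth curve: size approaches the asymptote h_inf at rate k."""
    return h_inf * (1.0 - np.exp(-k * (t - t0)))

# hypothetical parameters for a termite mound (not the paper's estimates)
h_inf, k = 60.0, 0.25            # asymptotic height (cm), growth rate (1/month)
months = np.arange(0, 13)        # one year of monthly observations
height = von_bertalanffy(months, h_inf, k)

# an assumed volume model: treat the mound as roughly conical, with the
# basal diameter scaling with height, so volume grows like height cubed
diameter = 0.8 * height
volume = np.pi / 12.0 * diameter**2 * height   # cone: (1/3)*pi*(d/2)^2*h
```

The curve is increasing and concave, which is the qualitative behavior the mound-height data are expected to show.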
Direct integration of the inverse Radon equation for X-ray computed tomography.
Libin, E E; Chakhlov, S V; Trinca, D
2016-11-22
A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional X-ray tomography is formulated. This approach does not use the Fourier transform, which makes it possible to create practical computing algorithms with a more reliable mathematical substantiation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics.
One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
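A minimal sketch of the nonlocal force model that lets peridynamics bridge molecular dynamics and classical elasticity: a 1D bond-based operator sums pairwise bond strains over a horizon delta and, for smooth deformations, approaches the classical second derivative. The grid size, horizon, and discrete calibration of the micromodulus below are illustrative assumptions, not the project's formulation.

```python
import numpy as np

n = 400
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
m = 5                                # horizon measured in grid spacings
delta = m * dx                       # peridynamic horizon
# micromodulus calibrated so the discrete operator is exact for quadratic
# displacement fields (the continuum bond-based value is c = 2/delta**2)
c = 2.0 / (dx**2 * m * (m + 1))

def peridynamic_operator(u):
    """Bond-based nonlocal operator: sum of bond strains over the horizon."""
    Lu = np.zeros_like(u)
    for s in range(1, m + 1):        # neighbors at +/- s*dx (periodic domain)
        xi = s * dx
        Lu += c * (np.roll(u, s) + np.roll(u, -s) - 2.0 * u) / xi * dx
    return Lu

u = np.sin(2.0 * np.pi * x)          # smooth displacement field
Lu = peridynamic_operator(u)
u_xx = -(2.0 * np.pi) ** 2 * u       # classical (local) elasticity limit
```

Because the operator involves no spatial derivatives, it remains defined for discontinuous displacement fields, which is exactly the property the project exploits for fracture.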
NASA Astrophysics Data System (ADS)
Šprlák, M.; Han, S.-C.; Featherstone, W. E.
2017-12-01
Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. We also analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling using currently available computational resources up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
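The principle of forward modelling a potential directly from volumetric density and geometry can be illustrated on the simplest closed-form case: a uniform sphere, whose directly integrated potential must match GM/r outside the body. The grid resolution and the Moon-like radius and density below are assumed values, and this toy check is far simpler than the spherical harmonic machinery of the paper.

```python
import numpy as np

G = 6.674e-11                        # gravitational constant (SI)
R, rho = 1.7374e6, 2560.0            # Moon-like radius (m), assumed density (kg/m^3)
M = 4.0 / 3.0 * np.pi * R**3 * rho   # total mass of the uniform sphere

def potential_direct(r_obs, n_r=200, n_mu=200):
    """Potential of a uniform sphere at distance r_obs by direct volume
    integration; the problem is axisymmetric about the observer axis, so
    the azimuthal integral contributes a factor 2*pi."""
    rp = (np.arange(n_r) + 0.5) * R / n_r             # radial midpoints
    mu = -1.0 + (np.arange(n_mu) + 0.5) * 2.0 / n_mu  # cos(theta) midpoints
    drp, dmu = R / n_r, 2.0 / n_mu
    RP, MU = np.meshgrid(rp, mu, indexing="ij")
    dist = np.sqrt(r_obs**2 + RP**2 - 2.0 * r_obs * RP * MU)
    return 2.0 * np.pi * G * rho * np.sum(RP**2 / dist) * drp * dmu

r_obs = 2.0 * R
V_numeric = potential_direct(r_obs)
V_exact = G * M / r_obs              # closed form for a uniform sphere
```

The same direct-integration idea, generalized to laterally and radially variable density and expanded in spherical harmonics, is what the paper's rigorous forward modelling carries out at degree and order 2519.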
A three-dimensional meso-macroscopic model for Li-Ion intercalation batteries
Allu, S.; Kalnaus, S.; Simunovic, S.; ...
2016-06-09
Through this study, we present a three-dimensional computational formulation for the electrode-electrolyte-electrode system of Li-Ion batteries. The physical consistency between electrical, thermal and chemical equations is enforced at each time increment by driving the residual of the resulting coupled system of nonlinear equations to zero. The formulation utilizes a rigorous volume averaging approach typical of multiphase formulations used in other fields and recently extended to modeling of supercapacitors [1]. Unlike existing battery modeling methods which use segregated solution of conservation equations and idealized geometries, our unified approach can model arbitrary battery and electrode configurations. The consistency of the multi-physics solution also allows for consideration of a wide array of initial conditions and load cases. The formulation accounts for spatio-temporal variations of material and state properties such as electrode/void volume fractions and anisotropic conductivities. The governing differential equations are discretized using the finite element method and solved using a nonlinearly consistent approach that provides robust stability and convergence. The new formulation was validated for standard Li-ion cells and compared against experiments. Finally, its scope and ability to capture spatio-temporal variations of potential and lithium distribution is demonstrated on a prototypical three-dimensional electrode problem.
Comparative Bioavailability of Sulindac in Capsule and Tablet Formulations
Reid, Joel M.; Mandrekar, Sumithra J.; Carlson, Elsa C.; Harmsen, W. Scott; Green, Erin M.; McGovern, Renee M.; Szabo, Eva; Ames, Matthew M.; Boring, Daniel; Limburg, Paul J.
2008-01-01
The cyclooxygenase-2 (COX-2) enzyme appears to be an important target for cancer chemoprevention. Given the recent emergence of potentially serious cardiovascular toxicity associated with selective COX-2 inhibitors, nonsteroidal antiinflammatory drugs (NSAIDs), which inhibit both COX-1 and COX-2, have received renewed attention as candidate chemoprevention agents. Sulindac has demonstrated consistent chemopreventive potential in preclinical studies, as well as in a limited number of clinical trials reported to date. For the current pharmacokinetic study, sulindac capsules were prepared to facilitate ample agent supplies for future intervention studies. Encapsulation of the parent compound (sulindac sulfoxide) can be readily accomplished, but the effects of alternate formulations on bioavailability have not been rigorously examined. In the present single-dose, two-period crossover trial, we conducted pharmacokinetic analyses of sulindac in capsule (test) versus tablet (reference) formulations. Overall, bioavailability appeared to be higher for the capsule compared to the tablet formulation, based on test-to-reference pharmacokinetic parameter ratios for the parent compound. However, additional analyses based on the sulfide and sulfone metabolites of sulindac with the same pharmacokinetic parameters indicated similar chemopreventive exposures between the capsule and tablet formulations. These data support the use of sulindac capsules, which can be readily prepared with matching placebos, in future blinded chemoprevention trials. PMID:18349286
Crossing Over…Markov Meets Mendel
Mneimneh, Saad
2012-01-01
Chromosomal crossover is a biological mechanism to combine parental traits. It is perhaps the first mechanism ever taught in any introductory biology class. The formulation of crossover, and resulting recombination, came about 100 years after Mendel's famous experiments. To a great extent, this formulation is consistent with the basic genetic findings of Mendel. More importantly, it provides a mathematical insight for his two laws (and corrects them). From a mathematical perspective, and while it retains similarities, genetic recombination guarantees diversity so that we do not rapidly converge to the same being. It is this diversity that made the study of biology possible. In particular, the problem of genetic mapping and linkage—one of the first efforts towards a computational approach to biology—relies heavily on the mathematical foundation of crossover and recombination. Nevertheless, as students we often overlook the mathematics of these phenomena. Emphasizing the mathematical aspect of Mendel's laws through crossover and recombination will prepare the students to make an early realization that biology, in addition to being experimental, IS a computational science. This can serve as a first step towards a broader curricular transformation in teaching biological sciences. I will show that a simple and modern treatment of Mendel's laws using a Markov chain will make this step possible, and it will only require basic college-level probability and calculus. My personal teaching experience confirms that students WANT to know Markov chains because they hear about them from bioinformaticists all the time. This entire exposition is based on three homework problems that I designed for a course in computational biology. A typical reader is, therefore, an instructional staff member or a student in a computational field (e.g., computer science, mathematics, statistics, computational biology, bioinformatics). 
However, other students may easily follow by omitting the mathematically more elaborate parts. I kept those as separate sections in the exposition. PMID:22629235
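The Markov-chain treatment of crossover described above can be made concrete. With crossovers between two loci modeled as a Poisson process without interference, a gamete is recombinant exactly when the crossover count between the loci is odd, which yields Haldane's map function. The map distance below is an arbitrary example, not taken from the exposition's homework problems.

```python
import numpy as np

rng = np.random.default_rng(0)

def recombination_fraction(d, n_gametes=200_000):
    """Simulate crossovers between two loci d Morgans apart as a Poisson
    process (no interference); a gamete is recombinant iff the number of
    crossovers between the loci is odd."""
    crossovers = rng.poisson(d, size=n_gametes)
    return np.mean(crossovers % 2 == 1)

def haldane(d):
    """Haldane's map function r(d) = (1 - exp(-2d))/2, the theoretical
    recombination fraction under the Poisson/Markov model."""
    return 0.5 * (1.0 - np.exp(-2.0 * d))

d = 0.3                      # 30 centimorgans
r_sim = recombination_fraction(d)
r_theory = haldane(d)
```

Note that r saturates at 1/2 for distant loci, the mathematical expression of Mendel's law of independent assortment as a limiting case.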
NASA Astrophysics Data System (ADS)
Michalski, Krzysztof A.; Lin, Hung-I.
2018-01-01
Second-order asymptotic formulas for the electromagnetic fields of a horizontal electric dipole over an imperfectly conducting half-space are derived using the modified saddle-point method. Application examples are presented for ordinary and plasmonic media, and the accuracy of the new formulation is assessed by comparisons with two alternative state-of-the-art theories and with the rigorous results of numerical integration.
New bounds and estimates for porous media with a rigid perfectly plastic matrix
NASA Astrophysics Data System (ADS)
Bilger, Nicolas; Auslender, François; Bornert, Michel; Masson, Renaud
We derive new rigorous bounds and self-consistent estimates for the effective yield surface of porous media with a rigid perfectly plastic matrix and a microstructure similar to Hashin's composite spheres assemblage. These results arise from a homogenisation technique that combines a pattern-based modelling for linear composite materials and a variational formulation for nonlinear media. To cite this article: N. Bilger et al., C. R. Mecanique 330 (2002) 127-132.
Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix
NASA Astrophysics Data System (ADS)
Pastor, Franck; Pastor, Joseph; Kondo, Djimedo
2012-03-01
Recent theoretical studies in the literature address the hollow sphere or (confocal) spheroid problem with an orthotropic Hill-type matrix. They were developed in the framework of the kinematical approach of limit analysis using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been readily extended to the orthotropic case. Instead of the non-trivial extension of the existing kinematic codes, a new mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code that is better conditioned and notably faster than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results with the analytical upper bounds of Benzerga and Besson (2001) and of Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bound results for the hollow spheroid with the Hill matrix, which are compared to those of Monchiet et al. (2008).
Evaluation of candidate working fluid formulations for the electrothermal - chemical wind tunnel
NASA Technical Reports Server (NTRS)
Akyurtlu, Jale F.; Akyurtlu, Ates
1991-01-01
Various candidate chemical formulations are evaluated as a precursor for the working fluid to be used in the electrothermal hypersonic test facility under study at the NASA LaRC Hypersonic Propulsion Branch, and the formulations that would most closely satisfy the goals set for the test facility are identified. Of the four tasks specified in the original proposal, the first two, the literature survey and the collection of kinetic data, are almost complete. The third task, development of a mathematical model of the ET wind tunnel operation, was started, concentrating on the expansion in the nozzle with finite-rate kinetics.
Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models
NASA Astrophysics Data System (ADS)
Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.
2012-04-01
The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. 
In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
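The convergence tests used to verify adjoint correctness typically compare an adjoint-computed gradient against finite differences of the forward model. A self-contained sketch for a toy linear time-stepping model follows; it illustrates the discrete-adjoint principle only, not the UFL/libadjoint machinery itself, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 5, 20
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # toy linear model operator
d = rng.standard_normal(n)                                # synthetic "observations"

def forward(u0):
    """Forward model: repeated application of the time-step operator."""
    u = u0.copy()
    for _ in range(steps):
        u = A @ u
    return u

def objective(u0):
    """Misfit functional J = 0.5 * ||u_N - d||^2."""
    r = forward(u0) - d
    return 0.5 * r @ r

def adjoint_gradient(u0):
    """Discrete adjoint: propagate the final residual backwards with the
    transpose of the step operator; the result is dJ/du0."""
    lam = forward(u0) - d
    for _ in range(steps):
        lam = A.T @ lam
    return lam

u0 = rng.standard_normal(n)
g_adj = adjoint_gradient(u0)

# finite-difference check, the standard correctness test for adjoint codes
eps = 1e-6
g_fd = np.array([
    (objective(u0 + eps * e) - objective(u0 - eps * e)) / (2 * eps)
    for e in np.eye(n)
])
```

In the automated-adjoint setting described above, the backward sweep is derived from the high-level specification rather than written by hand, but the same gradient test applies.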
NASA Astrophysics Data System (ADS)
Smits, Kathleen M.; Ngo, Viet V.; Cihan, Abdullah; Sakaki, Toshihiro; Illangasekare, Tissa H.
2012-12-01
Bare soil evaporation is a key process for water exchange between the land and the atmosphere and an important component of the water balance. However, there is no agreement on the best modeling methodology to determine evaporation under different atmospheric boundary conditions. There is also a lack of directly measured soil evaporation data against which these methods can be compared to establish the validity of their mathematical formulations. Thus, a need exists to systematically compare evaporation estimates from existing methods with experimental observations. The goal of this work is to test the different conceptual and mathematical formulations used to estimate evaporation from bare soils and to critically investigate the various formulations and surface boundary conditions. Such a comparison required the development of a numerical model that can incorporate these boundary conditions. For this model, we modified a previously developed theory that allows nonequilibrium liquid/gas phase change with gas phase vapor diffusion to better account for dry soil conditions. Precision data under well-controlled transient heat and wind boundary conditions were generated, and results from numerical simulations were compared with the experimental data. Results demonstrate that the approaches based on different boundary conditions varied in their ability to capture different stages of evaporation. All approaches have benefits and limitations, and no one approach can be deemed most appropriate for every scenario. Comparisons of different formulations of the surface boundary condition confirm the need for further research on heat and vapor transport processes in soil for better modeling accuracy.
Mathematics education practice in Nigeria: Its impact in a post-colonial era
NASA Astrophysics Data System (ADS)
Enime, Noble O. J.
This qualitative study examined the impact of the Nigerian pre-independence era Mathematics Education Practice on Post-Colonial era Mathematics Education Practice. The study was designed to gather qualitative information on pre-independence and Post-Colonial era Mathematics Education Practice in Nigeria (Western, Eastern and the Middle Belt) using interview questions. Data were collected through face-to-face interviews. Over ten themes emerged from these qualitative interview questions when the data were analyzed. Some of the themes emerging from the sub-questions were as follows: "Mentally mature to understand the mathematics" and "Not mentally mature to understand the mathematics", "mentally mature to understand the mathematics, with the help of others" and "Not Sure". Others were "Contented with Age of Enrollment" and "Not contented with Age of Enrollment". From the questions on type of school attended and liking of mathematics, the following themes emerged: "Attended UPE (Universal Primary Education) and understood Mathematics", and "Attended Standard Education System and did not like Mathematics". Connections between the liking of mathematics and the respondents' eventual careers were seen through the following themes: "Biological Sciences based career and enjoyed High School Mathematics Experience", "Economics and Business Education based career and enjoyed High School Mathematics Experience" and five more themes. The themes "Very helpful" and "Unhelpful" emerged from the question concerning parents and students' homework. Some of the themes emerging from the interviews were as follows: "Awesome because of method of Instruction of Mathematics", "Awesome because Mathematics was easy", "Awesome because I had a Good Teacher or Teachers" and four other themes, "Like and dislike of Mathematics", "Heavy work load", "Subject matter content" and "Rigor of instruction".
More emerging themes are presented in this document in Chapter IV. The emerging themes suggested that the influence Nigerian Colonial era Mathematics Education Practice had on the independent Nigerian state is yet to completely diminish. The following are among the conclusions drawn from the study. Students' enrollment age appeared to influence performance in mathematics at all levels of school. Also, students who had encouraging parents were likely to enjoy learning mathematics, while students who attended mission schools were likely to be successful in mathematics. Students whose parents were educated were likely to be successful in mathematics.
Dynamic Stochastic Control of Freeway Corridor Systems : Summary and Project Overview
DOT National Transportation Integrated Search
1978-12-01
Systematic methodological approaches to overall traffic management from both short-term (real-time) and long-term (planning) perspectives have been developed. The approach embodies formulation and solution of interrelated mathematical problems from o...
Richard P. Feynman and the Feynman Diagrams
Available in full text and on the Web. Documents: A Theorem and Its Application to Finite Tampers (DOE); Fermi-Thomas Theory (DOE Technical Report, April 28, 1947); Mathematical Formulation of the Quantum Theory
NASA Astrophysics Data System (ADS)
Stöckl, Stefan; Rotach, Mathias W.; Kljun, Natascha
2018-01-01
We discuss the results of Gibson and Sailor (Boundary-Layer Meteorol 145:399-406, 2012) who suggest several corrections to the mathematical formulation of the Lagrangian particle dispersion model of Rotach et al. (Q J R Meteorol Soc 122:367-389, 1996). While most of the suggested corrections had already been implemented in the 1990s, one suggested correction raises a valid point, but results in a violation of the well-mixed criterion. Here we improve their idea and test the impact on model results using a well-mixed test and a comparison with wind-tunnel experimental data. The new approach results in similar dispersion patterns as the original approach, while the approach suggested by Gibson and Sailor leads to erroneously reduced concentrations near the ground in convective and especially forced convective conditions.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L
2015-08-01
Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most or all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from the models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
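The original Ross-Macdonald formulation referred to above can be written as two coupled prevalence equations for hosts and vectors; a sketch with illustrative (not fitted) parameter values, integrated with forward Euler:

```python
import numpy as np

# classical Ross-Macdonald parameters (illustrative values only)
m, a, b, c = 10.0, 0.3, 0.5, 0.5    # mosquito/human ratio, biting rate, transmission probs
r, g = 0.01, 0.1                    # human recovery rate, mosquito death rate

R0 = m * a**2 * b * c / (r * g)     # basic reproduction number (no latency term)

def simulate(x0=0.01, z0=0.0, dt=0.1, steps=20_000):
    """Forward-Euler integration of host prevalence x and vector prevalence z."""
    x, z = x0, z0
    for _ in range(steps):
        dx = m * a * b * z * (1 - x) - r * x      # human infection balance
        dz = a * c * x * (1 - z) - g * z          # mosquito infection balance
        x, z = x + dt * dx, z + dt * dz
    return x, z

x_eq, z_eq = simulate()               # settles at the endemic equilibrium when R0 > 1
```

The spatially explicit and life-cycle-resolved frameworks discussed in the abstract extend exactly this skeleton with additional compartments and movement terms.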
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage of decreasing the overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and the overall costs is taken into account. The proposed solutions are devised in pursuit of the following advantages: (i) the solution guarantees that mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease the overall experimental cost, thus making pooling an interesting option.
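A toy version of the single-objective cost formulation: all costs, the detection threshold, and the positive-pool rate below are assumed values for illustration, not the paper's model.

```python
import math

# illustrative cost/detection parameters (assumed, not the paper's)
N = 960                 # patients to screen
depth = 4000            # sequencing depth per pool
min_mut_reads = 20      # mutant reads required for a confident call
cost_pool = 500.0       # sequencing cost per pool
cost_sanger = 30.0      # Sanger validation cost per patient
p_positive = 0.02       # assumed fraction of pools carrying a mutation

def experiment_cost(k):
    """Total expected cost with pools of k patients. A heterozygous carrier
    contributes 1 of the 2k alleles in a pool, so the expected mutant read
    count is depth/(2k); designs below the calling threshold are infeasible."""
    if depth / (2 * k) < min_mut_reads:
        return math.inf
    n_pools = math.ceil(N / k)
    sequencing = n_pools * cost_pool
    validation = p_positive * n_pools * k * cost_sanger  # resolve positive pools
    return sequencing + validation

best_k = min(range(1, 201), key=experiment_cost)
best_cost = experiment_cost(best_k)
```

The detection constraint caps the pool size, and the optimizer then trades sequencing cost against validation cost, the same structure as the paper's single-objective problem before replication and multi-objective extensions.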
Solving ordinary differential equations by electrical analogy: a multidisciplinary teaching tool
NASA Astrophysics Data System (ADS)
Sanchez Perez, J. F.; Conesa, M.; Alhama, I.
2016-11-01
Ordinary differential equations are the mathematical formulation of a great variety of problems in science and engineering, and frequently two different problems are equivalent from a mathematical point of view when they are formulated with the same equations. Students learn how to solve these equations (at least some types of them) using protocols and strict algorithms of mathematical calculation, without thinking about the meaning of the equation. The aim of this work is for students to learn to design network models or circuits: with simple knowledge of circuits, students can establish the formal association between electric circuits and differential equations, which allows them to connect knowledge from two disciplines and promotes the use of this interdisciplinary approach to address complex problems. They thereby learn to use a multidisciplinary tool with which to solve these kinds of equations, even in the first course of an engineering degree, whatever the order, grade or type of non-linearity. This methodology has been implemented in numerous final degree projects in engineering and science, e.g., chemical engineering, building engineering, industrial engineering, mechanical engineering, architecture, etc. Applications are presented to illustrate the subject of this manuscript.
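The electrical analogy rests on the formal identity between the mechanical oscillator m x'' + c x' + k x = 0 and the series RLC circuit L q'' + R q' + q/C = 0 under the mapping L = m, R = c, 1/C = k. A sketch (the parameter values and the simple semi-implicit Euler integrator are illustrative choices, not the authors' network-simulation workflow):

```python
import numpy as np

def simulate_second_order(a2, a1, a0, y0, v0, dt=1e-3, steps=5000):
    """Integrate a2*y'' + a1*y' + a0*y = 0 with semi-implicit Euler."""
    y, v = y0, v0
    out = [y]
    for _ in range(steps):
        v += dt * (-(a1 * v + a0 * y) / a2)
        y += dt * v
        out.append(y)
    return np.array(out)

# mechanical oscillator: m x'' + c x' + k x = 0
m, c, k = 2.0, 0.4, 50.0
x = simulate_second_order(m, c, k, y0=1.0, v0=0.0)

# electrical analogue: L q'' + R q' + q/C = 0 with L = m, R = c, 1/C = k
L_, R_, C_ = m, c, 1.0 / k
q = simulate_second_order(L_, R_, 1.0 / C_, y0=1.0, v0=0.0)
```

Displacement and charge trace identical damped oscillations, which is the equivalence the students exploit: once a problem is expressed as a circuit, a circuit simulator solves the original equation regardless of its physical origin.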
Patel, Niketkumar; Jain, Shashank; Madan, Parshotam; Lin, Senshang
2016-11-01
The objective of this investigation is to develop a mathematical equation to understand the impact of variables and establish statistical control over transdermal iontophoretic delivery of tacrine hydrochloride. In addition, the possibility of using conductivity measurements as a tool for predicting the ionic mobility of the participating ions for iontophoretic delivery was explored. A central composite design was applied to study the effect of the independent variables (current strength, buffer molarity, and drug concentration) on the iontophoretic tacrine permeation flux. Molar conductivity was determined to evaluate the electro-migration of tacrine ions with application of Kohlrausch's law. The developed mathematical equation not only reveals drug concentration as the most significant variable regulating tacrine permeation, followed by current strength and buffer molarity, but also can optimize tacrine permeation with respect to combinations of the independent variables to achieve the desired therapeutic plasma concentration of tacrine in the treatment of Alzheimer's disease. Moreover, relatively higher mobility of sodium and chloride ions was observed compared to the estimated tacrine ion mobility. This investigation utilizes the design-of-experiments approach and extends the primary understanding of the impact of electronic and formulation variables on tacrine permeation for the formulation development of iontophoretic tacrine delivery.
NASA Astrophysics Data System (ADS)
Korayem, M. H.; Shafei, A. M.
2013-02-01
The goal of this paper is to describe the application of the Gibbs-Appell (G-A) formulation and the assumed modes method to the mathematical modeling of N-viscoelastic-link manipulators. The paper's focus is on obtaining accurate and complete equations of motion that encompass the most relevant structural properties of lightweight elastic manipulators. In this study, two important damping mechanisms, namely, the structural viscoelasticity (Kelvin-Voigt) effect (as internal damping) and the viscous air effect (as external damping), have been considered. To include the effects of shear and rotational inertia, the assumptions of Timoshenko beam theory (TBT) have been applied. Gravity, torsion, and longitudinal elongation effects have also been included in the formulations. To systematically derive the equations of motion and improve the computational efficiency, a recursive algorithm has been used in the modeling of the system. In this algorithm, all the mathematical operations are carried out by only 3×3 and 3×1 matrices. Finally, a computational simulation for a manipulator with two elastic links is performed in order to verify the proposed method.
NASA Astrophysics Data System (ADS)
Reis, T.; Phillips, T. N.
2008-12-01
In this reply to the comment by Lallemand and Luo, we defend our assertion that the alternative approach for the solution of the dispersion relation for a generalized lattice Boltzmann dispersion equation [T. Reis and T. N. Phillips, Phys. Rev. E 77, 026702 (2008)] is mathematically transparent, elegant, and easily justified. Furthermore, the rigorous perturbation analysis used by Reis and Phillips does not require the reciprocals of the relaxation parameters to be small.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
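The combinatorial idea above can be illustrated in miniature: solve a non-negativity-constrained least squares problem independently for each observation vector, then group columns whose passive (nonzero) sets coincide, since one pseudoinverse can serve every column in such a group. This is only a sketch of the grouping principle, not the authors' implementation; the problem data below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic problem: A (m x n) against many observation vectors B (m x k).
rng = np.random.default_rng(0)
A = rng.random((20, 4))
X_true = np.abs(rng.standard_normal((4, 50)))   # known non-negative solution
B = A @ X_true

# Baseline: independent NNLS per observation vector.
X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

# Combinatorial speedup: columns sharing the same passive set could reuse
# one factorization, which is where the large-scale savings come from.
passive_sets = {}
for j in range(X.shape[1]):
    key = tuple(X[:, j] > 1e-12)
    passive_sets.setdefault(key, []).append(j)
print(len(passive_sets), "distinct passive sets for", X.shape[1], "columns")
```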
Endobiogeny: a global approach to systems biology (part 1 of 2).
Lapraz, Jean-Claude; Hedayat, Kamyar M
2013-01-01
Endobiogeny is a global systems approach to human biology that may offer an advance in clinical medicine. It is based on the scientific principles of rigor and experimentation and on the humanistic principles of individualization of care and alleviation of suffering with minimization of harm. Endobiogeny is neither a movement away from modern science nor an uncritical embrace of pre-rational methods of inquiry, but a synthesis of quantitative and qualitative relationships reflected in a systems approach to life and based on new mathematical paradigms of pattern recognition.
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
Solving the multi-frequency electromagnetic inverse source problem by the Fourier method
NASA Astrophysics Data System (ADS)
Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi
2018-07-01
This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
Understanding the Lomb–Scargle Periodogram
NASA Astrophysics Data System (ADS)
VanderPlas, Jacob T.
2018-05-01
The Lomb–Scargle periodogram is a well-known algorithm for detecting and characterizing periodic signals in unevenly sampled data. This paper presents a conceptual introduction to the Lomb–Scargle periodogram and important practical considerations for its use. Rather than a rigorous mathematical treatment, the goal of this paper is to build intuition about what assumptions are implicit in the use of the Lomb–Scargle periodogram and related estimators of periodicity, so as to motivate important practical considerations required in its proper application and interpretation.
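A minimal demonstration of the periodogram on unevenly sampled data can be written with `scipy.signal.lombscargle` (the frequencies, sampling times, and signal below are illustrative, not drawn from the paper):

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled, noiseless sinusoid at angular frequency 1.5 rad/s.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 400))   # irregular sample times
y = np.sin(1.5 * t)

freqs = np.linspace(0.1, 5.0, 1000)         # angular frequencies to scan
power = lombscargle(t, y, freqs)

peak = freqs[np.argmax(power)]
print(f"peak near {peak:.2f} rad/s")        # expect a peak near 1.5
```

Because the sampling is irregular, a plain FFT is not applicable here, which is exactly the situation the Lomb–Scargle estimator addresses.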
Selection theory of free dendritic growth in a potential flow.
von Kurnatowski, Martin; Grillenbeck, Thomas; Kassner, Klaus
2013-04-01
The Kruskal-Segur approach to selection theory in diffusion-limited or Laplacian growth is extended via combination with the Zauderer decomposition scheme. This way nonlinear bulk equations become tractable. To demonstrate the method, we apply it to two-dimensional crystal growth in a potential flow. We omit the simplifying approximations used in a preliminary calculation for the same system [Fischaleck, Kassner, Europhys. Lett. 81, 54004 (2008)], thus exhibiting the capability of the method to extend mathematical rigor to more complex problems than hitherto accessible.
Jia, Jianhua; Liu, Zi; Xiao, Xuan; Liu, Bingxiang; Chou, Kuo-Chen
2016-06-07
Carbonylation is a posttranslational modification (PTM or PTLM), where a carbonyl group is added to lysine (K), proline (P), arginine (R), and threonine (T) residue of a protein molecule. Carbonylation plays an important role in orchestrating various biological processes but it is also associated with many diseases such as diabetes, chronic lung disease, Parkinson's disease, Alzheimer's disease, chronic renal failure, and sepsis. Therefore, from the angles of both basic research and drug development, we are facing a challenging problem: for an uncharacterized protein sequence containing many residues of K, P, R, or T, which ones can be carbonylated, and which ones cannot? To address this problem, we have developed a predictor called iCar-PseCp by incorporating the sequence-coupled information into the general pseudo amino acid composition, and balancing out skewed training dataset by Monte Carlo sampling to expand positive subset. Rigorous target cross-validations on a same set of carbonylation-known proteins indicated that the new predictor remarkably outperformed its existing counterparts. For the convenience of most experimental scientists, a user-friendly web-server for iCar-PseCp has been established at http://www.jci-bioinfo.cn/iCar-PseCp, by which users can easily obtain their desired results without the need to go through the complicated mathematical equations involved. It has not escaped our notice that the formulation and approach presented here can also be used to analyze many other problems in computational proteomics.
Modeling of chemical inhibition from amyloid protein aggregation kinetics.
Vázquez, José Antonio
2014-02-27
The aggregation of amyloid proteins causes several human neuropathologies. In some cases, e.g. fibrillar deposits of insulin, the problems arise during the production and purification of the protein and in pump devices or injectable preparations for diabetics. Experimental kinetics and adequate modelling of chemical inhibition of amyloid aggregation are of practical importance for studying viable processing, formulation and storage, as well as for predicting and optimizing the best conditions to reduce the effect of protein nucleation. In this manuscript, experimental data on insulin, Aβ42 amyloid protein and apomyoglobin fibrillation from the recent bibliography were selected to evaluate the capability of a bivariate sigmoid equation to model them. The mathematical functions (logistic combined with Weibull equation) were used in reparameterized form, and the effect of inhibitor concentration on the kinetic parameters of the logistic equation was clearly defined and explained. The data surfaces were accurately described by the proposed model, and the presented analysis characterized the inhibitory influence of several chemicals on protein aggregation. Discrimination between true and apparent inhibitors was also confirmed by the bivariate equation. EGCG for insulin (at pH = 7.4/T = 37°C) and taiwaniaflavone for Aβ42 were the compounds studied that showed the greatest inhibition capacity. An accurate, simple and effective model to investigate the inhibition of amyloid protein aggregation by chemicals has been developed. The equation could be useful for the clear quantification of the inhibitory potential of chemicals and rigorous comparison among them.
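The reparameterized logistic commonly used for aggregation kinetics can be fitted to a single kinetic curve as follows. This is a sketch of the univariate building block only (the paper's full bivariate logistic-Weibull surface is not reproduced); the data and parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Reparameterized logistic: X(t) = Xm / (1 + exp(2 + 4*vm/Xm * (lam - t)))
# Xm: plateau, vm: maximum aggregation rate, lam: lag time.
def logistic(t, Xm, vm, lam):
    return Xm / (1.0 + np.exp(2.0 + 4.0 * vm / Xm * (lam - t)))

t = np.linspace(0.0, 48.0, 60)              # hours (synthetic grid)
true = (1.0, 0.15, 10.0)
rng = np.random.default_rng(2)
y = logistic(t, *true) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(logistic, t, y, p0=(0.8, 0.1, 5.0))
print(popt)  # ~ (1.0, 0.15, 10.0)
```

In the bivariate setting, the inhibitor concentration would in turn modulate Xm, vm and lam through a Weibull-type dependence.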
Bytchenkoff, Dimitri; Rodts, Stéphane
2011-01-01
The form of two-dimensional (2D) NMR relaxation spectra, which allow one to study interstitial fluid dynamics in diffusive systems by correlating spin-lattice (T(1)) and spin-spin (T(2)) relaxation times, has given rise to numerous conjectures. Herein we derive analytically a number of fundamental structural properties of the spectra: within the eigenmode formalism, we establish relationships between the signs and intensities of the diagonal and cross-peaks in spectra obtained by various 1D and 2D NMR relaxation techniques, reveal symmetries of the spectra, and uncover interdependences between them. Applying perturbation theory, we investigate more specifically the practically important case of a porous system whose sets of T(1)- and T(2)-eigenmodes and eigentimes are similar to each other. Furthermore, we provide a comparative analysis of the application of the mathematically more rigorous eigenmode formalism and the rather more phenomenological first-order two-site exchange model to diffusive systems. Finally, we put the results that we could formulate analytically to the test by comparing them with computer simulations of 2D porous model systems. The structural properties should, in general, provide useful clues for the assignment and analysis of relaxation spectra. The most striking of them, the presence of negative peaks, underlines an urgent need for improvement of the current 2D Inverse Laplace Transform (ILT) algorithm used to calculate relaxation spectra from raw NMR data. Copyright © 2010 Elsevier Inc. All rights reserved.
PediaFlow™ Maglev Ventricular Assist Device: A Prescriptive Design Approach.
Antaki, James F; Ricci, Michael R; Verkaik, Josiah E; Snyder, Shaun T; Maul, Timothy M; Kim, Jeongho; Paden, Dave B; Kameneva, Marina V; Paden, Bradley E; Wearden, Peter D; Borovetz, Harvey S
2010-03-01
This report describes a multi-disciplinary program to develop a pediatric blood pump, motivated by the critical need to treat infants and young children with congenital and acquired heart diseases. The unique challenges of this patient population require a device with exceptional biocompatibility, miniaturized for implantation for up to 6 months. This program implemented a collaborative, prescriptive design process whereby mathematical models of the governing physics were coupled with numerical optimization to achieve a favorable compromise among several competing design objectives. Computational simulations of fluid dynamics, electromagnetics, and rotordynamics were performed in two stages: first using reduced-order formulations to permit rapid optimization of the key design parameters, followed by rigorous CFD and FEA simulations for calibration, validation, and detailed optimization. Over 20 design configurations were initially considered, leading to three pump topologies, judged on the basis of a multi-component analysis including criteria for anatomic fit, performance, biocompatibility, reliability, and manufacturability. This led to fabrication of a mixed-flow magnetically levitated pump, the PF3, having a displaced volume of 16.6 cc, approximating the size of a AA battery, and producing a flow capacity of 0.3-1.5 L/min. Initial in vivo evaluation demonstrated excellent hemocompatibility after 72 days of implantation in an ovine model. In summary, the combination of prescriptive and heuristic design principles has proven effective in developing a miniature magnetically levitated blood pump with excellent performance and biocompatibility, suitable for integration into a chronic circulatory support system for infants and young children, aiming for a clinical trial within 3 years.
Lin, Hao; Deng, En-Ze; Ding, Hui; Chen, Wei; Chou, Kuo-Chen
2014-01-01
The σ54 promoters are unique in prokaryotic genomes and are responsible for transcribing carbon- and nitrogen-related genes. With the avalanche of genome sequences generated in the postgenomic age, it is highly desirable to develop automated methods for rapidly and effectively identifying σ54 promoters. Here, a predictor called 'iPro54-PseKNC' was developed. In the predictor, the samples of DNA sequences were formulated by a novel feature vector called 'pseudo k-tuple nucleotide composition', which was further optimized by the incremental feature selection procedure. The performance of iPro54-PseKNC was examined by rigorous jackknife cross-validation tests on a stringent benchmark data set. As a user-friendly web server, iPro54-PseKNC is freely accessible at http://lin.uestc.edu.cn/server/iPro54-PseKNC. For the convenience of the vast majority of experimental scientists, a step-by-step protocol guide is provided on how to use the web server to obtain the desired results without having to follow the complicated mathematics presented in this paper just for its integrity. Meanwhile, we also discovered through an in-depth statistical analysis that the distribution of distances between the transcription start sites and the translation initiation sites is governed by the gamma distribution, which may provide a fundamental physical principle for studying σ54 promoters. PMID:25361964
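The gamma-distribution observation can be checked on data with a standard maximum-likelihood fit. The distances below are synthetic, generated for illustration only (they are not the paper's TSS-TIS data); a real analysis would fit the measured distances the same way.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for TSS-TIS distances, drawn from a gamma law.
rng = np.random.default_rng(5)
distances = rng.gamma(shape=2.0, scale=30.0, size=2000)

# Maximum-likelihood gamma fit with the location fixed at zero.
shape, loc, scale = stats.gamma.fit(distances, floc=0)
print(round(shape, 1), round(scale, 1))   # should recover roughly 2.0 and 30
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov against the fitted distribution) would complete the argument on real data.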
The flow of plasma in the solar terrestrial environment
NASA Technical Reports Server (NTRS)
Schunk, R. W.
1992-01-01
The overall goal of our NASA Theory Program is to study the coupling, time delays, and feedback mechanisms between the various regions of the solar-terrestrial system in a self-consistent, quantitative manner. To accomplish this goal, it will eventually be necessary to have time-dependent macroscopic models of the different regions of the solar-terrestrial system and we are continually working toward this goal. However, our immediate emphasis is on the near-earth plasma environment, including the ionosphere, the plasmasphere, and the polar wind. In this area, we have developed unique global models that allow us to study the coupling between the different regions. Another important aspect of our NASA Theory Program concerns the effect that localized structure has on the macroscopic flow in the ionosphere, plasmasphere, thermosphere, and polar wind. The localized structure can be created by structured magnetospheric inputs (i.e., structured plasma convection, particle precipitation or Birkeland current patterns) or time variations in these inputs due to storms and substorms. Also, some of the plasma flows that we predict with our macroscopic models may be unstable, and another one of our goals is to examine the stability of our predicted flows. Because time-dependent, three-dimensional numerical models of the solar-terrestrial environment generally require extensive computer resources, they are usually based on relatively simple mathematical formulations (i.e., simple MHD or hydrodynamic formulation). Therefore, another long-range goal of our NASA Theory Program is to study the conditions under which various mathematical formulations can be applied to specific solar-terrestrial regions. This may involve a detailed comparison of kinetic, semikinetic, and hydrodynamic predictions for a given polar wind scenario or it may involve the comparison of a small-scale particle-in-cell (PIC) simulation of a plasma expansion event with a similar macroscopic expansion event. 
The different mathematical formulations have different strengths and weaknesses and a careful comparison of model predictions for similar geophysical situations will provide insight into when the various models can be used with confidence.
NASA Astrophysics Data System (ADS)
Semenov, Alexander; Babikov, Dmitri
2013-11-01
We formulated a mixed quantum/classical theory for rotationally and vibrationally inelastic scattering processes in the diatomic molecule + atom system. Two versions of the theory are presented, first in the space-fixed and second in the body-fixed reference frame. The first version is easy to derive and the resultant equations of motion are transparent, but the state-to-state transition matrix is complex-valued and dense. Such calculations may be computationally demanding for heavier molecules and/or higher temperatures, when the number of accessible channels becomes large. In contrast, the second version of the theory requires some tedious derivations and the final equations of motion are rather complicated (not particularly intuitive). However, the state-to-state transitions are driven by real-valued sparse matrices of much smaller size. Thus, this formulation is the method of choice from the computational point of view, while the space-fixed formulation can serve as a test of the body-fixed equations of motion and of the code. Rigorous numerical tests were carried out for a model system to ensure that all equations, matrices, and computer codes in both formulations are correct.
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
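The occlusion-bridging role of the predictive Kalman filter can be sketched in one dimension: during occluded frames the filter coasts on its prediction, and measurement updates resume afterwards. This is a minimal constant-velocity illustration with invented noise values, not the paper's coupled variational estimator.

```python
import numpy as np

# Constant-velocity Kalman filter tracking 1-D position; state = [pos, vel].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-4 * np.eye(2)                     # process noise (assumed)
R = np.array([[1e-2]])                   # measurement noise (assumed)

x = np.array([0.0, 1.0])                 # initial state: pos 0, unit velocity
P = np.eye(2)

for k in range(1, 21):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Frames 8-12 are "occluded": skip the update and coast on the prediction.
    if 8 <= k <= 12:
        continue
    z = np.array([float(k)])             # noiseless measurement: pos = k
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x)  # position close to 20, velocity close to 1
```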
Mathematical Modeling of Resonant Processes in Confined Geometry of Atomic and Atom-Ion Traps
NASA Astrophysics Data System (ADS)
Melezhik, Vladimir S.
2018-02-01
We discuss computational aspects of the developed mathematical models for resonant processes in confined geometry of atomic and atom-ion traps. The main attention is paid to formulation in the nondirect product discrete-variable representation (npDVR) of the multichannel scattering problem with nonseparable angular part in confining traps as the boundary-value problem. Computational efficiency of this approach is demonstrated in application to atomic and atom-ion confinement-induced resonances we predicted recently.
1983-12-01
grade levels. Chapter 2 discusses the formulation of the model. It highlights the theoretical and mathematical concepts pertinent to the model...assignments. This is to ensure the professional development of the soldier and is in accordance with the "whole man" concept. ...The objective function can be mathematically expressed in terms of the assignment quantities a_ijk and b_ijk. This objective function assesses the same penalty to each vacancy of each type
NASA Astrophysics Data System (ADS)
Neustupa, Tomáš
2017-07-01
The paper presents a mathematical model of steady two-dimensional viscous incompressible flow through a radial blade machine. The corresponding boundary value problem is studied in the rotating frame. We provide the classical and weak formulations of the problem. Using a special form of the so-called "artificial" or "natural" boundary condition on the outflow, we prove the existence of a weak solution for an arbitrarily large inflow.
Modeling Flow in Porous Media with Double Porosity/Permeability.
NASA Astrophysics Data System (ADS)
Seyed Joodat, S. H.; Nakshatrala, K. B.; Ballarini, R.
2016-12-01
Although several continuum models are available to study the flow of fluids in porous media with two pore-networks [1], they lack a firm theoretical basis. In this poster presentation, we will present a mathematical model with a firm thermodynamic basis and a robust computational framework for studying flow in porous media that exhibit double porosity/permeability. The mathematical model will be derived by appealing to the maximization of rate of dissipation hypothesis, which ensures that the model is in accord with the second law of thermodynamics. We will also present important properties that the solutions under the model satisfy, along with an analytical solution procedure based on the Green's function method. On the computational front, a stabilized mixed finite element formulation will be derived based on the variational multi-scale formalism. The equal-order interpolation, which is computationally the most convenient, is stable under this formulation. The performance of this formulation will be demonstrated using patch tests, numerical convergence study, and representative problems. It will be shown that the pressure and velocity profiles under the double porosity/permeability model are qualitatively and quantitatively different from the corresponding ones under the classical Darcy equations. Finally, it will be illustrated that the surface pore-structure is not sufficient in characterizing the flow through a complex porous medium, which makes a case for using advanced characterization tools like micro-CT. References [1] G. I. Barenblatt, I. P. Zheltov, and I. N. Kochina, "Basic concepts in the theory of seepage of homogeneous liquids in fissured rocks [strata]," Journal of Applied Mathematics and Mechanics, vol. 24, pp. 1286-1303, 1960.
Vibrational relaxation in hypersonic flow fields
NASA Technical Reports Server (NTRS)
Meador, Willard E.; Miner, Gilda A.; Heinbockel, John H.
1993-01-01
Mathematical formulations of vibrational relaxation are derived from first principles for application to fluid dynamic computations of hypersonic flow fields. Relaxation within and immediately behind shock waves is shown to be substantially faster than that described in current numerical codes. The result should be a significant reduction in nonequilibrium radiation overshoot in shock layers and in radiative heating of hypersonic vehicles; these results are precisely the trends needed to bring theoretical predictions more in line with flight data. Errors in existing formulations are identified and qualitative comparisons are made.
From Loss of Memory to Poisson.
ERIC Educational Resources Information Center
Johnson, Bruce R.
1983-01-01
A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)
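The intuition behind the postulates can be made concrete with a small simulation: memoryless (exponential) interarrival times with rate lam imply that the number of events in an interval of length T is Poisson with mean lam*T. The parameter values below are arbitrary choices for illustration.

```python
import random

# "Loss of memory" to Poisson: exponential waits => Poisson counts.
random.seed(3)
lam, T, trials = 2.0, 5.0, 20000

def count_events(lam, T):
    """Count arrivals in [0, T] with exponential interarrival times."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)   # memoryless waiting time
        if t > T:
            return n
        n += 1

counts = [count_events(lam, T) for _ in range(trials)]
mean = sum(counts) / trials
print(round(mean, 2))  # should be near lam*T = 10
```

Comparing the empirical histogram of `counts` with the Poisson(10) probability mass function makes a natural classroom exercise.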
Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.
Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D
2017-02-01
We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi-population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many-particle limit in the presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
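A minimal follow-the-leader sketch conveys the particle scheme: each vehicle carries a Lagrangian marker w_i and drives at w_i minus a pressure of the discrete density ahead of it. The pressure law, parameters, and initial data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Follow-the-leader discretization in the spirit of the ARZ model:
#   rho_i = ell / (x_{i+1} - x_i),   v_i = w_i - p(rho_i),   p(rho) = rho**gamma,
# with the leader driving freely at its marker speed.
ell, gamma, dt = 0.1, 2.0, 0.01
x = np.linspace(0.0, 5.0, 21)          # initial positions of 21 vehicles
w = np.full(x.size, 2.0)               # uniform Lagrangian marker

def step(x, w):
    rho = ell / np.diff(x)             # discrete density, one value per gap
    v = np.empty_like(x)
    v[:-1] = w[:-1] - rho ** gamma     # followers
    v[-1] = w[-1]                      # free leader
    return x + dt * v                  # explicit Euler step

for _ in range(100):                   # integrate up to t = 1
    x = step(x, w)

print(x[-1] - x[0])                    # platoon has stretched out
```

The uniform BV bound in the theorem is what guarantees that such discrete velocities stay controlled as the number of particles grows.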
Performance evaluation of a bigrating as a beam splitter.
Hwang, R B; Peng, S T
1997-04-01
The design of a bigrating for use as a beam splitter is presented. It is based on a rigorous formulation of plane-wave scattering by a bigrating composed of two individual gratings oriented in different directions. Numerical computations are carried out to optimize the design of a bigrating that performs 1 x 4 beam splitting in two dimensions and to examine its fabrication and operation tolerances. It is found that a bigrating can be designed to perform two functions: beam splitting and polarization purification.
The effects of anisotropy on the nonlinear behavior of bridged cracks in long strips
NASA Technical Reports Server (NTRS)
Ballarini, R.; Luo, H. A.
1994-01-01
A model which can be used to predict the two-dimensional nonlinear behavior of bridged cracks in orthotropic strips is presented. The results, obtained using a singular integral equation formulation that rigorously incorporates the anisotropy, show that, although the effects of anisotropy are significant, the nondimensional quantities employed by Cox and Marshall can generate nearly universal results (R-curves, for example) for different levels of relative anisotropy. The role of composite constituent properties in the behavior of bridged cracks is clarified.
2008-10-30
rigorous Poisson-based methods generally apply a Lee-Richards molecular surface [9]. This surface is considered the de facto description for continuum...definition and calculation of the Born radii. To evaluate the Born radii, two approximations are invoked. The first is the Coulomb field approximation (CFA...energy term, and depending on the particular GB formulation, higher-order non-Coulomb correction terms may be added to the Born radii to account for the
Ray-optical theory of broadband partially coherent emission
NASA Astrophysics Data System (ADS)
Epstein, Ariel; Tessler, Nir; Einziger, Pinchas D.
2013-04-01
We present a rigorous formulation of the effects of spectral broadening on emission of partially coherent source ensembles embedded in multilayered formations with arbitrarily shaped interfaces, provided geometrical optics is valid. The resulting ray-optical theory, applicable to a variety of optical systems from terahertz lenses to photovoltaic cells, quantifies the fundamental interplay between bandwidth and layer dimensions, and sheds light on common practices in optical analysis of statistical fields, e.g., disregarding multiple reflections or neglecting interference cross terms.
Investigating adaptive reasoning and strategic competence: Difference male and female
NASA Astrophysics Data System (ADS)
Syukriani, Andi; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
Adaptive reasoning and strategic competence are two of the five components of mathematical proficiency that describe students' success in learning mathematics. Gender contributes to the problem-solving process. This qualitative study investigated the adaptive reasoning and strategic competence of a male student and a female student as they solved a mathematical problem. Both were in the eleventh grade of a high school in Makassar, had similar mathematics ability, and were in the highest ability category. The researcher, as the main instrument, used secondary instruments to select appropriate subjects and to investigate the aspects of adaptive reasoning and strategic competence. A test of mathematical ability was used to locate subjects with similar mathematical ability. An unstructured guideline interview was used to investigate aspects of adaptive reasoning and strategic competence as the subjects completed the mathematical problem task. The task involves several concepts required for the correct solution, such as the circle, triangle, trigonometry, and Pythagorean theorem concepts. The results showed that the male and female subjects differed in applying a strategy to understand, formulate and represent the problem situation. Furthermore, they also differed in explaining the strategy used and the relationship between concepts and problem situations.
Nonlinear analysis of a model of vascular tumour growth and treatment
NASA Astrophysics Data System (ADS)
Tao, Youshan; Yoshida, Norio; Guo, Qian
2004-05-01
We consider a mathematical model describing the evolution of a vascular tumour in response to traditional chemotherapy. The model is a free boundary problem for a system of partial differential equations governing intratumoural drug concentration, cancer cell density and blood vessel density. Tumour cells consist of two types of competitive cells that have different proliferation rates and different sensitivities to drugs. The balance between cell proliferation and death generates a velocity field that drives tumour cell movement. The tumour surface is a moving boundary. The purpose of this paper is to establish a rigorous mathematical analysis of the model for studying the dynamics of intratumoural blood vessels and to explore drug dosage for the successful treatment of a tumour. We also study numerically the competitive effects of the two cell types on tumour growth.
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
Locating an imaging radar in Canada for identifying spaceborne objects
NASA Astrophysics Data System (ADS)
Schick, William G.
1992-12-01
This research presents a study of the maximal coverage p-median facility location problem as applied to the location of an imaging radar in Canada for imaging spaceborne objects. The classical mathematical formulation of the maximal coverage p-median problem is converted into network-flow with side-constraint formulations that are developed using a scaled-down version of the imaging radar location problem. Two types of network-flow with side-constraint formulations are developed: a network using side constraints that simulates the gains in a generalized network, and a network resembling a multi-commodity flow problem that uses side constraints to force flow along identical arcs. These small formulations are expanded to encompass a case study using 12 candidate radar sites and 48 satellites divided into three states. SAS/OR PROC NETFLOW was used to solve the network-flow with side-constraint formulations. The case study showed potential for both formulations, although the simulated-gains formulation encountered singular-matrix computational difficulties as a result of the very organized nature of its side-constraint matrix. The multi-commodity flow formulation, when combined with equi-distribution-of-flow constraints, provided solutions for various values of p, the number of facilities to be selected.
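For instances far smaller than the 12-site case study, the underlying p-median objective can be shown by brute force: choose p sites minimizing the total client-to-nearest-site distance. The distance matrix below is hypothetical, purely to illustrate the objective the network formulations encode.

```python
import itertools

# Toy p-median instance: dist[client][site] (hypothetical distances).
dist = [
    [0, 4, 7, 9],
    [4, 0, 3, 6],
    [7, 3, 0, 2],
    [9, 6, 2, 0],
]
p = 2

# Enumerate all p-subsets of sites; each client is served by its nearest one.
best = min(
    (sum(min(row[s] for s in sites) for row in dist), sites)
    for sites in itertools.combinations(range(4), p)
)
print(best)  # → (5, (0, 2))
```

At realistic scale this enumeration explodes combinatorially, which is why the thesis resorts to network-flow formulations with side constraints.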
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
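The greedy incremental idea for discrete spherical codes can be sketched as follows: from a candidate pool on the unit sphere, repeatedly pick the point maximizing the minimal angular separation to those already chosen, measuring angles through |dot| because dMRI directions are antipodally symmetric. This is a simplified illustration of the incremental DSC heuristic, not the paper's MILP solver; the pool is random.

```python
import numpy as np

# Random candidate pool of unit vectors.
rng = np.random.default_rng(4)
pool = rng.standard_normal((500, 3))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)

def greedy_sc(pool, k):
    """Greedily select k directions maximizing the min angular separation."""
    chosen = [0]                                  # arbitrary starting point
    for _ in range(k - 1):
        dots = np.abs(pool @ pool[chosen].T)      # antipodal symmetry: |dot|
        min_angle = np.arccos(np.clip(dots, -1.0, 1.0)).min(axis=1)
        min_angle[chosen] = -1.0                  # never re-pick a point
        chosen.append(int(np.argmax(min_angle)))
    return pool[chosen]

dirs = greedy_sc(pool, 30)
sep = np.arccos(np.clip(np.abs(dirs @ dirs.T), -1.0, 1.0))
np.fill_diagonal(sep, np.pi)                      # ignore self-separation
print(f"min separation: {np.degrees(sep.min()):.1f} deg")
```

The exact DSC formulation replaces this greedy loop with a mixed-integer program that certifies the optimal minimal separation.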
Solymosi, Tamás; Ötvös, Zsolt; Angi, Réka; Ordasi, Betti; Jordán, Tamás; Semsey, Sándor; Molnár, László; Ránky, Soma; Filipcsei, Genovéva; Heltovics, Gábor; Glavinas, Hristos
2017-10-30
Particle size reduction of drug crystals in the presence of surfactants (often called "top-down" production methods) is a standard approach used in the pharmaceutical industry to improve the bioavailability of poorly soluble drugs. Based on the mathematical model used to predict the fraction of dose absorbed, this formulation approach is successful when dissolution rate is the main rate-limiting factor of oral absorption. In cases where compound solubility is also a major limiting factor, this approach might not result in an adequate improvement in bioavailability. Abiraterone acetate is poorly water soluble, which is believed to be responsible for its very low bioavailability in the fasted state and its significant positive food effect. In this work, we successfully used in vitro dissolution, solubility, and permeability measurements in biorelevant media to describe the dissolution characteristics of different abiraterone acetate formulations. Mathematical modeling of the fraction of dose absorbed indicated that reducing the particle size of the drug cannot be expected to result in significant improvement in bioavailability in the fasted state. In the fed state, the same formulation approach can result in nearly complete absorption of the dose, thereby further increasing the food effect. Using a "bottom-up" formulation method we improved both the dissolution rate and the apparent solubility of the compound. In beagle dog studies, this resulted in a >10-fold increase in bioavailability in the fasted state compared to the marketed drug and the elimination of the food effect. Calculated values of the fraction of dose absorbed were in agreement with the observed relative bioavailability values in beagle dogs. Copyright © 2017 Elsevier B.V. All rights reserved.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as mere mathematical devices for analyzing repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in the theoretical formulations of personalized medicine, because they have not only parameters that represent average patients but also parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
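As an illustration of the kind of computation the article tabulates, the sketch below evaluates approximate power of the two one-sided tests (TOST) procedure for average bioequivalence on the log scale, using a plain normal approximation for a 2x2 crossover. This is a deliberate simplification: the article's tables rest on random-effects linear models for multi-period designs, and the function name, parameterization, and formula here are illustrative assumptions only:

```python
import numpy as np
from scipy import stats

def abe_power(cv_w, gmr, n_total, alpha=0.05, theta=1.25):
    """Approximate TOST power for average bioequivalence in a 2x2 crossover.
    cv_w: within-subject coefficient of variation; gmr: true geometric mean
    ratio; n_total: total number of subjects. Normal approximation only."""
    sigma_w = np.sqrt(np.log(cv_w**2 + 1.0))  # within-subject SD, log scale
    se = sigma_w * np.sqrt(2.0 / n_total)     # SE of the log-GMR estimate
    z = stats.norm.ppf(1.0 - alpha)
    upper = (np.log(theta) - np.log(gmr)) / se - z
    lower = (np.log(1.0 / theta) - np.log(gmr)) / se + z
    return max(0.0, stats.norm.cdf(upper) - stats.norm.cdf(lower))
```

Power increases with the number of subjects and decreases as the true ratio moves away from unity, the qualitative behavior any sample-size table encodes.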
Design Tools for Cost-Effective Implementation of Planetary Protection Requirements
NASA Technical Reports Server (NTRS)
Hamlin, Louise; Belz, Andrea; Evans, Michael; Kastner, Jason; Satter, Celeste; Spry, Andy
2006-01-01
Since the Viking missions to Mars in the 1970s, accounting for the costs associated with planetary protection implementation has not been done systematically during early project formulation phases, leading to unanticipated costs during subsequent implementation phases of flight projects. The simultaneous development of more stringent planetary protection requirements, resulting from new knowledge about the limits of life on Earth, together with current plans to conduct life-detection experiments on a number of different solar system target bodies motivates a systematic approach to integrating planetary protection requirements and mission design. A current development effort at NASA's Jet Propulsion Laboratory is aimed at integrating planetary protection requirements more fully into the early phases of mission architecture formulation and at developing tools to more rigorously predict associated cost and schedule impacts of architecture options chosen to meet planetary protection requirements.
Heat transfer evaluation in a plasma core reactor
NASA Technical Reports Server (NTRS)
Smith, D. E.; Smith, T. M.; Stoenescu, M. L.
1976-01-01
Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. A two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, due to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients, and flow pattern. The results present temperature, heat flux, density, and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in Cartesian and spherical coordinates, are applicable to very general heat transfer problems.
Systematic and reliable multiscale modelling of lithium batteries
NASA Astrophysics Data System (ADS)
Atalay, Selcuk; Schmuck, Markus
2017-11-01
Motivated by the increasing interest in lithium batteries as energy storage devices (e.g. for cars, bicycles, public transport, social robot companions, mobile phones, and tablets), we investigate three basic cells: (i) a single intercalation host; (ii) a periodic arrangement of intercalation hosts; and (iii) a rigorously upscaled formulation of (ii), as initiated in prior work. By systematically accounting for Li transport and interfacial reactions in (i)-(iii), we compute the associated characteristic current-voltage curves and power densities. Finally, we discuss the influence of the arrangement of the intercalation particles. Our findings are expected to improve the understanding of how microscopic properties affect the battery behaviour observed on the macroscale; at the same time, the upscaled formulation (iii) serves as an efficient computational tool. This work has been supported by EPSRC, UK, through the Grant No. EP/P011713/1.
Operational formulation of time reversal in quantum theory
NASA Astrophysics Data System (ADS)
Oreshkov, Ognyan; Cerf, Nicolas J.
2015-10-01
The symmetry of quantum theory under time reversal has long been a subject of controversy because the transition probabilities given by Born’s rule do not apply backward in time. Here, we resolve this problem within a rigorous operational probabilistic framework. We argue that reconciling time reversal with the probabilistic rules of the theory requires a notion of operation that permits realizations through both pre- and post-selection. We develop the generalized formulation of quantum theory that stems from this approach and give a precise definition of time-reversal symmetry, emphasizing a previously overlooked distinction between states and effects. We prove an analogue of Wigner’s theorem, which characterizes all allowed symmetry transformations in this operationally time-symmetric quantum theory. Remarkably, we find larger classes of symmetry transformations than previously assumed, suggesting a possible direction in the search for extensions of known physics.
NASA Astrophysics Data System (ADS)
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection-diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scale. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainty giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high-velocity region (mobile zone), while convective effects are neglected in a low-velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by first-order kinetics. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection-diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is assumed to be pseudo-periodic and can be represented through a (spatially) periodic unit cell.
Consistently with the two-region model working hypotheses, we subdivide the pore space into two volumes, selected according to the features of the local micro-scale velocity field. Assuming separation of scales, the mathematical development associated with the averaging method in the two volumes leads to a generalized two-equation model. The final (upscaled) formulation includes the standard first-order mass exchange term together with additional terms, which we discuss. Our developments identify the assumptions that are usually implicitly embedded in the adoption of a two-region mobile-mobile model. All macro-scale properties introduced in this model can be determined explicitly from the pore-scale geometry and hydrodynamics through the solution of a set of closure equations. We pursue here an unsteady closure of the problem, leading to the occurrence of nonlocal (in time) terms in the upscaled system of equations. We provide the solution of the closure problems for a simple application, documenting the time-dependent and asymptotic behavior of the system.
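The first-order mass exchange term at the heart of these two-region models can be illustrated in its simplest (batch, no-flow) form, where an exchange flux proportional to the concentration difference redistributes mass between the two volumes until the regions equilibrate while total mass is conserved. The parameter values below are illustrative, not taken from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order mass exchange between two regions (batch case, no flow).
# theta_m, theta_im: volume fractions of mobile/immobile regions;
# alpha: first-order exchange rate coefficient [1/s]. Toy values.
theta_m, theta_im, alpha = 0.3, 0.1, 0.05

def exchange(t, c):
    cm, cim = c
    q = alpha * (cm - cim)            # first-order exchange flux
    return [-q / theta_m, q / theta_im]

# Start with all solute in the mobile region and integrate to equilibrium.
sol = solve_ivp(exchange, (0.0, 200.0), [1.0, 0.0], rtol=1e-9, atol=1e-12)
```

At large times both concentrations approach the mass-weighted average theta_m/(theta_m + theta_im) = 0.75, and the total mass theta_m*c_m + theta_im*c_im stays constant, which is the basic consistency check for any such exchange closure.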
Formulation and Testing of a Novel River Nitrification Model
The nitrification process in many river water quality models has been approximated by a simple first-order dependency on the water column ammonia concentration, while the benthic contribution has routinely been neglected. In this study a mathematical framework was developed for se...
ERIC Educational Resources Information Center
O'Brien, Tom
2011-01-01
This article features a mathematical game called "Mystery Person." The author describes how the Mystery Person game was tried with first-graders [age 6]. The Mystery games involve the generation of key questions, the coordination of information--often very complex information--and the formulation of consequences based on this…
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
ERIC Educational Resources Information Center
Hillen, Amy F.; Watanabe, Tad
2013-01-01
Recent documents suggest that all students, even young children, should have opportunities to engage in reasoning and proof (CCSSI 2010; NCTM 2000, 2006, 2009). One mathematical practice that is central to reasoning and proof is making conjectures (CCSSI 2010; NCTM 2000; Stylianides 2008). In the elementary grades, "formulating conjectures…
Mathematical literacy skills of students in terms of gender differences
NASA Astrophysics Data System (ADS)
Lailiyah, Siti
2017-08-01
Good mathematical literacy skills should help prospective teachers carry out their tasks and roles effectively. Mathematical literacy focuses on students' ability to analyze, justify, and communicate ideas effectively, and to formulate, solve, and interpret mathematical problems in a variety of forms and situations. The purpose of this study is to describe the mathematical literacy skills of prospective teachers in terms of gender differences. This research used a qualitative approach with a case study design. The subjects were two male and two female students of a mathematics education program who had taken part in a Community Service Program (CSP) on literacy. Data were collected through think-aloud methods and interviews. The four prospective teachers were asked to complete a mathematical literacy test and were video-recorded while solving it; they were required to say aloud what they were thinking as they worked. After the students obtained their solutions, the researchers grouped the answers and the think-aloud results. The data were then grouped and analyzed according to indicators of mathematical literacy skills. The male students performed well on all six indicators of mathematical literacy. The female students performed well on the first, second, third, fourth, and sixth indicators, but only adequately on the fifth.
Modeling and optimization of dough recipe for breadsticks
NASA Astrophysics Data System (ADS)
Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.
2018-05-01
During the work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks, applying mathematical methods of experiment planning. The main factors chosen were the dosages of flaxseed flour and grape seed oil; the output parameters were the swelling factor of the products and their strength. Optimization of the formulation composition of the dough for breadsticks was carried out by experimental-statistical methods. As a result of the experiment, mathematical models were constructed in the form of regression equations adequately describing the process under study. The statistical processing of the experimental data was carried out using the Student, Cochran, and Fisher criteria (with a confidence probability of 0.95). A mathematical interpretation of the regression equations was given. Optimization of the dough formulation was carried out by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: a flaxseed flour dosage of 14.22% and a grape seed oil dosage of 7.8%, ensuring products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a production method for the breadsticks "Idea" were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).
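The response-surface workflow described above (fit regression equations to planned experiments, then locate the optimum) can be sketched with synthetic data. The coefficients, dosage ranges, and "measurements" below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Synthetic demonstration: fit a second-order regression surface
# z = b0 + b1*x + b2*y + b11*x^2 + b22*y^2 + b12*x*y by least squares,
# then locate its stationary point (the optimum of the fitted model).
rng = np.random.default_rng(1)
x = rng.uniform(5, 20, 60)    # e.g. flaxseed flour dosage, % (made up)
y = rng.uniform(2, 12, 60)    # e.g. grape seed oil dosage, % (made up)
z = 10 + 1.2 * x + 0.9 * y - 0.05 * x**2 - 0.06 * y**2 \
    + rng.normal(0, 0.05, 60)          # noisy synthetic response

X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X, z, rcond=None)[0]

# Stationary point: grad = 0  ->  [[2*b11, b12], [b12, 2*b22]] @ [x, y] = -[b1, b2]
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt, y_opt = np.linalg.solve(H, -np.array([b1, b2]))
```

For this synthetic surface the true optimum is at x = 12, y = 7.5, which the fitted model recovers closely; a constrained version of the same step is what the Lagrange multiplier method provides.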
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
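The 2-D thin plate spline basis used above for the realistic surface description is available off the shelf; the minimal sketch below (not the authors' code) fits a thin-plate-spline interpolant to scattered "electrode" samples, assuming SciPy's RBFInterpolator and an invented test function:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered 2-D sample points standing in for electrode positions on a
# flattened chart of the scalp, with a smooth synthetic potential.
rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, (64, 2))
values = np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])

# Thin plate spline interpolant (exact interpolation: smoothing = 0).
tps = RBFInterpolator(xy, values, kernel='thin_plate_spline')
```

With zero smoothing the spline reproduces the samples exactly and yields a twice-differentiable surface, which is what makes it usable for surface Laplacian estimation.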
Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant
NASA Astrophysics Data System (ADS)
Kuznetsova, Svetlana A.
2015-01-01
This paper presents the results of mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with heat-conducting walls of finite thickness, with a local heat source at the bottom of the region, under conditions of convective heat exchange with the environment. A mathematical model is formulated in dimensionless variables "stream function - vorticity - temperature" in a Cartesian coordinate system. The results show the distributions of hydrodynamic parameters and temperature obtained with different boundary conditions at the local heat source.
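One building block of the stream function-vorticity formulation is a Poisson solve for the stream function, laplacian(psi) = -omega, with psi = 0 on the walls. A minimal Jacobi-iteration sketch follows, with a toy uniform vorticity field and grid that are assumptions for illustration, not the paper's solver:

```python
import numpy as np

# Jacobi iteration for laplacian(psi) = -omega on a uniform n x n grid,
# psi = 0 on all boundaries (no-slip walls in the stream function sense).
n = 41
h = 1.0 / (n - 1)
omega = np.ones((n, n))   # prescribed vorticity field (toy data)
psi = np.zeros((n, n))    # stream function, initialized to zero

for _ in range(2000):
    # RHS is evaluated before assignment, so this is a true Jacobi sweep.
    psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                              + psi[1:-1, 2:] + psi[1:-1, :-2]
                              + h**2 * omega[1:-1, 1:-1])
```

In a full solver this step alternates with a vorticity transport update and a temperature equation; here only the kernel is shown.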
Carl Neumann versus Rudolf Clausius on the propagation of electrodynamic potentials
NASA Astrophysics Data System (ADS)
Archibald, Thomas
1986-09-01
In the late 1860's, German electromagnetic theorists employing W. Weber's velocity-dependent force law were forced to confront the issue of energy conservation. One attempt to formulate a conservation law for such forces was due to Carl Neumann, who introduced a model employing retarded potentials in 1868. Rudolf Clausius quickly pointed out certain problems with the physical interpretation of Neumann's mathematical formalism. The debate between the two men continued until the 1880's and illustrates the strictures facing mathematical approaches to physical problems during this prerelativistic, pre-Maxwellian period.
NASA Technical Reports Server (NTRS)
Fu, L. S. W.
1982-01-01
Developments in fracture mechanics and elastic wave theory enhance the understanding of many physical phenomena in a mathematical context. The available literature on material and fracture characterization by NDT, and on the related mathematical methods in mechanics that provide the fundamental principles for its interpretation and evaluation, is reviewed. Information on the energy release mechanism of defects and on the interaction of microstructures within the material is basic to the formulation of the mechanics problems that supply guidance for nondestructive evaluation (NDE).
Aerodynamic mathematical modeling - basic concepts
NASA Technical Reports Server (NTRS)
Tobak, M.; Schiff, L. B.
1981-01-01
The mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers is reviewed. Bryan's original formulation, linear aerodynamic indicial functions, and superposition are considered. These concepts are extended into the nonlinear regime. The nonlinear generalization yields a form for the aerodynamic response that can be built up from the responses to a limited number of well-defined characteristic motions, reproducible in principle either in wind tunnel experiments or flow field computations. A further generalization leads to a form accommodating the discontinuous and double-valued behavior characteristic of hysteresis in the steady-state aerodynamic response.
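The superposition build-up from indicial responses can be sketched as a discrete Duhamel sum: the response to an arbitrary motion is assembled from the step (indicial) response weighted by the motion's increments. A minimal linear-regime sketch, with the simple increment-based discretization as an assumption:

```python
import numpy as np

def duhamel_response(indicial, alpha):
    """Linear superposition (Duhamel) build-up of the response to an
    arbitrary motion alpha(t) from the sampled indicial response A(t).
    The initial value alpha[0] is treated as a step applied at t = 0."""
    dalpha = np.diff(alpha, prepend=0.0)   # motion increments per sample
    n = len(alpha)
    out = np.zeros(n)
    for i in range(n):
        # out[i] = sum over j <= i of A(t_i - t_j) * d_alpha_j
        out[i] = np.dot(indicial[i::-1], dalpha[:i + 1])
    return out
```

By construction the response to a unit step recovers the indicial function itself, and the scheme is linear in the input, the two properties that define the linear regime the nonlinear generalization relaxes.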
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Singhal, A. K.; Tam, L. T.
1984-01-01
The capability of simulating three-dimensional two-phase reactive flows with combustion in liquid-fueled rocket engines is demonstrated. This was accomplished by modifying an existing three-dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two-phase spray flow, evaporation, and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion, and two-phase flow interaction, together with the numerical solution procedure, the boundary conditions, and their treatment, are described.
An analytical approach to top predator interference on the dynamics of a food chain model
NASA Astrophysics Data System (ADS)
Senthamarai, R.; Vijayalakshmi, T.
2018-04-01
In this paper, a nonlinear mathematical model is proposed and analyzed to study the effect of top predator interference on the dynamics of a food chain. The model is formulated as a system of nonlinear ordinary differential equations in three dimensionless state variables: the prey population size x, the intermediate predator population size y, and the top predator population size z. The analytical results are compared with numerical simulations performed in MATLAB, and satisfactory agreement is observed.
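A generic tri-trophic chain of this kind can be sketched with Holling type II links and a crude interference term in the top predator's functional response; the equations and parameter values below are illustrative assumptions (Hastings-Powell-like), not necessarily the authors' model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless tri-trophic chain: prey x, intermediate predator y,
# top predator z; c adds a simple interference term for the top predator.
a1, b1, a2, b2, d1, d2, c = 5.0, 3.0, 0.1, 2.0, 0.4, 0.01, 0.5

def food_chain(t, s):
    x, y, z = s
    f1 = a1 * x / (1.0 + b1 * x)            # prey -> intermediate predator
    f2 = a2 * y / (1.0 + b2 * y + c * z)    # intermediate -> top, with interference
    return [x * (1.0 - x) - f1 * y,
            f1 * y - f2 * z - d1 * y,
            (f2 - d2) * z]

sol = solve_ivp(food_chain, (0.0, 100.0), [0.8, 0.2, 8.0],
                rtol=1e-8, atol=1e-10)
```

Because the exchange terms cancel in the sum of the three equations, total biomass obeys (x+y+z)' = x(1-x) - d1*y - d2*z, which keeps trajectories bounded and nonnegative; that is the property the assertions below check numerically.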