Sample records for unified mathematical model

  1. A Unified Mathematical Definition of Classical Information Retrieval.

    ERIC Educational Resources Information Center

    Dominich, Sandor

    2000-01-01

    Presents a unified mathematical definition for the classical models of information retrieval and identifies a mathematical structure behind relevance feedback. Highlights include vector information retrieval; probabilistic information retrieval; and similarity information retrieval. (Contains 118 references.) (Author/LRW)
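The classical models named in this record rank documents against a query by a scoring function. A minimal sketch of the vector model with cosine similarity, using made-up two-document data (the document names, term weights, and the `cosine` helper are illustrative, not taken from Dominich's paper):

```python
import math

def cosine(d, q):
    # Cosine similarity between sparse term-weight vectors stored as dicts.
    dot = sum(w * q.get(t, 0.0) for t, w in d.items())
    nd = math.sqrt(sum(w * w for w in d.values()))
    nq = math.sqrt(sum(w * w for w in q.values()))
    return dot / (nd * nq) if nd and nq else 0.0

docs = {
    "d1": {"retrieval": 1.0, "vector": 1.0},
    "d2": {"probability": 1.0, "retrieval": 0.5},
}
query = {"vector": 1.0, "retrieval": 1.0}

# Rank documents by similarity to the query, best first.
ranked = sorted(docs, key=lambda k: cosine(docs[k], query), reverse=True)
print(ranked)
```

The probabilistic and similarity models plug a different scoring function into the same retrieval loop, which is roughly the abstraction a unified definition formalizes.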

  2. Using a Functional Model to Develop a Mathematical Formula

    ERIC Educational Resources Information Center

    Otto, Charlotte A.; Everett, Susan A.; Luera, Gail R.

    2008-01-01

    The unifying theme of models was incorporated into a required Science Capstone course for pre-service elementary teachers based on national standards in science and mathematics. A model of a teeter-totter was selected for use as an example of a functional model for gathering data as well as a visual model of a mathematical equation for developing…

  3. A model for calculating expected performance of the Apollo unified S-band (USB) communication system

    NASA Technical Reports Server (NTRS)

    Schroeder, N. W.

    1971-01-01

    A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.

  4. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  5. The Vector Space as a Unifying Concept in School Mathematics.

    ERIC Educational Resources Information Center

    Riggle, Timothy Andrew

    The purpose of this study was to show how the concept of vector space can serve as a unifying thread for mathematics programs--elementary school to pre-calculus college level mathematics. Indicated are a number of opportunities to demonstrate how emphasis upon the vector space structure can enhance the organization of the mathematics curriculum.…

  6. Using the Unified Modelling Language (UML) to guide the systemic description of biological processes and systems.

    PubMed

    Roux-Rouquié, Magali; Caritey, Nicolas; Gaubert, Laurent; Rosenthal-Sabroux, Camille

    2004-07-01

    One of the main issues in Systems Biology is to deal with semantic data integration. Previously, we examined the requirements for a reference conceptual model to guide semantic integration based on systemic principles. In the present paper, we examine the usefulness of the Unified Modelling Language (UML) to describe and specify biological systems and processes. This yields unambiguous representations of biological systems that are suitable for translation into mathematical and computational formalisms, enabling analysis, simulation and prediction of these systems' behaviours.

  7. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth or obtain quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied to endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing luminescent light propagation from the tracer to the endoscopic detector. The mathematical model is a hybrid light-transport model that combines the third-order simplified spherical harmonics approximation with the diffusion and radiosity equations to balance accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, which demonstrated that it provides a satisfactory balance between imaging accuracy and computational burden.

  8. Performance modeling of automated manufacturing systems

    NASA Astrophysics Data System (ADS)

    Viswanadham, N.; Narahari, Y.

    A unified and systematic treatment is presented of modeling methodologies and analysis techniques for performance evaluation of automated manufacturing systems. The book is the first treatment of the mathematical modeling of manufacturing systems. Automated manufacturing systems are surveyed and three principal analytical modeling paradigms are discussed: Markov chains, queues and queueing networks, and Petri nets.

  9. Towards new-generation soil erosion modeling: Building a unified omnivorous model

    USDA-ARS?s Scientific Manuscript database

    Soil erosion is a global threat to agricultural production, and results in off-site sediment and nutrient losses that negatively impact water and air quality. Models are mathematical equations used to estimate the amount of soil lost from a land area due to the erosive forces of water or wind. Early...

  10. A Model of E-Learning Uptake and Continued Use in Higher Education Institutions

    ERIC Educational Resources Information Center

    Pinpathomrat, Nakarin; Gilbert, Lester; Wills, Gary B.

    2013-01-01

    This research investigates the factors that affect a student's take-up and continued use of E-learning. A mathematical model was constructed by applying three grounded theories: the Unified Theory of Acceptance and Use of Technology, Keller's ARCS model, and Expectancy Disconfirmation Theory. The learning preference factor was included in the model.…

  11. Statistical power for nonequivalent pretest-posttest designs. The impact of change-score versus ANCOVA models.

    PubMed

    Oakes, J M; Feldman, H A

    2001-02-01

    Nonequivalent controlled pretest-posttest designs are central to evaluation science, yet no practical and unified approach for estimating power in the two most widely used analytic approaches to these designs exists. This article fills the gap by presenting and comparing useful, unified power formulas for ANCOVA and change-score analyses, indicating the implications of each on sample-size requirements. The authors close with practical recommendations for evaluators. Mathematical details and a simple spreadsheet approach are included in appendices.
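The ANCOVA-versus-change-score comparison in this record rests on two standard residual-variance results: with equal pre/post variances σ² and pretest-posttest correlation ρ, ANCOVA has residual variance σ²(1−ρ²) while the change score has 2σ²(1−ρ). A rough normal-approximation power sketch built on those formulas (the function, parameter values, and two-group z-test setup are our illustration, not the authors' exact formulas):

```python
from statistics import NormalDist

def power(delta, sigma, rho, n, alpha=0.05, method="ancova"):
    """Approximate power of a two-group pretest-posttest comparison
    with n subjects per group and true treatment effect delta."""
    if method == "ancova":
        resid_var = sigma**2 * (1.0 - rho**2)      # ANCOVA residual variance
    else:
        resid_var = 2.0 * sigma**2 * (1.0 - rho)   # change-score variance
    se = (2.0 * resid_var / n) ** 0.5              # SE of the group difference
    z_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return NormalDist().cdf(abs(delta) / se - z_crit)

p_ancova = power(delta=0.4, sigma=1.0, rho=0.6, n=50, method="ancova")
p_change = power(delta=0.4, sigma=1.0, rho=0.6, n=50, method="change")
print(round(p_ancova, 3), round(p_change, 3))
```

Because σ²(1−ρ²) ≤ 2σ²(1−ρ) whenever ρ ≤ 1, ANCOVA never needs a larger sample for the same power under these assumptions.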

  12. Unified reduction principle for the evolution of mutation, migration, and recombination

    PubMed Central

    Altenberg, Lee; Liberman, Uri; Feldman, Marcus W.

    2017-01-01

    Modifier-gene models for the evolution of genetic information transmission between generations of organisms exhibit the reduction principle: Selection favors reduction in the rate of variation production in populations near equilibrium under a balance of constant viability selection and variation production. Whereas this outcome has been proven for a variety of genetic models, it has not been proven in general for multiallelic genetic models of mutation, migration, and recombination modification with arbitrary linkage between the modifier and major genes under viability selection. We show that the reduction principle holds for all of these cases by developing a unifying mathematical framework that characterizes all of these evolutionary models. PMID:28265103

  13. Tensor Arithmetic, Geometric and Mathematic Principles of Fluid Mechanics in Implementation of Direct Computational Experiments

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Khramushin, Vasily

    2016-02-01

    The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.

  14. Oakland and San Francisco Create Course Pathways through Common Core Mathematics. White Paper

    ERIC Educational Resources Information Center

    Daro, Phil

    2014-01-01

    The Common Core State Standards for Mathematics (CCSS-M) set rigorous standards for each of grades 6, 7 and 8. Strategic Education Research Partnership (SERP) has been working with two school districts, Oakland Unified School District and San Francisco Unified School District, to evaluate extant policies and practices and formulate new policies…

  15. Project CLIMB.

    ERIC Educational Resources Information Center

    DeLucca, Adolph

    1982-01-01

    As a state and national model for a basic skills curriculum for kindergarten through grade 12 students, Coordination Learning Integration--Middlesex Basics (Project CLIMB) is described. The unified system was developed by teachers with administrative support to accommodate all students' reading and mathematics needs. Project CLIMB's development and…

  16. Polynomial algebra of discrete models in systems biology.

    PubMed

    Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard

    2010-07-01

    An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. Supplementary data are available at Bioinformatics online.
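The translation this article describes can be sketched in a few lines: over F2 (arithmetic mod 2), NOT x becomes x + 1, x AND y becomes xy, and x OR y becomes x + y + xy, so a Boolean network becomes a polynomial dynamical system. A toy two-node network (the network itself is invented for illustration):

```python
from itertools import product

# Boolean operations written as polynomials over F2 (arithmetic mod 2):
#   NOT x   ->  x + 1
#   x AND y ->  x * y
#   x OR y  ->  x + y + x * y
def f1(x1, x2):
    return x2 % 2                          # f1 = x2

def f2(x1, x2):
    return (x1 + x2 + x1 * x2) % 2         # f2 = x1 OR x2

def step(state):
    x1, x2 = state
    return (f1(x1, x2), f2(x1, x2))

# Enumerate the full state space to find fixed points of the dynamics.
fixed_points = [s for s in product((0, 1), repeat=2) if step(s) == s]
print(fixed_points)
```

In polynomial form, fixed points are solutions of f_i(x) = x_i, which computer-algebra tools (e.g. Gröbner bases) can handle symbolically for networks far too large to enumerate.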

  17. Comparison of mathematical models of fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by M. Ben Amar and C. Bianca

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida

    2016-07-01

    Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. Modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process in which excessive fibrous tissue is deposited on an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including the causes, mechanism and management; research models of pathological fibrosis are described, compared and critically analyzed. Different models are suitable at different levels: molecular, cellular and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system and growth of fibrosis because of a weak immune system.

  18. Simulating the evolution of non-point source pollutants in a shallow water environment.

    PubMed

    Yan, Min; Kahawita, Rene

    2007-03-01

    Non-point source pollution originating from surface-applied chemicals in either liquid or solid form, as part of agricultural activities, appears in the surface runoff caused by rainfall. The infiltration and transport of these pollutants has a significant impact on subsurface and riverine water quality. The present paper describes the development of a unified 2-D mathematical model incorporating individual models for infiltration, adsorption, solubility rate, advection and diffusion, which significantly improves current practice in the mathematical modeling of pollutant evolution in shallow water. The governing equations have been solved numerically using cubic spline integration. Experiments were conducted at the Hydrodynamics Laboratory of the Ecole Polytechnique de Montreal to validate the mathematical model. Good correspondence between the computed results and experimental data has been obtained. The model may be used to predict the ultimate fate of surface-applied chemicals by evaluating the proportions that are dissolved, infiltrated into the subsurface or are washed off.

  19. Unifying models of dialect spread and extinction using surface tension dynamics

    PubMed Central

    2018-01-01

    We provide a unified mathematical explanation of two classical forms of spatial linguistic spread. The wave model describes the radiation of linguistic change outwards from a central focus. Changes can also jump between population centres in a process known as hierarchical diffusion. It has recently been proposed that the spatial evolution of dialects can be understood using surface tension at linguistic boundaries. Here we show that the inclusion of long-range interactions in the surface tension model generates both wave-like spread, and hierarchical diffusion, and that it is surface tension that is the dominant effect in deciding the stable distribution of dialect patterns. We generalize the model to allow population mixing which can induce shrinkage of linguistic domains, or destroy dialect regions from within. PMID:29410847
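A crude way to see the surface-tension effect this record describes is majority-rule smoothing on a grid of dialect variants. This toy model is our illustration only, not the authors' dynamics (their model adds long-range interactions and population mixing): small dialect pockets are erased as boundaries shorten.

```python
# Toy grid of two dialect variants (0/1); each site adopts the majority
# variant in its 3x3 neighborhood, a crude surface-tension-like smoothing.
def smooth(grid):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            votes = sum(grid[(i + di) % n][(j + dj) % n]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
            new[i][j] = 1 if votes >= 5 else 0
    return new

grid = [[0] * 8 for _ in range(8)]
for i in (3, 4):
    for j in (3, 4):
        grid[i][j] = 1          # a small isolated dialect island

for _ in range(3):
    grid = smooth(grid)

print(sum(map(sum, grid)))      # the 2x2 island is below critical size
```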

  20. A Unified Model of Geostrophic Adjustment and Frontogenesis

    NASA Astrophysics Data System (ADS)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  1. A general modeling framework for describing spatially structured population dynamics

    USGS Publications Warehouse

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework flexible enough to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns.
    By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles.
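The framework's core data structure, a weighted directed network whose nodes carry abundances updated in discrete time, can be sketched minimally as follows. The two-node seasonal network, movement rates, and growth factors below are hypothetical; the authors' published code implements the full framework:

```python
# Nodes carry abundances; weighted directed edges move fractions of each
# node's population per discrete time step, after local growth.
edges = {                     # hypothetical seasonal movement network
    ("breeding", "wintering"): 0.8,
    ("wintering", "breeding"): 0.9,
}
growth = {"breeding": 1.2, "wintering": 0.95}
pop = {"breeding": 100.0, "wintering": 50.0}

def step(pop):
    # Growth on each node, then movement along outgoing edges.
    grown = {v: pop[v] * growth[v] for v in pop}
    new = {v: 0.0 for v in pop}
    for v, n in grown.items():
        out = sum(w for (src, _), w in edges.items() if src == v)
        new[v] += n * (1.0 - out)                  # residents stay
        for (src, dst), w in edges.items():
            if src == v:
                new[dst] += n * w                  # migrants move
    return new

for _ in range(4):
    pop = step(pop)
print({k: round(v, 1) for k, v in pop.items()})
```

Metapopulations, migration, and nomadism differ only in the edge weights and how node/edge attributes change through time, which is what makes the network representation unifying.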

  2. Discrete Mathematics across the Curriculum, K-12. 1991 Yearbook.

    ERIC Educational Resources Information Center

    Kenney, Margaret J., Ed.; Hirsch, Christian R., Ed.

    This yearbook provides the mathematics education community with specific perceptions about discrete mathematics concerning its importance, its composition at various grade levels, and ideas about how to teach it. Many practical suggestions with respect to the implementation of a discrete mathematics school program are included. A unifying thread…

  3. SECONDARY SCHOOL MATHEMATICS CURRICULUM IMPROVEMENT STUDY. FINAL REPORT.

    ERIC Educational Resources Information Center

    FEHR, HOWARD F.

    This Secondary School Mathematics Curriculum Improvement Study group (SSMCIS), composed of both American and European educators, was guided by two main objectives: (1) to construct and evaluate a unified secondary school mathematics program for grades 7-12 that would take the capable student well into current college mathematics, and (2) to determine…

  4. Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to the Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. 
Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium states. The mathematical frameworks we consider are the following: (A) statistical or information-theoretic models of relaxation; (B) small-scale and rarefied gas dynamics (i.e., kinetic models for the Boltzmann equation); (C) rational extended thermodynamics, macroscopic nonequilibrium thermodynamics, and chemical kinetics; (D) mesoscopic nonequilibrium thermodynamics, continuum mechanics with fluctuations; and (E) quantum statistical mechanics, quantum thermodynamics, mesoscopic nonequilibrium quantum thermodynamics, and intrinsic quantum thermodynamics.

  5. Standard representation and unified stability analysis for dynamic artificial neural network models.

    PubMed

    Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D

    2018-02-01

    An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions.

  6. Workload capacity spaces: a unified methodology for response time measures of efficiency as workload is varied.

    PubMed

    Townsend, James T; Eidels, Ami

    2011-08-01

    Increasing the number of available sources of information may impair or facilitate performance, depending on the capacity of the processing system. Tests performed on response time distributions are proving to be useful tools in determining the workload capacity (as well as other properties) of cognitive systems. In this article, we develop a framework and relevant mathematical formulae that represent different capacity assays (Miller's race model bound, Grice's bound, and Townsend's capacity coefficient) in the same space. The new space allows a direct comparison between the distinct bounds and the capacity coefficient values and helps explicate the relationships among the different measures. An analogous common space is proposed for the AND paradigm, relating the capacity index to the Colonius-Vorberg bounds. We illustrate the effectiveness of the unified spaces by presenting data from two simulated models (standard parallel, coactive) and a prototypical visual detection experiment. A conversion table for the unified spaces is provided.
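The bounds this record places in a common space are simple functionals of the RT distribution functions: Miller's race-model bound F_AB(t) ≤ F_A(t) + F_B(t) and Grice's bound F_AB(t) ≥ max(F_A(t), F_B(t)). A sketch with invented RT samples and probe time (the data are illustrative only):

```python
# Empirical CDFs for single-target and redundant-target RTs, and the
# Miller and Grice bounds evaluated at a probe time t.
def ecdf(sample, t):
    return sum(rt <= t for rt in sample) / len(sample)

rt_a  = [420, 450, 480, 510, 540]      # target A alone (ms)
rt_b  = [430, 460, 500, 520, 560]      # target B alone
rt_ab = [380, 410, 440, 470, 500]      # both targets (redundant)

t = 450
miller_bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_b, t))   # race-model ceiling
grice_bound = max(ecdf(rt_a, t), ecdf(rt_b, t))          # lower bound
f_ab = ecdf(rt_ab, t)
print(grice_bound <= f_ab <= miller_bound)
```

Plotting all three quantities against t on shared axes is essentially the "workload capacity space" idea: the bounds and the capacity measure become directly comparable curves.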

  7. Apollo experience report: S-band system signal design and analysis

    NASA Technical Reports Server (NTRS)

    Rosenberg, H. R. (Editor)

    1972-01-01

    A description is given of the Apollo communications-system engineering-analysis effort that ensured the adequacy, performance, and interface compatibility of the unified S-band system elements for a successful lunar-landing mission. The evolution and conceptual design of the unified S-band system are briefly reviewed from a historical viewpoint. A comprehensive discussion of the unified S-band elements includes the salient design features of the system and serves as a basis for a better understanding of the design decisions and analyses. The significant design decisions concerning the Apollo communications-system signal design are discussed, providing insight into the role of systems analysis in arriving at the current configuration of the Apollo communications system. Analyses are presented concerning performance estimation (mathematical-model development through real-time mission support) and system deficiencies, modifications, and improvements.

  8. Secondary School Mathematics Curriculum Improvement Study Information Bulletin 7.

    ERIC Educational Resources Information Center

    Secondary School Mathematics Curriculum Improvement Study, New York, NY.

    The background, objectives, and design of Secondary School Mathematics Curriculum Improvement Study (SSMCIS) are summarized. Details are given of the content of the text series, "Unified Modern Mathematics," in the areas of algebra, geometry, linear algebra, probability and statistics, analysis (calculus), logic, and computer…

  9. Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making

    USGS Publications Warehouse

    Williams, B.K.; Nichols, J.D.; Conroy, M.J.

    2002-01-01

    This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. Key features: integrates population modeling, parameter estimation, and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation, and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; and utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.

  10. Agent based simulations in disease modeling Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Pappalardo, Francesco; Pennisi, Marzio

    2016-07-01

    Fibrosis represents a process where an excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent based models (ABM) can give a valuable contribution in the understanding and better management of fibrotic diseases.

  11. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Cranmer, Steven R.; Wagner, William (Technical Monitor)

    2004-01-01

    The PI (Cranmer) and Co-I (A. van Ballegooijen) made substantial progress toward the goal of producing a unified model of the basic physical processes responsible for solar wind acceleration. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a 1D model of plasma heating and acceleration. The accomplishments in Year 2 are divided into these two categories: 1a. Focused Study of Kinetic Magnetohydrodynamic (MHD) Turbulence; 1b. Focused Study of Non-WKB Alfven Wave Reflection; and 2. The Unified Model Code. We have continued the development of the computational model of a time-steady open flux tube in the extended corona. The proton-electron Monte Carlo model is being tested, and collisionless wave-particle interactions are being included. In order to better understand how to easily incorporate various kinds of wave-particle processes into the code, the PI performed a detailed study of the so-called "Ito Calculus", i.e., the mathematical theory of how to update the positions of particles in a probabilistic manner when their motions are governed by diffusion in velocity space.

  12. A mathematical model for jet engine combustor pollutant emissions

    NASA Technical Reports Server (NTRS)

    Boccio, J. L.; Weilerstein, G.; Edelman, R. B.

    1973-01-01

    Mathematical modeling for the description of the origin and disposition of combustion-generated pollutants in gas turbines is presented. A unified model in modular form is proposed which includes kinetics, recirculation, turbulent mixing, multiphase flow effects, swirl, and secondary air injection. Subelements of the overall model were applied to data relevant to laboratory reactors and practical combustor configurations. Comparisons between the theory and available data show excellent agreement for basic CO/H2/air chemical systems. For hydrocarbons the trends are predicted well, including higher-than-equilibrium NO levels within the fuel-rich regime. Although the need for improved accuracy in fuel-rich combustion is indicated, agreement with actual jet engine data in terms of the effect of combustor-inlet temperature is excellent. In addition, excellent agreement with data is obtained regarding reduced NO emissions with water droplet and steam injection.

  13. Unified Technical Concepts. Math for Technicians.

    ERIC Educational Resources Information Center

    Center for Occupational Research and Development, Inc., Waco, TX.

    Unified Technical Concepts (UTC) is a modular system for teaching applied physics in two-year postsecondary technician programs. This UTC classroom textbook, consisting of 10 chapters, deals with mathematical concepts as they apply to the study of physics. Addressed in the individual chapters of the text are the following topics: angles and…

  14. [METHODS OF MATHEMATICAL MODELING IN MORPHOLOGICAL DIAGNOSTICS OF CHORNOBYL FACTOR INFLUENCE ON PROSTATE GLAND OF COAL MINERS-- THE CHERNOBYL DISASTER FIGHTERS].

    PubMed

    Danylov, Iu V; Motkov, K V; Shevchenko, T I

    2014-01-01

    Morphometric estimation of the parenchyma and stroma condition included the determination of 25 parameters in the prostate gland in 27 persons. A mathematical model of prostate gland morphogenesis was created using Bayes' method. A method was worked out for the differential diagnosis of changes in prostate gland tissue conditioned by the influence of the Chernobyl factor and/or the unfavorable conditions of underground coal mining. Its practical use provides accuracy and reliability of diagnosis (not less than 95%) and independence from the qualification level and personal experience of the doctor, and it allows diagnostic algorithms to be unified, optimized, and individualized, meeting the requirements of evidence-based medicine.
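
The Bayes-method classification described above can be illustrated with a minimal Gaussian naive Bayes sketch. The two class labels, the synthetic 25-parameter data, and all statistics below are hypothetical stand-ins, not the paper's morphometric data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 25
X0 = rng.normal(0.0, 1.0, (n, p))     # hypothetical class 0: "Chernobyl factor"
X1 = rng.normal(1.5, 1.0, (n, p))     # hypothetical class 1: "mining conditions"

def log_posterior(x, mu, sigma, prior):
    # Sum of per-parameter Gaussian log-likelihoods plus log prior (naive Bayes).
    return np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)) + np.log(prior)

mu0, s0 = X0.mean(0), X0.std(0)       # per-parameter class statistics
mu1, s1 = X1.mean(0), X1.std(0)

def classify(x):
    return int(log_posterior(x, mu1, s1, 0.5) > log_posterior(x, mu0, s0, 0.5))

X_test = np.vstack([rng.normal(0.0, 1.0, (50, p)), rng.normal(1.5, 1.0, (50, p))])
y_test = np.array([0] * 50 + [1] * 50)
preds = np.array([classify(x) for x in X_test])
acc = float(np.mean(preds == y_test))
print(acc)
```

With many informative parameters, a Bayes classifier of this kind can reach very high accuracy even when each single parameter is only weakly discriminative, which is consistent with the reliability figure quoted in the abstract.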

  15. [Methods of mathematical modeling in morphological diagnostics of Chernobyl factor influence on the testes of coal miners of Donbas--the Chernobyl disaster fighters].

    PubMed

    Danylov, Iu V; Motkov, K V; Shevchenko, T I

    2014-01-01

    Morphometric estimation of the parenchyma and stroma condition included the determination of 29 parameters in the testicles of 27 persons. A mathematical model of testicular morphogenesis was created using Bayes' method. A method was worked out for the differential diagnosis of changes in testicular tissue conditioned by the influence of the Chernobyl factor and/or the unfavorable conditions of underground coal mining. Its practical use provides accuracy and reliability of diagnosis (not less than 95%) and independence from the qualification level and personal experience of the doctor, and it allows diagnostic algorithms to be unified, optimized, and individualized, meeting the requirements of evidence-based medicine.

  16. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  17. The First Sourcebook on Nordic Research in Mathematics Education: Norway, Sweden, Iceland, Denmark and Contributions from Finland.

    ERIC Educational Resources Information Center

    Sriraman, Bharath, Ed.; Bergsten, Christer, Ed.; Goodchild, Simon, Ed.; Palsdottir, Gudbjorg, Ed.; Sondergaard, Bettina Dahl, Ed.; Haapasalo, Lenni, Ed.

    2010-01-01

    The First Sourcebook on Nordic Research in Mathematics Education: Norway, Sweden, Iceland, Denmark and contributions from Finland provides the first comprehensive and unified treatment of historical and contemporary research trends in mathematics education in the Nordic world. The book is organized in sections co-ordinated by active researchers in…

  18. Functions in the Secondary School Mathematics Curriculum

    ERIC Educational Resources Information Center

    Denbel, Dejene Girma

    2015-01-01

    Functions are used in every branch of mathematics, as algebraic operations on numbers, transformations on points in the plane or in space, intersection and union of pairs of sets, and so forth. Function is a unifying concept in all mathematics. Relationships among phenomena in everyday life, such as the relationship between the speed of a car and…

  19. EPR-based material modelling of soils

    NASA Astrophysics Data System (ADS)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced for modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data-mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some well-known conventional material models, and it is shown that EPR-based models can provide a better prediction of the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model), they do not require any arbitrary choice of constitutive (mathematical) model, and there are no material parameters to be identified. Because the model is trained directly on experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. A further advantage of an EPR-based constitutive model is that, as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, making the EPR model more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
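
A much-simplified sketch of the EPR idea (structure search plus least squares): real EPR evolves term structures with a genetic algorithm, whereas this toy merely enumerates candidate exponent pairs and keeps the structure with the lowest residual. The data and exponent ranges are hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 2.0, 100)
y = 2.0 * x**2 + 3.0 * x              # hypothetical "experimental" data

best = None
for exps in itertools.combinations(range(4), 2):   # candidate exponent pairs 0..3
    A = np.column_stack([x**e for e in exps])      # design matrix for this structure
    coef = np.linalg.lstsq(A, y, rcond=None)[0]    # least-squares coefficients
    err = float(np.sum((A @ coef - y) ** 2))       # residual of this structure
    if best is None or err < best[0]:
        best = (err, exps, coef)

err, exps, coef = best
print(exps, np.round(coef, 3))        # recovers exponents (1, 2), coefficients ~ [3, 2]
```

The split of labor mirrors EPR itself: a discrete search chooses which terms appear in the equation, and linear least squares fits their coefficients, so the recovered expression stays an explicit, interpretable polynomial rather than a black box.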

  20. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

    The current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, the question arises: to what extent are these mathematical models valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Ecohydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to a greater understanding of ecosystem complexity through characterization of the space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when the particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  1. Quanta of geometry and unification

    NASA Astrophysics Data System (ADS)

    Chamseddine, Ali H.

    2016-11-01

    This is a tribute to Abdus Salam’s memory whose insight and creative thinking set for me a role model to follow. In this contribution I show that the simple requirement of volume quantization in spacetime (with Euclidean signature) uniquely determines the geometry to be that of a noncommutative space whose finite part is based on an algebra that leads to Pati-Salam grand unified models. The Standard Model corresponds to a special case where a mathematical constraint (order one condition) is satisfied. This provides evidence that Salam was a visionary who was generations ahead of his time.

  2. Quanta of Geometry and Unification

    NASA Astrophysics Data System (ADS)

    Chamseddine, Ali H.

    This is a tribute to Abdus Salam's memory whose insight and creative thinking set for me a role model to follow. In this contribution I show that the simple requirement of volume quantization in space-time (with Euclidean signature) uniquely determines the geometry to be that of a noncommutative space whose finite part is based on an algebra that leads to Pati-Salam grand unified models. The Standard Model corresponds to a special case where a mathematical constraint (order one condition) is satisfied. This provides evidence that Salam was a visionary who was generations ahead of his time.

  3. Using a Technology-Supported Approach to Preservice Teachers' Multirepresentational Fluency: Unifying Mathematical Concepts and Their Representations

    ERIC Educational Resources Information Center

    McGee, Daniel; Moore-Russo, Deborah

    2015-01-01

    A test project at the University of Puerto Rico in Mayagüez used GeoGebra applets to promote the concept of multirepresentational fluency among high school mathematics preservice teachers. For this study, this fluency was defined as simultaneous awareness of all representations associated with a mathematical concept, as measured by the ability to…

  4. The anchoring bias reflects rational use of cognitive resources.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; Huys, Quentin J M; Goodman, Noah D

    2018-02-01

    Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind's bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.
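
The anchoring-and-adjustment idea can be illustrated with a toy process model (not the paper's resource-rational model): an estimate starts at a provided anchor and takes a limited number of noisy adjustment steps toward the correct value, so scarcer resources (fewer steps) leave more residual bias toward the anchor. All numbers below are hypothetical.

```python
import numpy as np

def estimate(anchor, truth, n_steps, step=0.2, noise=0.05, seed=0):
    """Anchor-and-adjust: limited noisy steps from the anchor toward the truth."""
    rng = np.random.default_rng(seed)
    x = anchor
    for _ in range(n_steps):
        x += step * (truth - x) + noise * rng.standard_normal()
    return x

truth, anchor, n_runs = 100.0, 0.0, 200
mean_few = np.mean([estimate(anchor, truth, 3, seed=i) for i in range(n_runs)])
mean_many = np.mean([estimate(anchor, truth, 30, seed=i) for i in range(n_runs)])
bias_few = abs(truth - mean_few)      # large: estimate stuck near the anchor
bias_many = abs(truth - mean_many)    # small: enough adjustment to reach the truth
print(bias_few, bias_many)
```

The qualitative pattern (bias shrinking as more adjustment is invested) is the behavior the paper explains as a rational trade-off between estimation accuracy and the cost of additional cognitive computation.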

  5. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594

  6. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied the performance estimated in the absence of caffeine by a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine). We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
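
The multiplicative-factor idea can be sketched as follows. This is a hedged illustration, not the published UMP equations: the one-compartment pharmacokinetic form and all parameter values (ka, ke, b, the lapse count) are hypothetical placeholders.

```python
import math

def concentration(dose_mg, t_h, ka=2.0, ke=0.2):
    """One-compartment PK profile after an oral dose (arbitrary units)."""
    return dose_mg * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def caffeine_factor(dose_mg, t_h, b=0.01):
    """Dose-dependent multiplicative factor in (0, 1]; 1 means no caffeine effect."""
    return 1.0 / (1.0 + b * concentration(dose_mg, t_h))

# Impairment predicted without caffeine is scaled down by the factor.
lapses_no_caffeine = 10.0                                   # hypothetical PVT lapses
lapses_200mg = lapses_no_caffeine * caffeine_factor(200.0, 2.0)
print(lapses_200mg)                                         # fewer predicted lapses
```

The key design point the abstract describes survives even in this toy: because the factor multiplies the caffeine-free prediction, the same pharmacokinetic machinery applies across all sleep-loss schedules without re-deriving the underlying sleep model.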

  7. Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search.

    PubMed

    Crupi, Vincenzo; Nelson, Jonathan D; Meder, Björn; Cevolani, Gustavo; Tentori, Katya

    2018-06-17

    Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism. Copyright © 2018 Cognitive Science Society, Inc.
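
The unifying role of the Sharma-Mittal family can be seen directly in code. The parameterization below follows one common convention (order q, degree r), which may differ from the paper's notation: H(q, r) recovers Renyi entropy as r -> 1, Tsallis entropy at r = q, and Shannon entropy as both -> 1 (natural-log units).

```python
import math

def sharma_mittal(p, q, r, eps=1e-9):
    """Sharma-Mittal entropy H_{q,r}(p) = ((sum p_i^q)^((1-r)/(1-q)) - 1) / (1 - r)."""
    if abs(q - 1.0) < eps:
        q = 1.0 + eps                 # handle the q -> 1 limit numerically
    s = sum(pi ** q for pi in p)
    if abs(r - 1.0) < eps:            # r -> 1 limit: Renyi entropy of order q
        return math.log(s) / (1.0 - q)
    return (s ** ((1.0 - r) / (1.0 - q)) - 1.0) / (1.0 - r)

p = [0.25] * 4                        # uniform distribution over 4 outcomes
shannon = sharma_mittal(p, 1.0, 1.0)  # -> ln 4
renyi2 = sharma_mittal(p, 2.0, 1.0)   # Renyi order 2; also ln 4 for uniform p
tsallis2 = sharma_mittal(p, 2.0, 2.0) # Tsallis order 2: 1 - sum p_i^2 = 0.75
print(shannon, renyi2, tsallis2)
```

Special cases that look unrelated on the page (logarithmic Shannon/Renyi measures versus power-law Tsallis measures) are just different (q, r) coordinates of one two-parameter function, which is exactly the unification the paper exploits.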

  8. Tropical geometry of statistical models.

    PubMed

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.
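
The tropical viewpoint can be made concrete on a small hidden Markov model: the sum-product (forward) algorithm evaluates the model's likelihood polynomial, and replacing sum with max in log space (its tropicalization) gives the Viterbi algorithm. The 2-state parameters below are hypothetical.

```python
import math

init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]       # emit[state][symbol]
obs = [0, 1, 0]

def forward(obs):
    """Sum-product: total probability of the observation sequence."""
    a = [init[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        a = [sum(a[s] * trans[s][t] for s in range(2)) * emit[t][o]
             for t in range(2)]
    return sum(a)

def viterbi(obs):
    """Tropicalized sum-product: log-probability of the single best hidden path."""
    a = [math.log(init[s] * emit[s][obs[0]]) for s in range(2)]
    for o in obs[1:]:
        a = [max(a[s] + math.log(trans[s][t]) for s in range(2))
             + math.log(emit[t][o]) for t in range(2)]
    return max(a)

print(forward(obs), math.exp(viterbi(obs)))   # best-path probability <= total
```

The two functions have the same recursion shape; only the semiring changes (sum/product versus max/plus), which is the algebraic observation behind the article's tropical-geometry analysis of inference.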

  9. Focus Group Research on the Implications of Adopting the Unified English Braille Code

    ERIC Educational Resources Information Center

    Wetzel, Robin; Knowlton, Marie

    2006-01-01

    Five focus groups explored concerns about adopting the Unified English Braille Code. The consensus was that while the proposed changes to the literary braille code would be minor, those to the mathematics braille code would be much more extensive. The participants emphasized that "any code that reduces the number of individuals who can access…

  10. Studies of Braille Reading Rates and Implications for the Unified English Braille Code

    ERIC Educational Resources Information Center

    Wetzel, Robin; Knowlton, Marie

    2006-01-01

    Reading rate data was collected from both print and braille readers in the areas of mathematics and literary braille. Literary braille data was collected for contracted and uncontracted braille text with dropped whole-word contractions and part-word contractions as they would appear in the Unified English Braille Code. No significant differences…

  11. Transferring Standard English Braille Skills to the Unified English Braille Code: A Pilot Study

    ERIC Educational Resources Information Center

    Steinman, Bernard A.; Kimbrough, B. T.; Johnson, Franklin; LeJeune, B. J.

    2004-01-01

    The enormously complex and sometimes controversial project to unify the traditional literary Braille code used in English-speaking countries with the technical and mathematical codes authorized by the Braille Authority of North America (BANA) and the Braille Authority of the United Kingdom (BAUK) proposes to change English Grade Two Braille on a…

  12. The Shapes of Tomorrow.

    ERIC Educational Resources Information Center

    Vermont Univ., Burlington.

    This book, written by classroom teachers, introduces the application of secondary school mathematics to space exploration, and is intended to unify science and mathematics. In early chapters geometric concepts are used with general concepts of space and rough approximations of space measurements. Later, these concepts are refined to include the…

  13. The Mathematics of Starry Nights

    ERIC Educational Resources Information Center

    Barman, Farshad

    2008-01-01

    The mathematics for finding and plotting the locations of stars and constellations are available in many books on astronomy, but the steps involve mystifying and fragmented equations, calculations, and terminology. This paper will introduce an entirely new unified and cohesive technique that is easy to understand by mathematicians, and simple…

  14. RT-18: Value of Flexibility. Phase 1

    DTIC Science & Technology

    2010-09-25

    an analytical framework based on sound mathematical constructs. A review of the current state-of-the-art showed that there is little unifying theory...framework that is mathematically consistent, domain independent and applicable under varying information levels. This report presents our advances in...During this period, we also explored the development of an analytical framework based on sound mathematical constructs. A review of the current state

  15. Value of Flexibility - Phase 1

    DTIC Science & Technology

    2010-09-25

    weaknesses of each approach. During this period, we also explored the development of an analytical framework based on sound mathematical constructs... mathematical constructs. A review of the current state-of-the-art showed that there is little unifying theory or guidance on best approaches to...research activities is in developing a coherent value based definition of flexibility that is based on an analytical framework that is mathematically

  16. Unified Static and Dynamic Recrystallization Model for the Minerals of Earth's Mantle Using Internal State Variable Model

    NASA Astrophysics Data System (ADS)

    Cho, H. E.; Horstemeyer, M. F.; Baumgardner, J. R.

    2017-12-01

    In this study, we present an internal state variable (ISV) constitutive model developed to describe static and dynamic recrystallization and grain size evolution in a unified manner. The method accurately captures the effects of temperature, pressure, and strain rate on recrystallization and grain size. Because the ISV approach treats dislocation density, recrystallized volume fraction, and grain size as internal variables, the model can track their history throughout deformation with unprecedented realism, and from that history it captures realistic mechanical properties such as stress-strain behavior in the microstructure-property relationship. Both the transient grain size during deformation and the steady-state grain size of dynamic recrystallization can be predicted from the history variable of recrystallized volume fraction. Furthermore, because the model simultaneously handles plasticity and creep (unified creep-plasticity), the mechanisms related to dislocation dynamics (static recovery or diffusion creep, dynamic recovery or dislocation creep, and hardening) can also be captured. To model these behaviors, the mathematical formulation includes elasticity for evaluating yield stress, work hardening for plasticity, creep, and the unified recrystallization and grain size evolution. Because pressure sensitivity is especially important for mantle minerals, we developed a yield function combining Drucker-Prager shear failure and von Mises yield surfaces to model pressure-dependent yield stress, together with pressure-dependent work hardening and creep terms. Using these formulations, we calibrated the model against experimental data for the minerals taken from the literature, and we also calibrated it against experimental data for metals to show its general applicability. A realistic understanding of mantle dynamics can be acquired only once the various deformation regimes and mechanisms are comprehensively modeled; the results of this study demonstrate that this ISV model is a good candidate to help reveal the realistic dynamics of the Earth's mantle.
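
The combined pressure-dependent yield idea can be sketched with a Drucker-Prager surface that reduces to von Mises when its pressure coefficient vanishes. This is an illustration in the spirit of the abstract, not the authors' formulation; the material constants are hypothetical.

```python
import numpy as np

def invariants(sigma):
    """First stress invariant I1 and second deviatoric invariant J2."""
    I1 = np.trace(sigma)
    s = sigma - I1 / 3.0 * np.eye(3)          # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)             # (1/2) s_ij s_ij
    return I1, J2

def drucker_prager(sigma, alpha, k):
    """Yield function f = sqrt(J2) + alpha * I1 - k (f < 0: elastic; f >= 0: yielding).

    alpha = 0 recovers the von Mises surface with k = sigma_y / sqrt(3).
    """
    I1, J2 = invariants(sigma)
    return np.sqrt(J2) + alpha * I1 - k

sigma_y = 100.0                               # hypothetical yield stress (MPa)
uniaxial = np.diag([sigma_y, 0.0, 0.0])       # uniaxial tension at sigma_y
f_vm = drucker_prager(uniaxial, 0.0, sigma_y / np.sqrt(3.0))
print(f_vm)                                   # ~0: exactly at the von Mises surface
```

With alpha > 0, the alpha * I1 term makes yielding pressure-sensitive (compressive mean stress, negative I1, strengthens the material), which is the behavior the abstract singles out as essential for mantle minerals.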

  17. A unified view of "how allostery works".

    PubMed

    Tsai, Chung-Jung; Nussinov, Ruth

    2014-02-01

    The question of how allostery works was posed almost 50 years ago. Since then it has been the focus of much effort. This is for two reasons: first, the intellectual curiosity of basic science and the desire to understand fundamental phenomena, and second, its vast practical importance. Allostery is at play in all processes in the living cell, and increasingly in drug discovery. Many models have been successfully formulated, and are able to describe allostery even in the absence of a detailed structural mechanism. However, conceptual schemes designed to qualitatively explain allosteric mechanisms usually lack a quantitative mathematical model, and are unable to link its thermodynamic and structural foundations. This hampers insight into oncogenic mutations in cancer progression and biased agonists' actions. Here, we describe how allostery works from three different standpoints: thermodynamics, free energy landscape of population shift, and structure; all with exactly the same allosteric descriptors. This results in a unified view which not only clarifies the elusive allosteric mechanism but also provides structural grasp of agonist-mediated signaling pathways, and guides allosteric drug discovery. Of note, the unified view reasons that allosteric coupling (or communication) does not determine the allosteric efficacy; however, a communication channel is what makes potential binding sites allosteric.

  18. Energy Transfer and a Recurring Mathematical Function

    ERIC Educational Resources Information Center

    Atkin, Keith

    2013-01-01

    This paper extends the interesting work of a previous contributor concerning the analogies between physical phenomena such as mechanical collisions and the transfer of power in an electric circuit. Emphasis is placed on a mathematical function linking these different areas of physics. This unifying principle is seen as an exciting opportunity to…

  19. Neurotech for Neuroscience: Unifying Concepts, Organizing Principles, and Emerging Tools

    PubMed Central

    Silver, Rae; Boahen, Kwabena; Grillner, Sten; Kopell, Nancy; Olsen, Kathie L.

    2012-01-01

    The ability to tackle analysis of the brain at multiple levels simultaneously is emerging from rapid methodological developments. The classical research strategies of “measure,” “model,” and “make” are being applied to the exploration of nervous system function. These include novel conceptual and theoretical approaches, creative use of mathematical modeling, and attempts to build brain-like devices and systems, as well as other developments including instrumentation and statistical modeling (not covered here). Increasingly, these efforts require teams of scientists from a variety of traditional scientific disciplines to work together. The potential of such efforts for understanding directed motor movement, emergence of cognitive function from neuronal activity, and development of neuromimetic computers are described by a team that includes individuals experienced in behavior and neuroscience, mathematics, and engineering. Funding agencies, including the National Science Foundation, explore the potential of these changing frontiers of research for developing research policies and long-term planning. PMID:17978017

  20. The Unified English Braille Code: Examination by Science, Mathematics, and Computer Science Technical Expert Braille Readers

    ERIC Educational Resources Information Center

    Holbrook, M. Cay; MacCuspie, P. Ann

    2010-01-01

    Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…

  1. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    NASA Astrophysics Data System (ADS)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out, with a comparison against Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more reliable apportionment of the inputs' influence and a more trustworthy interpretation of the model results.
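
The quasi-Monte Carlo idea can be illustrated with the Fibonacci lattice rule mentioned above, applied to a hypothetical smooth test integrand (not the air-pollution model): f(x, y) = x * y over the unit square, whose exact integral is 0.25. Lattice rules typically converge faster than plain Monte Carlo for such smooth integrands.

```python
import math
import random

def fib_lattice(n_index):
    """Points (i/F_k, {i * F_{k-1} / F_k}) of the 2D Fibonacci lattice rule."""
    F = [1, 1]
    while len(F) <= n_index:
        F.append(F[-1] + F[-2])
    N, g = F[n_index], F[n_index - 1]         # consecutive Fibonacci numbers
    return [((i / N), (i * g / N) % 1.0) for i in range(N)], N

f = lambda x, y: x * y                        # hypothetical smooth integrand

pts, N = fib_lattice(20)                      # N = F_20 = 10946 lattice points
qmc_est = sum(f(x, y) for x, y in pts) / N

random.seed(0)                                # plain Monte Carlo with the same N
mc_est = sum(f(random.random(), random.random()) for _ in range(N)) / N

print(abs(qmc_est - 0.25), abs(mc_est - 0.25))  # both estimates approach 0.25
```

The appeal for sensitivity analysis is the same as for plain integration: Sobol indices are ratios of high-dimensional integrals, so lower integration error per sample translates directly into more reliable estimates of small indices.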

  2. Traffic Flow - USMES Teacher Resource Book. Fourth Edition. Trial Edition.

    ERIC Educational Resources Information Center

    Keskulla, Jean

    This Unified Sciences and Mathematics for Elementary Schools (USMES) unit challenges students to improve traffic flow at a problem location. The challenge is general enough to apply to many problem-solving situations in mathematics, science, social science, and language arts at any elementary school level (grades 1-8). The Teacher Resource Book…

  3. Pedestrian Crossings - USMES Teacher Resource Book. Fifth Edition. Trial Edition.

    ERIC Educational Resources Information Center

    Keskulla, Jean

    This Unified Sciences and Mathematics for Elementary Schools (USMES) unit challenges students to improve the safety and convenience of a pedestrian crossing near a school. The challenge is general enough to apply to many problem-solving situations in mathematics, science, social science, and language arts at any elementary school level (grades…

  4. Applications of Dirac's Delta Function in Statistics

    ERIC Educational Resources Information Center

    Khuri, Andre

    2004-01-01

    The Dirac delta function has been used successfully in mathematical physics for many years. The purpose of this article is to bring attention to several useful applications of this function in mathematical statistics. Some of these applications include a unified representation of the distribution of a function (or functions) of one or several…

  5. Protecting Property - USMES Teacher Resource Book. First Edition. Trial Edition.

    ERIC Educational Resources Information Center

    Bussey, Margery Koo

    This Unified Sciences and Mathematics for Elementary Schools (USMES) unit challenges students to find good ways to protect property (property in desks or lockers; animals; bicycles; tools). The challenge is general enough to apply to many problem-solving situations in mathematics, science, social science, and language arts at any elementary school…

  6. The Secondary School Mathematics Curriculum Improvement Study Goals-The Subject Matter-Accomplishments

    ERIC Educational Resources Information Center

    Fehr, Howard F.

    1970-01-01

    Describes an experimental study attempting to construct a unified school mathematics curriculum for grades seven through twelve. Study was initiated in 1965 and is to be a six-year study. The total program includes, in the following order, syllabus writing, conferences, writing of experimental textbook, education of classroom teachers, pilot class…

  7. Manufacturing - USMES Teacher Resource Book. Second Edition. Trial Edition.

    ERIC Educational Resources Information Center

    Agro, Sally

    This Unified Sciences and Mathematics for Elementary Schools (USMES) unit challenges students to find the best way to produce an item in quantities needed. The challenge is general enough to apply to many problem-solving situations in mathematics, science, social science, and language arts at any elementary school level (grades 1-8). The Teacher…

  8. Mathematics: PROJECT DESIGN. Educational Needs, Fresno, 1968, Number 12.

    ERIC Educational Resources Information Center

    Smart, James R.

    This report examines and summarizes the needs in mathematics of the Fresno City school system. The study is one in a series of needs assessment reports for PROJECT DESIGN, an ESEA Title III project administered by the Fresno City Unified School District. Theoretical concepts, rather than computational drill, would be emphasized in the proposed…

  9. A Unifying Mechanistic Model of Selective Attention in Spiking Neurons

    PubMed Central

    Bobier, Bruce; Stewart, Terrence C.; Eliasmith, Chris

    2014-01-01

    Visuospatial attention produces myriad effects on the activity and selectivity of cortical neurons. Spiking neuron models capable of reproducing a wide variety of these effects remain elusive. We present a model called the Attentional Routing Circuit (ARC) that provides a mechanistic description of selective attentional processing in cortex. The model is described mathematically and implemented at the level of individual spiking neurons, with the computations for performing selective attentional processing being mapped to specific neuron types and laminar circuitry. The model is used to simulate three studies of attention in macaque, and is shown to quantitatively match several observed forms of attentional modulation. Specifically, ARC demonstrates that with shifts of spatial attention, neurons may exhibit shifting and shrinking of receptive fields; that responses may increase without changes in selectivity for non-spatial features (i.e., response gain); and that the effect on contrast-response functions is better explained as a response-gain effect than as contrast gain. Unlike past models, ARC embodies a single mechanism that unifies the above forms of attentional modulation, is consistent with a wide array of available data, and makes several specific and quantifiable predictions. PMID:24921249

  10. Designing for Human Proportions - USMES Teacher Resource Book. Fourth Edition. Trial Edition.

    ERIC Educational Resources Information Center

    Bussey, Margery Koo

    Designing or making changes in things students use or wear is the challenge of this Unified Sciences and Mathematics for Elementary Schools (USMES) unit. The challenge is general enough to apply to many problem-solving situations in mathematics, science, social science, and language arts at any elementary school level (grades 1-8). The Teacher…

  11. Comprehensive Instructional Management System (CIMS). A Cyclical Mathematics Curriculum. Workbook Part 2. Experimental. Level K.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Div. of Curriculum and Instruction.

    This document is part 2 of the workbook for kindergarten pupils in the Comprehensive Instructional Management System, a unified mathematics curriculum for kindergarten through grade 7. Each objective is developed by a variety of strategies, with mastery of objectives diagnosed through a testing component. The activities in the student workbook are…

  12. Research of MPPT for photovoltaic generation based on two-dimensional cloud model

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Fan, Wei

    2013-03-01

    The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En, and hyper-entropy He, integrating the fuzziness and randomness of a linguistic concept in a unified way, and it provides a new method for transforming between qualitative concepts and quantitative data. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, developed by analyzing auto-optimizing MPPT control of photovoltaic power systems and combining it with cloud-model theory. Simulation results show that the cloud controller is simple, intuitive, strongly robust, and achieves better control performance.
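
    The forward normal cloud generator implied by the (Ex, En, He) triple can be sketched as follows; the numeric values are illustrative and not taken from the paper:

```python
import math
import random

def normal_cloud(Ex, En, He, n):
    """Forward normal cloud generator: returns n (drop, membership) pairs."""
    drops = []
    for _ in range(n):
        En_i = random.gauss(En, He)        # hyper-entropy He randomizes the entropy
        x = random.gauss(Ex, abs(En_i))    # cloud drop scattered around Ex
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2)) if En_i else 1.0
        drops.append((x, mu))
    return drops

random.seed(1)
cloud = normal_cloud(Ex=0.0, En=1.0, He=0.1, n=2000)
mean_x = sum(x for x, _ in cloud) / len(cloud)
```

    An MPPT controller built on this idea would map qualitative rules about the power-voltage slope onto such clouds; that controller-design step is beyond this sketch.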

  13. Investigation of the Jet Noise Prediction Theory and Application Utilizing the PAO Formulation. [mathematical model for calculating noise radiation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Application of the Phillips theory to engineering calculations of rocket and high speed jet noise radiation is reported. Presented are a detailed derivation of the theory, the composition of the numerical scheme, and discussions of the practical problems arising in the application of the present noise prediction method. The present method still contains some empirical elements, yet it provides a unified approach in the prediction of sound power, spectrum, and directivity.

  14. Yangians in Integrable Field Theories, Spin Chains and Gauge-String Dualities

    NASA Astrophysics Data System (ADS)

    Spill, Fabian

    In the following paper, which is based on the author's PhD thesis submitted to Imperial College London, we explore the applicability of Yangian symmetry to various integrable models, in particular, in relation with S-matrices. One of the main themes in this work is that, after a careful study of the mathematics of the symmetry algebras one finds that in an integrable model, one can directly reconstruct S-matrices just from the algebra. It has been known for a long time that S-matrices in integrable models are fixed by symmetry. However, Lie algebra symmetry, the Yang-Baxter equation, crossing and unitarity, which constrain the S-matrix in integrable models, are often taken to be separate, independent properties of the S-matrix. Here, we construct scattering matrices purely from the Yangian, showing that the Yangian is the right algebraic object to unify all required symmetries of many integrable models. In particular, we reconstruct the S-matrix of the principal chiral field, and, up to a CDD factor, of other integrable field theories with 𝔰𝔲(n) symmetry. Furthermore, we study the AdS/CFT correspondence, which is also believed to be integrable in the planar limit. We reconstruct the S-matrices at weak and at strong coupling from the Yangian or its classical limit. We give a pedagogical introduction into the subject, presenting a unified perspective of Yangians and their applications in physics. This paper should hence be accessible to mathematicians who would like to explore the application of algebraic objects to physics as well as to physicists interested in a deeper understanding of the mathematical origin of physical quantities.

  15. Revisiting Dosing Regimen Using Pharmacokinetic/Pharmacodynamic Mathematical Modeling: Densification and Intensification of Combination Cancer Therapy.

    PubMed

    Meille, Christophe; Barbolosi, Dominique; Ciccolini, Joseph; Freyer, Gilles; Iliadis, Athanassios

    2016-08-01

    Controlling effects of drugs administered in combination is particularly challenging with a densified regimen because of life-threatening hematological toxicities. We have developed a mathematical model to optimize drug dosing regimens and to redesign the dose intensification-dose escalation process, using densified cycles of combined anticancer drugs. A generic mathematical model was developed to describe the main components of the real process, including pharmacokinetics, safety and efficacy pharmacodynamics, and non-hematological toxicity risk. This model allowed for computing the distribution of the total drug amount of each drug in combination, for each escalation dose level, in order to minimize the average tumor mass for each cycle. This was achieved while complying with absolute neutrophil count clinical constraints and without exceeding a fixed risk of non-hematological dose-limiting toxicity. The innovative part of this work was the development of densifying and intensifying designs in a unified procedure. This model enabled us to determine the appropriate regimen in a pilot phase I/II study in patients with metastatic breast cancer for a 2-week-cycle treatment of docetaxel plus epirubicin doublet, and to propose a new dose-ranging process. In addition to the present application, this method can be further used to achieve optimization of any combination therapy, thus improving the efficacy versus toxicity balance of such a regimen.

  16. Generic model of morphological changes in growing colonies of fungi

    NASA Astrophysics Data System (ADS)

    López, Juan M.; Jensen, Henrik J.

    2002-02-01

    Fungal colonies are able to exhibit different morphologies depending on the environmental conditions. This allows them to cope with and adapt to external changes. When grown in solid or semisolid media, the bulk of the colony is compact, and several morphological transitions have been reported to occur as the external conditions are varied. Here we show how a unified simple mathematical model, which includes the effect of the accumulation of toxic metabolites, can account for the morphological changes observed. Our numerical results are in excellent agreement with experiments carried out with the fungus Aspergillus oryzae on solid agar.

  17. Probabilistic models of cognition: conceptual foundations.

    PubMed

    Chater, Nick; Tenenbaum, Joshua B; Yuille, Alan

    2006-07-01

    Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, 'sophisticated' probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.

  18. Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation

    PubMed Central

    Meyer, Arne F.; Williamson, Ross S.; Linden, Jennifer F.; Sahani, Maneesh

    2017-01-01

    Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far, ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider benefits or drawbacks of various proposals, providing examples for when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods is made freely available. PMID:28127278
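
    The classical linear receptive field model surveyed here can be illustrated with a spike-triggered average (STA), which for a Gaussian white-noise stimulus recovers the linear filter up to scale. The 5-tap filter and noisy threshold below are invented for the sketch:

```python
import random

random.seed(0)
true_k = [0.1, 0.3, 1.0, 0.3, 0.1]             # hypothetical temporal filter
T = 50000
stim = [random.gauss(0, 1) for _ in range(T)]  # Gaussian white-noise stimulus

def drive(t):
    # linear stage: filter the recent stimulus history
    return sum(true_k[i] * stim[t - i] for i in range(len(true_k)))

# static nonlinearity: a noisy threshold decides whether a spike occurs
spikes = [t for t in range(len(true_k), T)
          if drive(t) + random.gauss(0, 0.5) > 1.5]

# spike-triggered average: mean stimulus history preceding each spike
sta = [sum(stim[t - i] for t in spikes) / len(spikes)
       for i in range(len(true_k))]
```

    The recovered `sta` peaks at the same lag as `true_k`; the regularized maximum-likelihood estimators discussed in the review generalize this idea to richer models and non-Gaussian stimuli.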

  19. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-off on Phenotype Robustness in Biological Networks Part I: Gene Regulatory Networks in Systems and Evolutionary Biology

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties observed in biological systems at different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness must be sufficient to confer intrinsic robustness (to tolerate intrinsic parameter fluctuations), genetic robustness (to buffer genetic variations), and environmental robustness (to resist environmental disturbances). With this, the phenotypic stability of the biological network can be maintained, thus guaranteeing phenotype robustness. This paper presents a survey on biological systems and then develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation in systems and evolutionary biology. From this unifying mathematical framework, it was found that the phenotype robustness criterion for biological networks at different levels requires intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness. When this holds, phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in systems and evolutionary biology can also be investigated through the corresponding phenotype robustness criterion from a systematic point of view. PMID:23515240

  20. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-off on Phenotype Robustness in Biological Networks Part I: Gene Regulatory Networks in Systems and Evolutionary Biology.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties observed in biological systems at different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness must be sufficient to confer intrinsic robustness (to tolerate intrinsic parameter fluctuations), genetic robustness (to buffer genetic variations), and environmental robustness (to resist environmental disturbances). With this, the phenotypic stability of the biological network can be maintained, thus guaranteeing phenotype robustness. This paper presents a survey on biological systems and then develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation in systems and evolutionary biology. From this unifying mathematical framework, it was found that the phenotype robustness criterion for biological networks at different levels requires intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness. When this holds, phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in systems and evolutionary biology can also be investigated through the corresponding phenotype robustness criterion from a systematic point of view.

  1. A Unified Mathematical Approach to Image Analysis.

    DTIC Science & Technology

    1987-08-31

    describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.

  2. Temporal characteristics of botulinum neurotoxin therapy

    PubMed Central

    Lebeda, Frank J; Cer, Regina Z; Stephens, Robert M; Mudunuri, Uma

    2010-01-01

    Botulinum neurotoxin is a pharmaceutical treatment used for an increasing number of neurological and non-neurological indications, symptoms and diseases. Despite the wealth of clinical reports that involve the timing of the therapeutic effects of this toxin, few studies have attempted to integrate these data into unified models. Secondary reactions have also been examined including the development of adverse events, resistance to repeated applications, and nerve terminal sprouting. Our primary intent for conducting this review was to gather relevant pharmacodynamic data from suitable biomedical literature regarding botulinum neurotoxins via the use of automated data-mining techniques. We envision that mathematical models will ultimately be of value to those who are healthcare decision makers and providers, as well as clinical and basic researchers. Furthermore, we hypothesize that the combination of this computer-intensive approach with mathematical modeling will predict the percentage of patients who will favorably or adversely respond to this treatment and thus will eventually assist in developing the increasingly important area of personalized medicine. PMID:20021324

  3. Neuromorphic log-domain silicon synapse circuits obey Bernoulli dynamics: a unifying tutorial analysis

    PubMed Central

    Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579

  4. Neuromorphic log-domain silicon synapse circuits obey Bernoulli dynamics: a unifying tutorial analysis.

    PubMed

    Papadimitriou, Konstantinos I; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology.

  5. Applying multibeam sonar and mathematical modeling for mapping seabed substrate and biota of offshore shallows

    NASA Astrophysics Data System (ADS)

    Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander

    2017-06-01

    Both basic science and marine spatial planning are in need of high-resolution, spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards a unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.

  6. Introductory science and mathematics education for 21st-Century biologists.

    PubMed

    Bialek, William; Botstein, David

    2004-02-06

    Galileo wrote that "the book of nature is written in the language of mathematics"; his quantitative approach to understanding the natural world arguably marks the beginning of modern science. Nearly 400 years later, the fragmented teaching of science in our universities still leaves biology outside the quantitative and mathematical culture that has come to define the physical sciences and engineering. This strikes us as particularly inopportune at a time when opportunities for quantitative thinking about biological systems are exploding. We propose that a way out of this dilemma is a unified introductory science curriculum that fully incorporates mathematics and quantitative thinking.

  7. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques

    2016-10-01

    Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied the performance estimated in the absence of caffeine by a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine). We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
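
    The multiplicative structure the abstract describes can be sketched with a toy single-dose pharmacokinetic profile. All parameter values and both functions below are illustrative stand-ins, not the published UMP equations:

```python
import math

def caffeine_factor(dose_mg, t_h, ka=1.0, ke=0.14, emax_per_mg=0.002):
    """Toy dose-dependent caffeine factor (parameters invented for the sketch):
    a one-compartment Bateman absorption/elimination profile mapped to a
    performance-impairment multiplier <= 1."""
    conc = dose_mg * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))
    return 1.0 / (1.0 + emax_per_mg * conc)

def lapses_without_caffeine(hours_awake):
    # toy stand-in for the sleep-loss model: PVT lapses rise with time awake
    return 2.0 + 0.5 * hours_awake

# multiplicative combination, as the abstract describes
baseline = lapses_without_caffeine(24.0)
with_caffeine = baseline * caffeine_factor(200, t_h=2.0)
```

    Because the factor multiplies the caffeine-free prediction, the same dose produces a larger absolute benefit under severe sleep loss than under mild sleep loss, which is the practical point of unifying the two models.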

  8. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2016-01-01

    Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss-from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges-and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.

  9. A multiscale modelling approach to understand atherosclerosis formation: A patient-specific case study in the aortic bifurcation

    PubMed Central

    Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa

    2017-01-01

    Atherogenesis, the formation of plaques in the wall of blood vessels, starts as a result of lipid accumulation (low-density lipoprotein cholesterol) in the vessel wall. Such accumulation is related to the site of endothelial mechanotransduction, the endothelial response to mechanical stimuli and haemodynamics, which determines biochemical processes regulating the vessel wall permeability. This interaction between biomechanical and biochemical phenomena is complex, spanning different biological scales and is patient-specific, requiring tools able to capture such mathematical and biological complexity in a unified framework. Mathematical models offer an elegant and efficient way of doing this, by taking into account multifactorial and multiscale processes and mechanisms, in order to capture the fundamentals of plaque formation in individual patients. In this study, a mathematical model to understand plaque and calcification locations is presented: this model provides strong interpretability and physical meaning through a multiscale, complex index or metric (the penetration site of low-density lipoprotein cholesterol, expressed as volumetric flux). Computed tomography scans of the aortic bifurcation and iliac arteries are analysed and compared with the results of the multifactorial model. The results indicate that the model shows potential to predict the majority of plaque locations, while also correctly predicting the absence of plaque in regions where none is observed. The promising results from this case study provide a proof of concept that can be applied to a larger patient population. PMID:28427316

  10. Toward Model Building for Visual Aesthetic Perception

    PubMed Central

    Lughofer, Edwin; Zeng, Xianyi

    2017-01-01

    Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to be simulated or quantified using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194

  11. Multidisciplinary approaches to understanding collective cell migration in developmental biology.

    PubMed

    Schumacher, Linus J; Kulesa, Paul M; McLennan, Rebecca; Baker, Ruth E; Maini, Philip K

    2016-06-01

    Mathematical models are becoming increasingly integrated with experimental efforts in the study of biological systems. Collective cell migration in developmental biology is a particularly fruitful application area for the development of theoretical models to predict the behaviour of complex multicellular systems with many interacting parts. In this context, mathematical models provide a tool to assess the consistency of experimental observations with testable mechanistic hypotheses. In this review, we showcase examples from recent years of multidisciplinary investigations of neural crest cell migration. The neural crest model system has been used to study how collective migration of cell populations is shaped by cell-cell interactions, cell-environmental interactions and heterogeneity between cells. The wide range of emergent behaviours exhibited by neural crest cells in different embryonal locations and in different organisms helps us chart out the spectrum of collective cell migration. At the same time, this diversity in migratory characteristics highlights the need to reconcile or unify the array of currently hypothesized mechanisms through the next generation of experimental data and generalized theoretical descriptions. © 2016 The Authors.

  12. Fluid dynamics model of mitral valve flow: description with in vitro validation.

    PubMed

    Thomas, J D; Weyman, A E

    1989-01-01

    A lumped variable fluid dynamics model of mitral valve blood flow is described that is applicable to both Doppler echocardiography and invasive hemodynamic measurement. Given left atrial and ventricular compliance, initial pressures and mitral valve impedance, the model predicts the time course of mitral flow and atrial and ventricular pressure. The predictions of this mathematical formulation have been tested in an in vitro analog of the left heart in which mitral valve area and atrial and ventricular compliance can be accurately controlled. For the situation of constant chamber compliance, transmitral gradient is predicted to decay as a parabolic curve, and this has been confirmed in the in vitro model with r greater than 0.99 in all cases for a range of orifice area from 0.3 to 3.0 cm2, initial pressure gradient from 2.4 to 14.2 mm Hg and net chamber compliance from 16 to 29 cc/mm Hg. This mathematical formulation of transmitral flow should help to unify the Doppler echocardiographic and catheterization assessment of mitral stenosis and left ventricular diastolic dysfunction.
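    The parabolic decay described above follows from combining orifice flow, Q proportional to the square root of the gradient, with constant net compliance: the gradient then obeys d(dP)/dt = -k*sqrt(dP), whose solution is a parabola in time. A minimal numerical sketch (the constants are illustrative, not values fitted in the study):

```python
import math

# Constant-compliance caricature: orifice flow Q ~ sqrt(dP) drains two
# compliant chambers, so d(dP)/dt = -k*sqrt(dP).  k lumps orifice area,
# blood density and net compliance; the value below is illustrative only.
def simulate_gradient(dp0, k, dt=1e-4, t_end=1.0):
    """Euler integration of d(dP)/dt = -k*sqrt(dP)."""
    dp, t, traj = dp0, 0.0, []
    while t < t_end and dp > 0.0:
        traj.append((t, dp))
        dp = max(dp - k * math.sqrt(dp) * dt, 0.0)
        t += dt
    return traj

def analytic(dp0, k, t):
    """Closed form: dP(t) = (sqrt(dp0) - k*t/2)^2, a parabola in t."""
    root = max(math.sqrt(dp0) - 0.5 * k * t, 0.0)
    return root * root

traj = simulate_gradient(dp0=10.0, k=4.0)   # 10 mm Hg initial gradient
```

    With constant compliance the square root of the gradient decays linearly, so the Euler trajectory tracks the closed-form parabola until the gradient reaches zero.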

  13. Heterogeneous continuous-time random walks

    NASA Astrophysics Data System (ADS)

    Grebenkov, Denis S.; Tupikina, Liubov

    2018-01-01

    We introduce a heterogeneous continuous-time random walk (HCTRW) model as a versatile analytical formalism for studying and modeling diffusion processes in heterogeneous structures, such as porous or disordered media, multiscale or crowded environments, and weighted graphs or networks. We derive the exact form of the propagator and investigate the effects of spatiotemporal heterogeneities on the diffusive dynamics via the spectral properties of the generalized transition matrix. In particular, we show how the distribution of first-passage times changes due to local and global heterogeneities of the medium. The HCTRW formalism offers a unified mathematical language to address various diffusion-reaction problems, with numerous applications in material sciences, physics, chemistry, biology, and social sciences.
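    A rough illustration of the HCTRW idea (a hypothetical toy, not the authors' formalism): a walker on a small graph draws exponential waiting times whose rate depends on the current site, so spatial heterogeneity directly reshapes the first-passage-time statistics.

```python
import random

# Toy heterogeneous CTRW on a 4-node path graph: every site has its own
# exponential escape rate (the heterogeneity); steps are unbiased.  Rates
# and topology are invented for illustration.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rate = {0: 1.0, 1: 5.0, 2: 0.5, 3: 1.0}   # site-dependent waiting-time rates

def first_passage_time(start, target, rng):
    node, t = start, 0.0
    while node != target:
        t += rng.expovariate(rate[node])   # heterogeneous waiting time
        node = rng.choice(adj[node])       # jump to a uniformly chosen neighbour
    return t

rng = random.Random(1)
times = [first_passage_time(0, 3, rng) for _ in range(2000)]
mean_fpt = sum(times) / len(times)
```

    Making site 2 slow (rate 0.5) inflates the mean first-passage time well beyond the homogeneous case, the kind of local-heterogeneity effect the abstract refers to.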

  14. Decision Support Requirements in a Unified Life Cycle Engineering (ULCE) Environment. Volume 2. Conceptual Approaches to Optimization.

    DTIC Science & Technology

    1988-05-01

    ...in turn, is controlled by the units above it. Dynamic programming is a mathematical technique well suited for optimization of multistage models. This...interval to a desired accuracy. Several region elimination methods have been discussed in the literature, including the Golden Section, Fibonacci
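    The Golden Section method mentioned in the excerpt is a standard region-elimination scheme for unimodal functions; a generic sketch (not tied to the report's specific formulation):

```python
import math

# Golden-section search: a region-elimination method that brackets a unimodal
# minimum, shrinking the interval by a constant factor of ~0.618 per iteration.
PHI = (math.sqrt(5) - 1) / 2   # golden ratio conjugate, ~0.618

def golden_section_min(f, a, b, tol=1e-6):
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - PHI * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + PHI * (b - a)
    return 0.5 * (a + b)

x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

    Each iteration discards the subinterval that cannot contain the minimum; the Fibonacci method mentioned alongside it differs only in using Fibonacci ratios for the interior points.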

  15. Finite elements of nonlinear continua.

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1972-01-01

    The finite element method is extended to a broad class of practical nonlinear problems, treating both theory and applications from a general and unifying point of view. The thermomechanical principles of continuous media and the properties of the finite element method are outlined, and are brought together to produce discrete physical models of nonlinear continua. The mathematical properties of the models are analyzed, and the numerical solution of the equations governing the discrete models is examined. The application of the models to nonlinear problems in finite elasticity, viscoelasticity, heat conduction, and thermoviscoelasticity is discussed. Other specific topics include the topological properties of finite element models, applications to linear and nonlinear boundary value problems, convergence, continuum thermodynamics, finite elasticity, solutions to nonlinear partial differential equations, and discrete models of the nonlinear thermomechanical behavior of dissipative media.

  16. On modeling of integrated communication and control systems

    NASA Technical Reports Server (NTRS)

    Liou, Luen-Woei; Ray, Asok

    1990-01-01

    The mathematical modeling scheme proposed by Ray and Halevi (1988) for integrated communication and control systems is considered analytically, with an emphasis on the effect of introducing varying and distributed time delays to account for asynchronous time-division multiplexing in the communication part of the system. Ray and Halevi applied a state-transition concept to transform the original continuous-time model into a discrete-time model; the same approach was used by Kalman and Bertram (1959) to model various types of sampled data systems which are not subject to induced delays. The relationship between the two modeling schemes is explored, and it is shown that, although the Kalman-Bertram method has the advantage of a unified approach, it becomes inconvenient when varying delays appear in the control loop.
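    A hedged sketch of the kind of delayed discrete-time model discussed here: a scalar plant is discretized exactly over the sampling period, and the network-induced delay is modeled as a queue of control samples. The plant constants, gain, and delay are made up for illustration and are not from Ray and Halevi's scheme.

```python
import math

# Exact zero-order-hold discretization of a scalar plant x' = a*x + b*u,
# with a fixed induced delay of `delay_steps` samples in the control path.
a, b, T = -1.0, 1.0, 0.1
Ad = math.exp(a * T)                       # state transition over one sample
Bd = (math.exp(a * T) - 1.0) / a * b       # held-input contribution

def run(delay_steps, K=0.5, n=200):
    x, in_transit = 1.0, [0.0] * delay_steps
    for _ in range(n):
        in_transit.append(-K * x)          # controller output enters the network
        u = in_transit.pop(0)              # sample applied after the delay
        x = Ad * x + Bd * u
    return abs(x)

no_delay = run(0)
with_delay = run(3)                        # still stable for this small gain
```

    For this stable plant and modest gain the loop converges with or without the delay; larger delays or gains would change that, which is why the induced delay has to appear explicitly in the model.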

  17. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
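    A toy of the integration idea (the vectors are hypothetical, not derived from an LSA corpus): semantic relatedness is taken from vector cosines, as in latent semantic analysis, and feeds one landscape-style spreading-activation step.

```python
import math

# Hypothetical mini-lexicon: vector cosines stand in for LSA-derived semantic
# relatedness, feeding one spreading-activation step across text units.
vecs = {
    "doctor": [0.9, 0.1, 0.2],
    "nurse":  [0.8, 0.2, 0.1],
    "bread":  [0.1, 0.9, 0.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

activation = {"doctor": 1.0, "nurse": 0.0, "bread": 0.0}
# one spreading step: units gain activation in proportion to their
# relatedness to currently active units
new_act = {w: sum(activation[u] * cosine(vecs[u], vecs[w]) for u in vecs)
           for w in vecs}
```

    Activating "doctor" boosts the semantically close "nurse" far more than the unrelated "bread", which is the mechanism behind knowledge-based inference activation during reading.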

  18. Cartan gravity, matter fields, and the gauge principle

    NASA Astrophysics Data System (ADS)

    Westman, Hans F.; Zlosnik, Tom G.

    2013-07-01

    Gravity is commonly thought of as one of the four force fields in nature. However, in standard formulations its mathematical structure is rather different from the Yang-Mills fields of particle physics that govern the electromagnetic, weak, and strong interactions. This paper explores this dissonance with particular focus on how gravity couples to matter from the perspective of the Cartan-geometric formulation of gravity. There the gravitational field is represented by a pair of variables: (1) a 'contact vector' V^A, which is geometrically visualized as the contact point between the spacetime manifold and a model spacetime being 'rolled' on top of it, and (2) a gauge connection A_μ^{AB}, here taken to be valued in the Lie algebra of SO(2,3) or SO(1,4), which mathematically determines how much the model spacetime is rotated when rolled. By insisting on two principles, the gauge principle and polynomial simplicity, we shall show how one can reformulate matter field actions in a way that is harmonious with Cartan's geometric construction. This yields a formulation of all matter fields in terms of first-order partial differential equations. We show in detail how the standard second-order formulation can be recovered. In particular, the Hodge dual, which characterizes the structure of bosonic field equations, pops up automatically. Furthermore, the energy-momentum and spin-density three-forms are naturally combined into a single object here denoted the spin-energy-momentum three-form. Finally, we highlight a peculiarity in the mathematical structure of our first-order formulation of Yang-Mills fields. This suggests a way to unify a U(1) gauge field with gravity into an SO(1,5)-valued gauge field using a natural generalization of Cartan geometry in which the larger symmetry group is spontaneously broken down to SO(1,3)×U(1). The coupling of this unified theory to matter fields and possible extensions to non-Abelian gauge fields are left as open questions.

  19. A Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  20. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    NASA Astrophysics Data System (ADS)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

    Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, the flow is governed by the Navier-Stokes equations at the pore-scale in soils, by the Darcy law in soil columns and aquifer, and by the Navier-Stokes equations again in open water bodies (ponds, lake, river) and atmosphere surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida will be used as an example for demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and role of rooting zone biogeochemical properties (e.g., root exudates and microbial mucilage) on water storage and wetting/draining. The small scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. 
The UMSM, parameterized from the smaller-scale simulations, was then used to simulate coupled flow and moisture migration in soils in saturated and unsaturated zones, surface and groundwater exchange, and surface water flow in streams and lakes at the DWP site under dynamic precipitation conditions. Laboratory measurements of soil hydrological and biogeochemical properties are used to parameterize the UMSM at the small scales, and field measurements are used to evaluate the UMSM.

  1. An Integrated Analysis of the Physiological Effects of Space Flight: Executive Summary

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    A large array of models was applied in a unified manner to solve problems in space flight physiology. Mathematical simulation was used as an alternative way of looking at physiological systems and maximizing the yield from previous space flight experiments. A medical data analysis system was created which consists of an automated data base, a computerized biostatistical and data analysis system, and a set of simulation models of physiological systems. Five basic models were employed: (1) a pulsatile cardiovascular model; (2) a respiratory model; (3) a thermoregulatory model; (4) a circulatory, fluid, and electrolyte balance model; and (5) an erythropoiesis regulatory model. Algorithms were provided to perform routine statistical tests, multivariate analysis, nonlinear regression analysis, and autocorrelation analysis. Special purpose programs were prepared for rank correlation, factor analysis, and the integration of the metabolic balance data.

  2. Working with Functions without Understanding: An Assessment of the Perceptions of Basotho College Mathematics Specialists on the Idea of Function

    ERIC Educational Resources Information Center

    Polaki, Mokaeane Victor

    2005-01-01

    It is a well-known fact that the idea of function plays a unifying role in the development of mathematical concepts. Yet research has shown that many students do not understand it adequately even though they have experienced a great deal of success in performing a plethora of operations on function, and on using functions to solve various types of…

  3. Space-time dynamics of Stem Cell Niches: a unified approach for Plants.

    PubMed

    Pérez, Maria Del Carmen; López, Alejandro; Padilla, Pablo

    2013-06-01

    Many complex systems cannot be analyzed using traditional mathematical tools, due to their irreducible nature. This makes it necessary to develop models that can be implemented computationally to simulate their evolution. Examples of these models are cellular automata, evolutionary algorithms, complex networks, agent-based models, symbolic dynamics and dynamical systems techniques. We review some representative approaches to model the stem cell niche in Arabidopsis thaliana and the basic biological mechanisms that underlie its formation and maintenance. We propose a mathematical model based on cellular automata for describing the space-time dynamics of the stem cell niche in the root. By making minimal assumptions on the cell communication process documented in experiments, we classify the basic developmental features of the stem-cell niche, including the basic structural architecture, and suggest that they could be understood as the result of generic mechanisms given by short and long range signals. This could be a first step in understanding why different stem cell niches share similar topologies, not only in plants. It also suggests that this organization is a robust consequence of the way information is processed by the cells and is, to some extent, independent of the detailed features of the signaling mechanism.
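    A schematic 1D cellular automaton in the spirit of the model described above (the rules, ranges and sizes are invented for illustration): a cell adopts stem identity when it receives a short-range signal from a stem neighbour while lying within reach of a long-range signal from a fixed source.

```python
# Schematic automaton: stem identity requires being inside the long-range
# signal's reach AND having a stem neighbour (or already being stem).
N = 21
source = N // 2                            # long-range signal source at centre
state = [1 if i == source else 0 for i in range(N)]   # 1 = stem identity

def step(state):
    new = []
    for i, s in enumerate(state):
        short = any(state[j] == 1 for j in (i - 1, i + 1) if 0 <= j < len(state))
        longr = abs(i - source) <= 3       # within long-range signal reach
        new.append(1 if longr and (s == 1 or short) else 0)
    return new

for _ in range(5):
    state = step(state)
niche_size = sum(state)                    # stabilizes at the signal's reach
```

    The niche grows by contact until the long-range signal bounds it, a cartoon of how short- and long-range signals could jointly fix niche size and position.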

  4. Space-time dynamics of stem cell niches: a unified approach for plants.

    PubMed

    Pérez, Maria del Carmen; López, Alejandro; Padilla, Pablo

    2013-04-02

    Many complex systems cannot be analyzed using traditional mathematical tools, due to their irreducible nature. This makes it necessary to develop models that can be implemented computationally to simulate their evolution. Examples of these models are cellular automata, evolutionary algorithms, complex networks, agent-based models, symbolic dynamics and dynamical systems techniques. We review some representative approaches to model the stem cell niche in Arabidopsis thaliana and the basic biological mechanisms that underlie its formation and maintenance. We propose a mathematical model based on cellular automata for describing the space-time dynamics of the stem cell niche in the root. By making minimal assumptions on the cell communication process documented in experiments, we classify the basic developmental features of the stem-cell niche, including the basic structural architecture, and suggest that they could be understood as the result of generic mechanisms given by short and long range signals. This could be a first step in understanding why different stem cell niches share similar topologies, not only in plants. It also suggests that this organization is a robust consequence of the way information is processed by the cells and is, to some extent, independent of the detailed features of the signaling mechanism.

  5. Multiscale modelling and analysis of collective decision making in swarm robotics.

    PubMed

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.
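    The time evolution of the symmetry parameter as a continuous-time Markov process can be caricatured with a Gillespie-style simulation. The switching rates below are invented for illustration (switching toward an option is amplified quadratically by the fraction already committed to it); they are not the paper's fitted model.

```python
import random

# Gillespie-style caricature of a two-option commitment process in a swarm
# of N agents; k agents are committed to option A, N - k to option B.
N = 50

def simulate(rng, t_end=200.0):
    k, t = N // 2, 0.0                     # start undecided: half per option
    while t < t_end:
        up = (N - k) * (0.01 + (k / N) ** 2)      # B -> A switching rate
        down = k * (0.01 + ((N - k) / N) ** 2)    # A -> B switching rate
        total = up + down
        if total == 0.0:
            break
        t += rng.expovariate(total)        # waiting time to the next event
        k += 1 if rng.random() < up / total else -1
    return abs(2 * k / N - 1)              # symmetry parameter: 1 = consensus

sym = simulate(random.Random(7))
```

    The quadratic amplification makes the undecided state unstable, so the symmetry parameter tends to drift toward consensus, the integrated swarm-level behaviour that the Markov-process description captures.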

  6. A Unified Method of Finding Laplace Transforms, Fourier Transforms, and Fourier Series. [and] An Inversion Method for Laplace Transforms, Fourier Transforms, and Fourier Series. Integral Transforms and Series Expansions. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 324 and 325.

    ERIC Educational Resources Information Center

    Grimm, C. A.

    This document contains two units that examine integral transforms and series expansions. In the first module, the user is expected to learn how to use the unified method presented to obtain Laplace transforms, Fourier transforms, complex Fourier series, real Fourier series, and half-range sine series for given piecewise continuous functions. In…

  7. The evolution of Zipf's law indicative of city development

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2016-02-01

    Zipf's law of city-size distributions can be expressed by three types of mathematical models: one-parameter form, two-parameter form, and three-parameter form. The one-parameter and one of the two-parameter models are familiar to urban scientists. However, the three-parameter model and another type of two-parameter model have not attracted attention. This paper is devoted to exploring the conditions and scopes of application of these Zipf models. By mathematical reasoning and empirical analysis, new discoveries are made as follows. First, if the size distribution of cities in a geographical region cannot be described with the one- or two-parameter model, it may be characterized by the three-parameter model with a scaling factor and a scale-translational factor. Second, all these Zipf models can be unified by hierarchical scaling laws based on cascade structure. Third, the patterns of city-size distributions seem to evolve from three-parameter mode to two-parameter mode, and then to one-parameter mode. Four-year census data of Chinese cities are employed to verify the three-parameter Zipf's law and the corresponding hierarchical structure of rank-size distributions. This study helps to reveal the scientific laws of social systems and the properties of urban development.
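    For the one-parameter form, P(r) = P1 * r^(-q), the exponent can be recovered from a rank-size table by least squares on the log-log line. A minimal sketch with synthetic (exactly Zipfian) data:

```python
import math

# One-parameter Zipf model P(r) = P1 * r**(-q): build an exact rank-size
# sequence, then recover q by ordinary least squares on log P vs log r.
P1, q = 1_000_000.0, 1.0
ranks = list(range(1, 101))
sizes = [P1 * r ** (-q) for r in ranks]

xs = [math.log(r) for r in ranks]
ys = [math.log(s) for s in sizes]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
q_hat = -slope                             # fitted Zipf exponent
```

    The two- and three-parameter forms discussed in the paper add a scaling and a scale-translational factor to this baseline, so a poor log-log fit of real census data is the signal to move up the model hierarchy.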

  8. Standard model of knowledge representation

    NASA Astrophysics Data System (ADS)

    Yin, Wensheng

    2016-09-01

    Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic network, computer programming language, database, mathematical model, graphics language, natural language, etc. To establish the intrinsic link between various knowledge representation methods, a unified knowledge representation model is necessary. According to ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, can express process knowledge, and at the same time has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to solve problems of non-precision and inconsistent knowledge.

  9. Causality

    NASA Astrophysics Data System (ADS)

    Pearl, Judea

    2000-03-01

    Written by one of the pre-eminent researchers in the field, this book provides a comprehensive exposition of modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, philosophy, cognitive science, and the health and social sciences. Pearl presents a unified account of the probabilistic, manipulative, counterfactual and structural approaches to causation, and devises simple mathematical tools for analyzing the relationships between causal connections, statistical associations, actions and observations. The book will open the way for including causal analysis in the standard curriculum of statistics, artificial intelligence, business, epidemiology, social science and economics. Students in these areas will find natural models, simple identification procedures, and precise mathematical definitions of causal concepts that traditional texts have tended to evade or make unduly complicated. This book will be of interest to professionals and students in a wide variety of fields. Anyone who wishes to elucidate meaningful relationships from data, predict effects of actions and policies, assess explanations of reported events, or form theories of causal understanding and causal speech will find this book stimulating and invaluable.

  10. Science Education Attuned to Social Issues: Challenge for the '80s.

    ERIC Educational Resources Information Center

    Yager, Robert E.; And Others

    1981-01-01

    Provides rationale for interdisciplinary science curricula which emphasize decision-making skills. Includes examples of interdisciplinary curricula using an issue-centered approach: Unified Science and Mathematics for Elementary School (USMES), Health Activities Program (HAP), Human Sciences Program (HSP), Individualized Science Instructional…

  11. Poincaré's philosophy of geometry, or does geometric conventionalism deserve its name?

    NASA Astrophysics Data System (ADS)

    Zahar, E. G.

    Two main aims are pursued in this paper. The first is to show that, in mathematical geometry, Poincaré was a conventionalist who rejected all forms of synthetic a priori geometric intuition. He moreover followed a unified heuristic based on the study of certain groups of Möbius transformations. This method was informed by his work on the theory of Fuchsian functions; it yielded two models of hyperbolic geometry: the disk model and the Poincaré half-plane, which are connected by a Möbius transformation. From these group-theoretic considerations Poincaré derived an expression for the Riemannian distance. I secondly defend the thesis that, in physical geometry, Poincaré was a structural realist whose so-called conventionalism was epistemological, not ontological. Here he started directly from a Riemannian metric together with an associated universal field. He adopted a realist attitude towards both the field and that geometry which is most coherently integrated into some highly unified and empirically confirmed hypothesis. More generally, he looked upon the degree of unity of any system as an index of its verisimilitude. I finally show that, by Einstein's own admission, GTR is compatible with Poincaré's epistemological theses.

  12. Building new computational models to support health behavior change and maintenance: new opportunities in behavioral research.

    PubMed

    Spruijt-Metz, Donna; Hekler, Eric; Saranummi, Niilo; Intille, Stephen; Korhonen, Ilkka; Nilsen, Wendy; Rivera, Daniel E; Spring, Bonnie; Michie, Susan; Asch, David A; Sanna, Alberto; Salcedo, Vicente Traver; Kukakfa, Rita; Pavel, Misha

    2015-09-01

    Adverse and suboptimal health behaviors and habits are responsible for approximately 40% of preventable deaths, in addition to their unfavorable effects on quality of life and economics. Our current understanding of human behavior is largely based on static "snapshots" of human behavior, rather than ongoing, dynamic feedback loops of behavior in response to ever-changing biological, social, personal, and environmental states. This paper first discusses how new technologies (i.e., mobile sensors, smartphones, ubiquitous computing, and cloud-enabled processing/computing) and emerging systems modeling techniques enable the development of new, dynamic, and empirical models of human behavior that could facilitate just-in-time adaptive, scalable interventions. The paper then describes concrete steps to the creation of robust dynamic mathematical models of behavior, including: (1) establishing "gold standard" measures; (2) creating a behavioral ontology for shared language and understanding tools that enable dynamic theorizing across disciplines; (3) developing data sharing resources; and (4) facilitating improved sharing of mathematical models and tools to support rapid aggregation of the models. We conclude with a discussion of what might be incorporated into a "knowledge commons," which could help to bring together these disparate activities into a unified system and structure for organizing knowledge about behavior.

  13. Small perturbations in a finger-tapping task reveal inherent nonlinearities of the underlying error correction mechanism.

    PubMed

    Bavassi, M Luz; Tagliazucchi, Enzo; Laje, Rodrigo

    2013-02-01

    Time processing in the few hundred milliseconds range is involved in the human skill of sensorimotor synchronization, like playing music in an ensemble or finger tapping to an external beat. In finger tapping, a mechanistic explanation in biologically plausible terms of how the brain achieves synchronization is still missing despite considerable research. In this work we show that nonlinear effects are important for the recovery of synchronization following a perturbation (a step change in stimulus period), even for perturbation magnitudes smaller than 10% of the period, which is well below the amount of perturbation needed to evoke other nonlinear effects like saturation. We build a nonlinear mathematical model for the error correction mechanism and test its predictions, and further propose a framework that allows us to unify the description of the three common types of perturbations. While previous authors have used two different model mechanisms for fitting different perturbation types, or have fitted different parameter value sets for different perturbation magnitudes, we propose the first unified description of the behavior following all perturbation types and magnitudes as the dynamical response of a compound model with fixed terms and a single set of parameter values. Copyright © 2012 Elsevier B.V. All rights reserved.
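    A caricature of such a compound correction mechanism (the terms and parameter values are invented, not the model fitted in the paper): a linear correction plus a signed quadratic term, so the effective correction gain depends on the magnitude of the asynchrony.

```python
# Compound error-correction caricature for tapping to a metronome: the
# asynchrony e is reduced each tap by a linear term plus a signed quadratic
# term.  alpha and beta are illustrative, not fitted values.
alpha, beta = 0.5, 0.02

def recover(e0, n_steps=40):
    """Asynchrony trajectory (ms) after a step perturbation of size e0."""
    e, traj = e0, []
    for _ in range(n_steps):
        traj.append(e)
        e = e - (alpha * e + beta * e * abs(e))   # nonlinear correction
    return traj

small = recover(5.0)    # ~1% of a 500 ms period
large = recover(50.0)   # ~10% of a 500 ms period
```

    Both trajectories resynchronize, but the relative correction applied per tap differs with perturbation size, which is why a single fixed linear gain cannot fit small and large perturbations at once.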

  14. Dispersive hydrodynamics: Preface

    NASA Astrophysics Data System (ADS)

    Biondini, G.; El, G. A.; Hoefer, M. A.; Miller, P. D.

    2016-10-01

    This Special Issue on Dispersive Hydrodynamics is dedicated to the memory and work of G.B. Whitham who was one of the pioneers in this field of physical applied mathematics. Some of the papers appearing here are related to work reported on at the workshop "Dispersive Hydrodynamics: The Mathematics of Dispersive Shock Waves and Applications" held in May 2015 at the Banff International Research Station. This Preface provides a broad overview of the field and summaries of the various contributions to the Special Issue, placing them in a unified context.

  15. Mathematical correlation of modal-parameter-identification methods via system-realization theory

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    1987-01-01

    A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.
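    One of the methods unified by system-realization theory, the eigensystem realization algorithm (ERA), can be sketched in a few lines: build Hankel matrices from the impulse-response (Markov) parameters, factor by SVD, and realize the state matrix. The single-mode data below are synthetic, for illustration only.

```python
import numpy as np

# ERA in miniature: Hankel matrices built from synthetic Markov parameters
# of a single decaying mode, factored by SVD to realize the state matrix.
lam = 0.8
markov = [lam ** k for k in range(1, 11)]    # h_k = C A^(k-1) B with A = 0.8

m = 4
H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])

U, s, Vt = np.linalg.svd(H0)
r = 1                                        # order from dominant singular value
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T
S_isqrt = np.diag(1.0 / np.sqrt(sr))
A_hat = S_isqrt @ Ur.T @ H1 @ Vr @ S_isqrt   # realized state matrix
lam_hat = float(np.linalg.eigvals(A_hat).real.max())
```

    The recovered eigenvalue of the realized state matrix matches the mode that generated the data; with noisy measurements, the singular-value spectrum of H0 is what guides the model-order choice.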

  16. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view.
Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental disturbances, is also proposed, together with a simulation example. PMID:23515190
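The criterion quoted in the abstract can be written compactly; the symbols below are shorthand introduced here for the four robustness measures the abstract names, not notation taken from the paper:

```latex
% R_I: intrinsic robustness,      R_G: genetic robustness,
% R_E: environmental robustness,  R_N: network robustness
R_I + R_G + R_E \le R_N
\quad\Longrightarrow\quad
\text{phenotype robustness is maintained.}
```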

  17. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≤ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through the corresponding phenotype robustness criteria from a systematic point of view.
Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental disturbances, is also proposed, together with a simulation example.

  18. Embedding Quantum Mechanics Into a Broader Noncontextual Theory: A Conciliatory Result

    NASA Astrophysics Data System (ADS)

    Garola, Claudio; Sozzo, Sandro

    2010-12-01

    The extended semantic realism (ESR) model embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here an improved version of this model and show that it predicts that, whenever idealized measurements are performed, a modified Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequality holds if one takes into account all individual systems that are prepared, standard quantum predictions hold if one considers only the individual systems that are detected, and a standard BCHSH inequality holds at a microscopic (purely theoretical) level. These results admit an intuitive explanation in terms of an unconventional kind of unfair sampling and constitute a first example of the unified perspective that can be attained by adopting the ESR model.
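The standard quantum prediction that the abstract contrasts with the classical bound can be reproduced in a few lines. This is the textbook singlet-state CHSH computation, shown only for context; it is not the ESR model itself:

```python
import numpy as np

# Singlet-state correlator for spin measurements along angles a and b:
# E(a, b) = -cos(a - b) (standard quantum mechanics).
def E(a, b):
    return -np.cos(a - b)

# Angle settings that maximize the CHSH combination (Tsirelson configuration).
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the classical BCHSH bound of 2
```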

  19. Toward physics of the mind: Concepts, emotions, consciousness, and symbols

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.

    2006-03-01

    Mathematical approaches to modeling the mind since the 1950s are reviewed, including artificial intelligence, pattern recognition, and neural networks. I analyze difficulties faced by these algorithms and neural networks and relate them to the fundamental inconsistency of logic discovered by Gödel. Mathematical discussions are related to those in neurobiology, psychology, cognitive science, and philosophy. Higher cognitive functions are reviewed including concepts, emotions, instincts, understanding, imagination, intuition, consciousness. Then, I describe a mathematical formulation, unifying the mind mechanisms in a psychologically and neuro-biologically plausible system. A mechanism of the knowledge instinct drives our understanding of the world and serves as a foundation for higher cognitive functions. This mechanism relates aesthetic emotions and perception of beauty to “everyday” functioning of the mind. The article reviews mechanisms of human symbolic ability. I touch on future directions: joint evolution of the mind, language, consciousness, and cultures; mechanisms of differentiation and synthesis; a manifold of aesthetic emotions in music and differentiated instinct for knowledge. I concentrate on elucidating the first principles; review aspects of the theory that have been proven in laboratory research, relationships between the mind and brain; discuss unsolved problems, and outline a number of theoretical predictions, which will have to be tested in future mathematical simulations and neuro-biological research.

  20. Balkanization and Unification of Probabilistic Inferences

    ERIC Educational Resources Information Center

    Yu, Chong-Ho

    2005-01-01

    Many research-related classes in social sciences present probability as a unified approach based upon mathematical axioms, but neglect the diversity of various probability theories and their associated philosophical assumptions. Although currently the dominant statistical and probabilistic approach is the Fisherian tradition, the use of Fisherian…

  1. 1979 National Unified Entrance Examination for Institutions of Higher Education.

    ERIC Educational Resources Information Center

    Chinese Education, 1979

    1979-01-01

    The article presents translations of Chinese college entrance examinations in the fields of politics, Chinese language and literature, mathematics, humanities, physics, chemistry, history, geography, and English. Translations are also presented of the 1979 review syllabus for the same subject areas. (DB)

  2. The formal Darwinism project: a mid-term report.

    PubMed

    Grafen, A

    2007-07-01

    For 8 years I have been pursuing in print an ambitious and at times highly technical programme of work, the 'Formal Darwinism Project', whose essence is to underpin and formalize the fitness optimization ideas used by behavioural ecologists, using a new kind of argument linking the mathematics of motion and the mathematics of optimization. The value of the project is to give stronger support to current practices, while at the same time sharpening theoretical ideas and suggesting principled resolutions of some untidy areas, for example, how to define fitness. The aim is also to unify existing free-standing theoretical structures, such as inclusive fitness theory, evolutionarily stable strategy (ESS) theory and bet-hedging theory. The 40-year-old misunderstanding over the meaning of fitness optimization between mathematicians and biologists is explained. Most of the elements required for a general theory have now been implemented, but not together in the same framework, and 'general time' remains to be developed and integrated with the other elements to produce a final unified theory of neo-Darwinian natural selection.

  3. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Tradeoff on Phenotype Robustness in Biological Networks Part II: Ecological Networks

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    In ecological networks, network robustness should be large enough to confer intrinsic robustness for tolerating intrinsic parameter fluctuations, as well as environmental robustness for resisting environmental disturbances, so that the phenotype stability of ecological networks can be maintained, thus guaranteeing phenotype robustness. However, it is difficult to analyze the network robustness of ecological systems because they are complex nonlinear partial differential stochastic systems. This paper develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance sensitivity in ecological networks. We found that the phenotype robustness criterion for ecological networks is that if intrinsic robustness + environmental robustness ≤ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations and environmental disturbances. These results for robust ecological networks are similar to those for robust gene regulatory networks and evolutionary networks, even though they have different spatio-temporal scales. PMID:23515112

  4. Intelligent control of a planning system for astronaut training.

    PubMed

    Ortiz, J; Chen, G

    1999-07-01

    This work intends to design, analyze and solve, from the systems control perspective, a complex, dynamic, and multiconstrained planning system for generating training plans for crew members of the NASA-led International Space Station. Various intelligent planning systems have been developed within the framework of artificial intelligence. These planning systems generally lack a rigorous mathematical formalism to allow a reliable and flexible methodology for their design, modeling, and performance analysis in a dynamical, time-critical, and multiconstrained environment. Formulating the planning problem in the domain of discrete-event systems under a unified framework such that it can be modeled, designed, and analyzed as a control system will provide a self-contained theory for such planning systems. This will also provide a means to certify various planning systems for operations in the dynamical and complex environments in space. The work presented here completes the design, development, and analysis of an intricate, large-scale, and representative mathematical formulation for intelligent control of a real planning system for Space Station crew training. This planning system has been tested and used at NASA-Johnson Space Center.

  5. Electro-magneto interaction in fractional Green-Naghdi thermoelastic solid with a cylindrical cavity

    NASA Astrophysics Data System (ADS)

    Ezzat, M. A.; El-Bary, A. A.

    2018-01-01

    A unified mathematical model of the Green-Naghdi (GN) thermoelasticity theories, based on a fractional time-derivative of heat transfer, is constructed. The model is applied to solve a one-dimensional problem of a perfectly conducting unbounded body with a cylindrical cavity subjected to sinusoidal pulse heating in the presence of an axial uniform magnetic field. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Comparisons are made with the results predicted by the two theories. The effects of the fractional derivative parameter on the thermoelastic fields for the different theories are discussed.
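A Fourier-expansion inversion of the kind named in the abstract can be sketched as follows. The specific series (a Durbin-type formula) and the tuning constants c, T, N are standard illustrative choices, not taken from the paper:

```python
import numpy as np

# Numerical inverse Laplace transform via a Fourier-series expansion:
# f(t) ~ (e^{ct}/T) * [ F(c)/2 + sum_k Re( F(c + ik*pi/T) * e^{ik*pi*t/T} ) ],
# valid for 0 < t < 2T, with the Bromwich contour shifted to Re(s) = c.
def invlap_fourier(F, t, T=10.0, c=0.8, N=20000):
    k = np.arange(1, N + 1)
    s = c + 1j * k * np.pi / T
    series = np.real(F(s) * np.exp(1j * k * np.pi * t / T)).sum()
    return (np.exp(c * t) / T) * (np.real(F(c)) / 2.0 + series)

# Check against a transform pair with a known inverse: 1/(s+1) <-> exp(-t).
val = invlap_fourier(lambda s: 1.0 / (s + 1.0), 1.0)
print(val)  # close to exp(-1) ~ 0.3679
```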

  6. Multiscale Modelling and Analysis of Collective Decision Making in Swarm Robotics

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable. PMID:25369026
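The continuous-time Markov treatment of the symmetry parameter can be illustrated with a toy recruitment model; the dynamics below are an assumed minimal example, not the paper's calibrated model. A Gillespie simulation runs until the walk is absorbed at consensus:

```python
import numpy as np

# Toy swarm decision: N robots hold opinion A or B; each robot switches
# at a rate proportional to the opposing option's support. The symmetry
# parameter x = (nA - nB)/N then performs a continuous-time Markov walk
# that is absorbed at consensus, x = +1 or x = -1.
def gillespie_decision(N=50, k=1.0, seed=0):
    rng = np.random.default_rng(seed)
    nA, t = N // 2, 0.0
    while 0 < nA < N:
        nB = N - nA
        rate_AtoB = k * nA * nB / N        # an A-robot is recruited by B
        rate_BtoA = k * nB * nA / N        # a B-robot is recruited by A
        total = rate_AtoB + rate_BtoA
        t += rng.exponential(1.0 / total)  # time to the next switching event
        if rng.random() < rate_BtoA / total:
            nA += 1
        else:
            nA -= 1
    return (2 * nA - N) / N, t             # final symmetry parameter, decision time

x, t_decide = gillespie_decision()
print(x, t_decide)  # x is +1.0 or -1.0 at consensus
```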

  7. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case-specific and typically attempt to preserve only a few statistical moments, providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back-transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
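The parent-Gaussian idea can be sketched in a few lines. The AR(1) parent and the exponential target marginal below are illustrative choices; the paper's parametric correlation-transformation functions are not reproduced here:

```python
import numpy as np
from math import erf, sqrt

# Generate a stationary Gaussian AR(1) "parent" process z, then map it
# through the Gaussian CDF (uniform marginal) and the target inverse CDF,
# so the output inherits a related correlation structure while having
# exactly the desired marginal (here: exponential with mean 1).
rng = np.random.default_rng(42)
n, phi = 20000, 0.7

z = np.empty(n)
z[0] = rng.standard_normal()
for i in range(1, n):
    z[i] = phi * z[i - 1] + sqrt(1.0 - phi**2) * rng.standard_normal()

u = np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])  # Phi(z)
x = -np.log(1.0 - u)                                           # exponential ppf

print(x.mean(), x.min())  # mean near 1, all values non-negative
```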

  8. Real-time individualization of the unified model of performance.

    PubMed

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
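The real-time adaptation idea can be illustrated with a scalar Kalman filter on a deliberately simplified performance model (a linear decline with a known baseline); the actual UMP equations and the extended Kalman filter around them are more involved:

```python
import numpy as np

# Toy model (assumed values, not the UMP): performance y(t) = p0 - b*t + noise,
# with known baseline p0 and unknown decline rate b estimated online as each
# new measurement arrives.
rng = np.random.default_rng(1)

p0, b_true = 100.0, 0.5            # illustrative baseline and decline rate
R, Q = 1.0, 1e-6                   # measurement noise var, process noise var

b_hat, P = 0.0, 10.0               # initial estimate of b and its variance
for t in range(1, 49):             # e.g. hourly measurements over 48 h
    y = p0 - b_true * t + rng.normal(0.0, np.sqrt(R))
    H = -float(t)                  # d(prediction)/d(b)
    P += Q                         # predict step
    K = P * H / (H * P * H + R)    # Kalman gain
    b_hat += K * (y - (p0 - b_hat * t))   # update with the innovation
    P *= (1.0 - K * H)

print(b_hat)  # converges toward b_true = 0.5
```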

  9. On the Wind Generation of Water Waves

    NASA Astrophysics Data System (ADS)

    Bühler, Oliver; Shatah, Jalal; Walsh, Samuel; Zeng, Chongchun

    2016-11-01

    In this work, we consider the mathematical theory of wind generated water waves. This entails determining the stability properties of the family of laminar flow solutions to the two-phase interface Euler equation. We present a rigorous derivation of the linearized evolution equations about an arbitrary steady solution, and, using this, we give a complete proof of the instability criterion of M iles [16]. Our analysis is valid even in the presence of surface tension and a vortex sheet (discontinuity in the tangential velocity across the air-sea interface). We are thus able to give a unified equation connecting the Kelvin-Helmholtz and quasi-laminar models of wave generation.

  10. Towards a Unified Theory of Engineering Education

    ERIC Educational Resources Information Center

    Salcedo Orozco, Oscar H.

    2017-01-01

    STEM education is an interdisciplinary approach to learning where rigorous academic concepts are coupled with real-world lessons and activities as students apply science, technology, engineering, and mathematics in contexts that make connections between school, community, work, and the global enterprise enabling STEM literacy (Tsupros, Kohler and…

  11. Biodiversity patterns along ecological gradients: unifying β-diversity indices.

    PubMed

    Szava-Kovats, Robert C; Pärtel, Meelis

    2014-01-01

    Ecologists have developed an abundance of conceptions and mathematical expressions to define β-diversity, the link between local (α) and regional-scale (γ) richness, in order to characterize patterns of biodiversity along ecological (i.e., spatial and environmental) gradients. These patterns are often realized by regression of β-diversity indices against one or more ecological gradients. This practice, however, is subject to two shortcomings that can undermine the validity of the biodiversity patterns. First, many β-diversity indices are constrained to range between fixed lower and upper limits. As such, regression analysis of β-diversity indices against ecological gradients can result in regression curves that extend beyond these mathematical constraints, thus creating an interpretational dilemma. Second, despite being a function of the same measured α- and γ-diversity, the resultant biodiversity pattern depends on the choice of β-diversity index. We propose a simple logistic transformation that rids β-diversity indices of their mathematical constraints, thus eliminating the possibility of an uninterpretable regression curve. Moreover, this transformation results in identical biodiversity patterns for three commonly used classical β-diversity indices. As a result, this transformation eliminates the difficulties of both shortcomings, while allowing the researcher to use whichever β-diversity index is deemed most appropriate. We believe this method can help unify the study of biodiversity patterns along ecological gradients.
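The proposed transformation is easy to sketch. The index below (one of several bounded β-diversity formulations) is chosen purely for illustration; the paper's exact index definitions are not reproduced:

```python
import numpy as np

# A beta-diversity index bounded in (0, 1), e.g. beta = 1 - alpha/gamma,
# is freed from its mathematical constraints by a logistic (logit)
# transformation, so fitted regression curves can no longer exceed the limits.
def logit(b):
    return np.log(b / (1.0 - b))

alpha = np.array([8.0, 5.0, 2.0])    # local richness along a gradient
gamma = 10.0                          # regional richness
beta = 1.0 - alpha / gamma            # bounded in (0, 1)
beta_star = logit(beta)               # unbounded on (-inf, +inf)

print(beta, beta_star)  # monotone transform; beta = 0.5 maps to 0
```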

  12. Biodiversity Patterns along Ecological Gradients: Unifying β-Diversity Indices

    PubMed Central

    Szava-Kovats, Robert C.; Pärtel, Meelis

    2014-01-01

    Ecologists have developed an abundance of conceptions and mathematical expressions to define β-diversity, the link between local (α) and regional-scale (γ) richness, in order to characterize patterns of biodiversity along ecological (i.e., spatial and environmental) gradients. These patterns are often realized by regression of β-diversity indices against one or more ecological gradients. This practice, however, is subject to two shortcomings that can undermine the validity of the biodiversity patterns. First, many β-diversity indices are constrained to range between fixed lower and upper limits. As such, regression analysis of β-diversity indices against ecological gradients can result in regression curves that extend beyond these mathematical constraints, thus creating an interpretational dilemma. Second, despite being a function of the same measured α- and γ-diversity, the resultant biodiversity pattern depends on the choice of β-diversity index. We propose a simple logistic transformation that rids β-diversity indices of their mathematical constraints, thus eliminating the possibility of an uninterpretable regression curve. Moreover, this transformation results in identical biodiversity patterns for three commonly used classical β-diversity indices. As a result, this transformation eliminates the difficulties of both shortcomings, while allowing the researcher to use whichever β-diversity index is deemed most appropriate. We believe this method can help unify the study of biodiversity patterns along ecological gradients. PMID:25330181

  13. Statistical Teleodynamics: Toward a Theory of Emergence.

    PubMed

    Venkatasubramanian, Venkat

    2017-10-24

    The central scientific challenge of the 21st century is developing a mathematical theory of emergence that can explain and predict phenomena such as consciousness and self-awareness. The most successful research program of the 20th century, reductionism, which goes from the whole to parts, seems unable to address this challenge. This is because addressing this challenge inherently requires an opposite approach, going from parts to the whole. In addition, reductionism, by the very nature of its inquiry, typically does not concern itself with teleology or purposeful behavior. Modeling emergence, in contrast, requires addressing teleology. Together, these two requirements present a formidable challenge in developing a successful mathematical theory of emergence. In this article, I describe a new theory of emergence, called statistical teleodynamics, that addresses certain aspects of the general problem. Statistical teleodynamics is a mathematical framework that unifies three seemingly disparate domains (purpose-free entities in statistical mechanics, human-engineered teleological systems in systems engineering, and nature-evolved teleological systems in biology and sociology) within the same conceptual formalism. This theory rests on several key conceptual insights, the most important one being the recognition that entropy mathematically models the concept of fairness in economics and philosophy and, equivalently, the concept of robustness in systems engineering. These insights help prove that the fairest inequality of income is a log-normal distribution, which will emerge naturally at equilibrium in an ideal free market society. Similarly, the theory predicts the emergence of the three classes of network organization (exponential, scale-free, and Poisson) seen widely in a variety of domains.
Statistical teleodynamics is the natural generalization of statistical thermodynamics, the most successful parts-to-whole systems theory to date, but this generalization is only a modest step toward a more comprehensive mathematical theory of emergence.

  14. Actuality of transcendental æsthetics for modern physics

    NASA Astrophysics Data System (ADS)

    Petitot, Jean

    1. The more mathematics and physics unify themselves in modern physico-mathematical theories, the more an objective epistemology becomes necessary. Only such a transcendental epistemology is able to thematize correctly the status of the mathematical determination of physical reality. 2. There exists a transcendental history of the synthetic a priori and of the construction of physical categories. 3. The transcendental approach makes it possible to supersede Wittgenstein's and Carnap's antiplatonist thesis, according to which pure mathematics is physically applicable only if it lacks any descriptive (cognitive or objective) content and reduces to mere prescriptive and normative devices. In fact, pure mathematics is prescriptive-normative in physics because: (i) the categories of physical objectivity are prescriptive-normative, and (ii) their categorial content is mathematically “constructed” through a Transcendental Aesthetics. Only a transcendental approach makes compatible, on the one hand, a grammatical conventionalism of the Wittgensteinian or Carnapian type and, on the other hand, a platonist realism of the Gödelian type. Mathematics is not a grammar of the world but a mathematical hermeneutics of the intuitive forms and of the categorial grammar of the world.

  15. Separation of Variables and Superintegrability; The symmetry of solvable systems

    NASA Astrophysics Data System (ADS)

    Kalnins, Ernest G.; Kress, Jonathan M.; Miller, Willard, Jr.

    2018-06-01

    Separation of variables methods for solving partial differential equations are of immense theoretical and practical importance in mathematical physics. They are the most powerful tool known for obtaining explicit solutions of the partial differential equations of mathematical physics. The purpose of this book is to give an up-to-date presentation of the theory of separation of variables and its relation to superintegrability. By collating results scattered in the literature and presenting them in a unified, updated and more accessible manner, the authors have prepared an invaluable resource for mathematicians and mathematical physicists in particular, as well as for researchers in science, engineering, geology and biology who are interested in explicit solutions.

  16. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for the determination of kinetic parameters in predictive microbiology. The algorithm is incorporated into user-friendly graphical user interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and to help them properly select the initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent variables. The software was tested with a total of 9 different combinations of primary and secondary models for the growth and survival of various microorganisms. The results of the data analysis show that the software provides accurate estimates of kinetic parameters. In addition, it can be used to improve experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
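One-step global regression can be sketched on synthetic data. The models and all numbers below are illustrative, not the IPMP implementation: every isothermal curve is fitted simultaneously against a primary growth model whose rate parameter comes from a secondary square-root model:

```python
import numpy as np

# Primary model:   log N(t) = logN0 + mu(T) * t     (log-linear growth, toy)
# Secondary model: sqrt(mu) = b * (T - Tmin)        (Ratkowsky square-root)
rng = np.random.default_rng(7)
logN0, b, Tmin = 3.0, 0.05, 5.0                   # "true" synthetic parameters
temps = [12.0, 20.0, 28.0]
t = np.linspace(0.0, 4.0, 9)

data = []
for T in temps:                                    # synthetic isothermal curves
    mu = (b * (T - Tmin)) ** 2
    data.append((T, logN0 + mu * t + rng.normal(0.0, 0.05, t.size)))

# One-step global fit: for a fixed Tmin the pooled model is linear in
# [logN0, b^2], so scan Tmin and solve a single least-squares problem over
# ALL curves at once, keeping the Tmin with the smallest global SSE.
best = None
for Tm in np.linspace(0.0, 10.0, 201):
    X = np.vstack([[1.0, (T - Tm) ** 2 * ti] for T, _ in data for ti in t])
    obs = np.hstack([y for _, y in data])
    theta, res, *_ = np.linalg.lstsq(X, obs, rcond=None)
    sse = float(res[0])
    if best is None or sse < best[0]:
        best = (sse, Tm, theta)

_, Tmin_hat, (logN0_hat, b2_hat) = best
print(Tmin_hat, logN0_hat, np.sqrt(b2_hat))        # near the true 5.0, 3.0, 0.05
```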

  17. FORUM: The Algorithmic Way of Life is Best and Responses.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.; And Others

    1985-01-01

    The forum is focused on thinking about and with algorithms as a way of unifying all one's mathematical endeavors. The lead article by Maurer presents examples and discussion of this point. Responses, often disagreeing with his views, are by Douglas, Korte, Hilton, Renz, Smorynski, Hammersley, and Halmos. (MNS)

  18. The Functionator 3000: Transforming Numbers and Children

    ERIC Educational Resources Information Center

    Fisher, Elaine Cerrato; Roy, George; Reeves, Charles

    2013-01-01

    Mrs. Fisher's class was learning about arithmetic functions by pretending to operate real-world "function machines" (Reeves 2006). Functions are a unifying mathematics topic, and a great deal of emphasis is placed on understanding them in prekindergarten through grade 12 (Kilpatrick and Izsák 2008). In its Algebra Content Standard, the…

  19. It Works: Project R-3, San Jose, California.

    ERIC Educational Resources Information Center

    American Institutes for Research in the Behavioral Sciences, Palo Alto, CA.

    A project was designed by the San Jose Unified School District and the education division of the Lockheed Missiles and Space Company to treat learning problems experienced by eighth and ninth grade students with underdeveloped reading and mathematics skills. The students were largely Mexican American and were from predominantly disadvantaged…

  20. Science, Math, and Technology. K-6 Science Curriculum.

    ERIC Educational Resources Information Center

    Blueford, J. R.; And Others

    Science, Math and Technology is one of the units of a K-6 unified science curriculum program. The unit consists of four organizing sub-themes: (1) science (with activities on observation, comparisons, and the scientific method); (2) technology (examining simple machines, electricity, magnetism, waves and forces); (3) mathematics (addressing skill…

  1. The Markov process admits a consistent steady-state thermodynamic formalism

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, the corresponding formalisms of steady-state thermodynamics for the master equation and the Fokker-Planck equation can be rigorously derived from it. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and the Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also converges to that for the master equation as the discretization step becomes smaller and smaller. Our analysis indicates that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.
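A steady-state formalism of this kind can be made concrete for a small jump process. The rates below are arbitrary, and the entropy-production expression is the standard Schnakenberg form, used here only to illustrate a self-consistent steady state, not the paper's derivation:

```python
import numpy as np

# A 3-state Markov jump process with arbitrary (positive) rates.
W = np.array([[0.0, 2.0, 1.0],     # W[i, j] = jump rate from state i to j
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])

G = W - np.diag(W.sum(axis=1))     # generator: dp/dt = p @ G

# Stationary distribution: solve p @ G = 0 together with sum(p) = 1.
A = np.vstack([G.T, np.ones(3)])
bvec = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, bvec, rcond=None)

# Steady-state entropy production rate (Schnakenberg form):
# non-negative, and zero exactly when detailed balance holds.
epr = 0.0
for i in range(3):
    for j in range(3):
        if i != j:
            epr += 0.5 * (p[i] * W[i, j] - p[j] * W[j, i]) * np.log(
                (p[i] * W[i, j]) / (p[j] * W[j, i]))

print(p, epr)  # stationary distribution; positive entropy production
```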

  2. Mathematical correlation of modal parameter identification methods via system realization theory

    NASA Technical Reports Server (NTRS)

    Juang, J. N.

    1986-01-01

    A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.
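One realization-based identification step can be sketched as follows. The single 5 Hz mode and all numerical settings are an invented example; what is shown is an ERA-style Hankel/SVD procedure, not the report's full treatment:

```python
import numpy as np

# Identify a modal frequency from impulse-response Markov parameters:
# build shifted Hankel matrices, realize a state matrix A via SVD, and
# read the continuous-time pole from the eigenvalues of A.
dt, wn, zeta = 0.01, 2 * np.pi * 5.0, 0.02      # 5 Hz mode, 2% damping
wd = wn * np.sqrt(1 - zeta**2)
k = np.arange(1, 121)
h = np.exp(-zeta * wn * k * dt) * np.sin(wd * k * dt)   # Markov parameters

m = 40
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

U, S, Vt = np.linalg.svd(H0)
r = 2                                            # one mode -> 2 states
Sr = np.diag(1.0 / np.sqrt(S[:r]))
A = Sr @ U[:, :r].T @ H1 @ Vt[:r, :].T @ Sr      # realized state matrix

s = np.log(np.linalg.eigvals(A)) / dt            # discrete -> continuous poles
wn_est = np.abs(s[0])
zeta_est = -s[0].real / wn_est
print(wn_est / (2 * np.pi), zeta_est)            # ~5.0 Hz, ~0.02
```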

  3. A Common Mechanism Underlying Food Choice and Social Decisions.

    PubMed

    Krajbich, Ian; Hare, Todd; Bartling, Björn; Morishima, Yosuke; Fehr, Ernst

    2015-10-01

    People make numerous decisions every day including perceptual decisions such as walking through a crowd, decisions over primary rewards such as what to eat, and social decisions that require balancing own and others' benefits. The unifying principles behind choices in various domains are, however, still not well understood. Mathematical models that describe choice behavior in specific contexts have provided important insights into the computations that may underlie decision making in the brain. However, a critical and largely unanswered question is whether these models generalize from one choice context to another. Here we show that a model adapted from the perceptual decision-making domain and estimated on choices over food rewards accurately predicts choices and reaction times in four independent sets of subjects making social decisions. The robustness of the model across domains provides behavioral evidence for a common decision-making process in perceptual, primary reward, and social decision making.
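The class of model described (a sequential-sampling process adapted from perceptual decision-making) can be sketched minimally. The parameters are invented for illustration, and the paper's fitted model is richer:

```python
import numpy as np

# Minimal drift-diffusion sketch: evidence accumulates with drift v and
# noise sigma until it hits +a (chosen option) or -a (alternative),
# jointly producing a choice and a reaction time on every trial.
rng = np.random.default_rng(3)

def ddm_trial(v=1.0, a=1.0, dt=0.005, sigma=1.0, rng=rng):
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= a), t            # (chose the drift-favoured option?, RT)

trials = [ddm_trial() for _ in range(500)]
acc = np.mean([c for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(acc, mean_rt)               # accuracy well above chance; RT in seconds
```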

  4. A Common Mechanism Underlying Food Choice and Social Decisions

    PubMed Central

    Krajbich, Ian; Hare, Todd; Bartling, Björn; Morishima, Yosuke; Fehr, Ernst

    2015-01-01

    People make numerous decisions every day including perceptual decisions such as walking through a crowd, decisions over primary rewards such as what to eat, and social decisions that require balancing own and others’ benefits. The unifying principles behind choices in various domains are, however, still not well understood. Mathematical models that describe choice behavior in specific contexts have provided important insights into the computations that may underlie decision making in the brain. However, a critical and largely unanswered question is whether these models generalize from one choice context to another. Here we show that a model adapted from the perceptual decision-making domain and estimated on choices over food rewards accurately predicts choices and reaction times in four independent sets of subjects making social decisions. The robustness of the model across domains provides behavioral evidence for a common decision-making process in perceptual, primary reward, and social decision making. PMID:26460812

  5. An overview of quantitative approaches in Gestalt perception.

    PubMed

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception, and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state of the art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research, and there is a clear trend to apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Toward a unified model of passive drug permeation II: the physiochemical determinants of unbound tissue distribution with applications to the design of hepatoselective glucokinase activators.

    PubMed

    Ghosh, Avijit; Maurer, Tristan S; Litchfield, John; Varma, Manthena V; Rotter, Charles; Scialis, Renato; Feng, Bo; Tu, Meihua; Guimaraes, Cris R W; Scott, Dennis O

    2014-10-01

    In this work, we leverage a mathematical model of the underlying physiochemical properties of tissues and molecules to support the development of hepatoselective glucokinase activators. Passive distribution is modeled via a Fick-Nernst-Planck approach, using in vitro experimental data to estimate the permeability of both ionized and neutral species. The model accounts for pH and electrochemical potential across cellular membranes, ionization according to Henderson-Hasselbalch, passive permeation of the neutral species using Fick's law, and passive permeation of the ionized species using the Nernst-Planck equation. The mathematical model of the physiochemical system allows derivation of a single set of parameters governing the distribution of drug molecules across multiple conditions both in vitro and in vivo. A case study using this approach in the development of hepatoselective glucokinase activators via organic anion-transporting polypeptide-mediated hepatic uptake and impaired passive distribution to the pancreas is described. The results for these molecules indicate that the permeability penalty of the ionized form is offset by its relative abundance, leading to passive pancreatic exclusion according to the Nernst-Planck extension of Fickian passive permeation. Generally, this model serves as a useful construct for drug discovery scientists to understand subcellular exposure of acids or bases using specific physiochemical properties. Copyright © 2014 by The American Society for Pharmacology and Experimental Therapeutics.
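
    The Henderson-Hasselbalch step mentioned above fixes the ionized fraction of a drug at a given pH; a minimal sketch (the pKa and pH values are illustrative, not the paper's compounds):

```python
# Fraction of a monoprotic acid in the ionized (deprotonated) form at a given
# pH, per Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA]).
def fraction_ionized_acid(ph, pka):
    ratio = 10.0 ** (ph - pka)       # [A-]/[HA]
    return ratio / (1.0 + ratio)

# Example: an acid with pKa 4.5 is almost fully ionized at cytosolic pH 7.2,
# so the slow-permeating charged species dominates the total pool.
print(round(fraction_ionized_acid(7.2, 4.5), 4))   # prints 0.998
```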

  7. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing, based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
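
    One of the discrete distance transforms alluded to can be written as a min-sum difference equation evaluated in two raster sweeps; a minimal city-block sketch (the paper's general 2-D max/min-sum systems are broader):

```python
# Discrete distance transform as a min-sum recursion (two raster sweeps),
# the kind of nonlinear difference equation related to the eikonal equation.
# City-block metric on a binary image (0 = feature, 1 = background).
INF = 10**9

def distance_transform(img):
    h, w = len(img), len(img[0])
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    for y in range(h):               # forward sweep: top/left neighbors
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in reversed(range(h)):     # backward sweep: bottom/right neighbors
        for x in reversed(range(w)):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

img = [[1, 1, 1],
       [1, 0, 1],
       [1, 1, 1]]
print(distance_transform(img))   # → [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
```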

  8. Unified path integral approach to theories of diffusion-influenced reactions

    NASA Astrophysics Data System (ADS)

    Prüstel, Thorsten; Meier-Schellersheim, Martin

    2017-08-01

    Building on mathematical similarities between quantum mechanics and theories of diffusion-influenced reactions, we develop a general approach for computational modeling of diffusion-influenced reactions that is capable of capturing not only the classical Smoluchowski picture but also alternative theories, as is here exemplified by a volume reactivity model. In particular, we prove the path decomposition expansion of various Green's functions describing the irreversible and reversible reaction of an isolated pair of molecules. To this end, we exploit a connection between boundary value and interaction potential problems with δ- and δ'-function perturbations. We employ a known path-integral-based summation of a perturbation series to derive a number of exact identities relating propagators and survival probabilities satisfying different boundary conditions in a unified and systematic manner. Furthermore, we show how the path decomposition expansion represents the propagator as a product of three factors in the Laplace domain that correspond to quantities figuring prominently in stochastic spatially resolved simulation algorithms. This analysis will thus be useful for the interpretation of current and the design of future algorithms. Finally, we discuss the relation between the general approach and the theory of Brownian functionals and calculate the mean residence time for the case of irreversible and reversible reactions.
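
    For orientation, the classical Smoluchowski limit that the unified approach reproduces has a closed-form survival probability for an irreversible isolated pair; a sketch with illustrative parameter values:

```python
import math

# Survival probability of an isolated pair in the classical Smoluchowski
# picture (perfectly absorbing sphere of radius R, relative diffusivity D):
#   S(t | r0) = 1 - (R / r0) * erfc((r0 - R) / sqrt(4 D t))
def survival_irreversible(t, r0, R, D):
    return 1.0 - (R / r0) * math.erfc((r0 - R) / math.sqrt(4.0 * D * t))

R, D, r0 = 1.0, 1.0, 2.0
# As t -> infinity, erfc(...) -> 1, so the escape probability tends to 1 - R/r0.
print(round(survival_irreversible(1e6, r0, R, D), 3))   # ≈ 0.5
```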

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, the liquid-vapor interface and the vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.

  10. Survey of meshless and generalized finite element methods: A unified approach

    NASA Astrophysics Data System (ADS)

    Babuška, Ivo; Banerjee, Uday; Osborn, John E.

    In the past few years meshless methods for numerically solving partial differential equations have come into the focus of interest, especially in the engineering community. This class of methods was essentially stimulated by difficulties related to mesh generation. Mesh generation is delicate in many situations, for instance, when the domain has complicated geometry; when the mesh changes with time, as in crack propagation, and remeshing is required at each time step; when a Lagrangian formulation is employed, especially with nonlinear PDEs. In addition, the need for flexibility in the selection of approximating functions (e.g., the flexibility to use non-polynomial approximating functions) has played a significant role in the development of meshless methods. There are many recent papers, and two books, on meshless methods; most of them are of an engineering character, without any mathematical analysis. In this paper we address meshless methods and the closely related generalized finite element methods for solving linear elliptic equations, using variational principles. We give a unified mathematical theory with proofs, briefly address implementational aspects, present illustrative numerical examples, and provide a list of references to the current literature. The aim of the paper is to provide a survey of a part of this new field, with emphasis on mathematics. We present proofs of essential theorems because we feel these proofs are essential for the understanding of the mathematical aspects of meshless methods, which has approximation theory as a major ingredient. As always, any new field is stimulated by and related to older ideas. This will be visible in our paper.

  11. Cognitively Guided Instruction: An Implementation Case Study of a High Performing School District

    ERIC Educational Resources Information Center

    Dowdy, William D. B.

    2011-01-01

    No Child Left Behind legislation developed goals for every student to be proficient in each academic subject by 2014. California's students are far from meeting this goal, especially in mathematics. One Southern Californian school district, renamed Green Valley Unified School District for anonymity, began using Cognitively Guided Instruction…

  12. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine…alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters…systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  13. University Students' Conceptual Knowledge of Randomness and Probability in the Contexts of Evolution and Mathematics

    ERIC Educational Resources Information Center

    Fiedler, Daniela; Tröbst, Steffen; Harms, Ute

    2017-01-01

    Students of all ages face severe conceptual difficulties regarding key aspects of evolution-- the central, unifying, and overarching theme in biology. Aspects strongly related to abstract "threshold" concepts like randomness and probability appear to pose particular difficulties. A further problem is the lack of an appropriate instrument…

  14. Superalgebra and fermion-boson symmetry

    PubMed Central

    Miyazawa, Hironari

    2010-01-01

    Fermions and bosons are quite different kinds of particles, but it is possible to unify them in a supermultiplet, by introducing a new mathematical scheme called superalgebra. In this article we discuss the development of the concept of symmetry, starting from the rotational symmetry and finally arriving at this fermion-boson (FB) symmetry. PMID:20228617

  15. Unifying the Algebra for All Movement

    ERIC Educational Resources Information Center

    Eddy, Colleen M.; Quebec Fuentes, Sarah; Ward, Elizabeth K.; Parker, Yolanda A.; Cooper, Sandi; Jasper, William A.; Mallam, Winifred A.; Sorto, M. Alejandra; Wilkerson, Trena L.

    2015-01-01

    There exists an increased focus on school mathematics, especially first-year algebra, due to recent efforts for all students to be college and career ready. In addition, there are calls, policies, and legislation advocating for all students to study algebra epitomized by four rationales of the "Algebra for All" movement. In light of this…

  16. Biology. USMES Beginning "How To" Set.

    ERIC Educational Resources Information Center

    Agro, Sally; And Others

    In this set of two booklets for primary grades, students learn how to make a home for their animals (amphibians, insects, fish, crayfish) and a home for their rodents (hamsters, guinea pigs, gerbils, mice). The major emphasis in all Unified Sciences and Mathematics for Elementary Schools (USMES) units is on open-ended, long-range investigations of…

  17. Design Lab. USMES "How To" Series.

    ERIC Educational Resources Information Center

    Donahoe, Charles; And Others

    The major emphasis in all Unified Sciences and Mathematics for Elementary Schools (USMES) units is on open-ended, long-range investigations of real problems. Since children often design and build things in USMES, 26 "Design Lab" cards provide information on the safe use and simple maintenance of tools. Each card has a large photograph of…

  18. Equilibrium econophysics: A unified formalism for neoclassical economics and equilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Sousa, Tânia; Domingos, Tiago

    2006-11-01

    We develop a unified conceptual and mathematical structure for equilibrium econophysics, i.e., the use of concepts and tools of equilibrium thermodynamics in neoclassical microeconomics and vice versa. Within this conceptual structure the results obtained in microeconomic theory are: (1) the definition of irreversibility in economic behavior; (2) the clarification that the Engel curve and the offer curve are not descriptions of real processes dictated by the maximization of utility at constant endowment; (3) the derivation of a relation between elasticities proving that economic elasticities are not all independent; (4) the proof that Giffen goods do not exist in a stable equilibrium; (5) the derivation that ‘economic integrability’ is equivalent to the generalized Le Chatelier principle and (6) the definition of a first order phase transition, i.e., a transition between separate points in the utility function. In thermodynamics the results obtained are: (1) a relation between the non-dimensional isothermal and adiabatic compressibilities and the increase or decrease in the thermodynamic potentials; (2) the distinction between mathematical integrability and optimization behavior and (3) the generalization of the Clapeyron equation.

  19. Theory of Remote Image Formation

    NASA Astrophysics Data System (ADS)

    Blahut, Richard E.

    2004-11-01

    In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. It presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation, illustrates theoretical concepts with reference to practical applications, and provides insights into the design parameters of real systems.

  20. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard root analysis

    NASA Astrophysics Data System (ADS)

    Grzeszczuk, A.; Kowalski, S.

    2015-04-01

    Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to speed up graphics processing through massively parallel calculation. The success of this solution has opened General-Purpose Graphics Processing Unit (GPGPU) technology to applications not coupled with graphics. GPGPU systems can be applied as an effective tool for reducing the huge data volumes of pulse-shape-analysis measurements, either by on-line recalculation or by very fast compression. The simplified structure of the CUDA system and a programming model based on the example of an Nvidia GeForce GTX 580 card are presented in our poster contribution, both in a stand-alone version and as a ROOT application.

  1. On the fine-structure constant in a plasma model of the fluctuating vacuum substratum

    NASA Technical Reports Server (NTRS)

    Cragin, B. L.

    1986-01-01

    The existence of an intimate connection between the quivering motion of electrons and positrons (Zitterbewegung), predicted by the Dirac equation, and the zero-point fluctuations of the vacuum is suggested. The nature of the proposed connection is discussed quantitatively, and an approximate self-consistency relation is derived, supplying a purely mathematical expression that relates the dimensionless coupling strengths (fine-structure constants) alpha sub e and alpha sub g of electromagnetism and gravity. These considerations provide a tentative explanation for the heretofore puzzling number 1/alpha sub e of about 137.036 and suggest that attempts to unify gravity with the electroweak and strong interactions will ultimately prove successful.
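
    For reference, the electromagnetic coupling strength alpha sub e mentioned above can be computed directly from CODATA constants; a small sketch (constant values quoted to the precision shown):

```python
import math

# Electromagnetic fine-structure constant from CODATA values of the
# elementary charge, electric constant, hbar and c:
#   alpha_e = e^2 / (4 pi eps0 hbar c) ≈ 1/137.036
e = 1.602176634e-19        # C (exact in the 2019 SI)
hbar = 1.054571817e-34     # J s
c = 299792458.0            # m/s (exact)
eps0 = 8.8541878128e-12    # F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(round(1 / alpha, 3))   # → 137.036
```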

  2. Control and optimization in the modeling of fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Sivasundaram, Seenith

    2016-07-01

    The review paper [1] is devoted to the survey of different structures that have been developed for the modeling and analysis of various types of fibrosis. Biomathematics, bioinformatics, biomechanics and biophysics modeling have been treated by means of a brief description of the different models developed. The review is impressive and clearly written, addressed to a reader interested not only in the theoretical modeling but also in the biological description. The models have been described without resorting to technical statements or mathematical equations, thus allowing the non-specialist reader to understand which framework is more suitable at a certain observation scale. The review [1] concludes with the possibility of developing a multiscale approach, considering also the definition of a therapeutic strategy for pathological fibrosis. In particular, the control and optimization of therapeutic action is an important issue, and this article aims to comment on this topic.

  3. Historical perspective on lead biokinetic models.

    PubMed Central

    Rabinowitz, M

    1998-01-01

    A historical review of the development of biokinetic models of lead is presented. Biokinetics is interpreted narrowly to mean only physiologic processes happening within the body. Proceeding chronologically, for each epoch the measurements of lead in the body are presented along with mathematical models, in an attempt to trace the convergence of observations from two disparate fields--occupational medicine and radiologic health--into some unified models. Kehoe's early balance studies and the use of radioactive lead tracers are presented. The 1960s saw the joint application of radioactive lead techniques and simple compartmental kinetic models used to establish the exchange rates and residence times of lead in body pools. The application of stable isotopes to questions of the magnitudes of respired and ingested inputs required the development of a simple three-pool model. During the 1980s more elaborate models were developed. One of their key goals was the establishment of the dose-response relationship between exposure to lead and biologic precursors of adverse health effects. PMID:9860905

  4. State-transition diagrams for biologists.

    PubMed

    Bersini, Hugues; Klatzmann, David; Six, Adrien; Thomas-Vaslin, Véronique

    2012-01-01

    It is clearly in the tradition of biologists to conceptualize the dynamical evolution of biological systems in terms of state-transitions of biological objects. This paper is mainly concerned with (but obviously not limited to) the immunological branch of biology and shows how the adoption of UML (Unified Modeling Language) state-transition diagrams can ease the modeling, understanding, coding, manipulation and documentation of population-based immune software models, generally defined as a set of ordinary differential equations (ODE) describing the evolution in time of populations of various biological objects. Moreover, that same UML adoption naturally entails a far from negligible representational economy, since one graphical item of the diagram might otherwise have to be repeated in various places of the mathematical model. First, the main graphical elements of the UML state-transition diagram and how they can be mapped onto a corresponding ODE mathematical model are presented. Then, two already published immune models of thymocyte behavior and time evolution in the thymus, the first one originally conceived as an ODE population-based model and the second one as an agent-based one, are refactored and expressed in state-transition form so as to make them much easier to understand and their respective code easier to access, modify and run. As an illustrative proof, any immunologist should be able to understand faithfully enough what the two software models are supposed to reproduce and how they execute, with no need to plunge into the Java or Fortran lines.
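
    A toy version of such a population-based ODE model (the two states and rate constants here are invented, not the published thymocyte models) shows the state-transition-to-ODE mapping:

```python
# Two hypothetical cell states, "immature" (I) and "mature" (M), with a
# state transition I -> M at rate k, influx s, and death of M at rate d:
#   dI/dt = s - k*I        dM/dt = k*I - d*M
# Integrated here with simple forward-Euler steps.
def simulate(s=10.0, k=0.1, d=0.05, dt=0.01, t_end=500.0):
    I, M, t = 0.0, 0.0, 0.0
    while t < t_end:
        dI = s - k * I
        dM = k * I - d * M
        I, M = I + dt * dI, M + dt * dM
        t += dt
    return I, M

I, M = simulate()
# Steady state: I* = s/k = 100, M* = k*I*/d = 200.
print(round(I), round(M))   # → 100 200
```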

  5. State-Transition Diagrams for Biologists

    PubMed Central

    Bersini, Hugues; Klatzmann, David; Six, Adrien; Thomas-Vaslin, Véronique

    2012-01-01

    It is clearly in the tradition of biologists to conceptualize the dynamical evolution of biological systems in terms of state-transitions of biological objects. This paper is mainly concerned with (but obviously not limited to) the immunological branch of biology and shows how the adoption of UML (Unified Modeling Language) state-transition diagrams can ease the modeling, understanding, coding, manipulation and documentation of population-based immune software models, generally defined as a set of ordinary differential equations (ODE) describing the evolution in time of populations of various biological objects. Moreover, that same UML adoption naturally entails a far from negligible representational economy, since one graphical item of the diagram might otherwise have to be repeated in various places of the mathematical model. First, the main graphical elements of the UML state-transition diagram and how they can be mapped onto a corresponding ODE mathematical model are presented. Then, two already published immune models of thymocyte behavior and time evolution in the thymus, the first one originally conceived as an ODE population-based model and the second one as an agent-based one, are refactored and expressed in state-transition form so as to make them much easier to understand and their respective code easier to access, modify and run. As an illustrative proof, any immunologist should be able to understand faithfully enough what the two software models are supposed to reproduce and how they execute, with no need to plunge into the Java or Fortran lines. PMID:22844438

  6. Bridging the Vector Calculus Gap

    NASA Astrophysics Data System (ADS)

    Dray, Tevian; Manogue, Corinne

    2003-05-01

    As with Britain and America, mathematicians and physicists are separated from each other by a common language. In a nutshell, mathematics is about functions, but physics is about things. For the last several years, we have led an NSF-supported effort to "bridge the vector calculus gap" between mathematics and physics. The unifying theme we have discovered is to emphasize geometric reasoning, not (just) algebraic computation. In this talk, we will illustrate the language differences between mathematicians and physicists, and how we are trying to reconcile them in the classroom. For further information about the project go to: http://www.physics.orst.edu/bridge

  7. Stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
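
    A standard numerical entry point to Ito SDEs of the kind the book treats is the Euler-Maruyama scheme; a minimal sketch for an Ornstein-Uhlenbeck process (parameters illustrative):

```python
import random

# Euler-Maruyama for the Ito SDE  dX = -theta * X dt + sigma dW:
# each step adds the drift term plus a sqrt(dt)-scaled Gaussian increment.
def ou_path(theta=1.0, sigma=0.5, x0=2.0, dt=0.001, n=5000, seed=1):
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        x += -theta * x * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return x

# The process mean-reverts toward 0 (stationary variance sigma**2 / (2*theta)).
xs = [ou_path(seed=s) for s in range(200)]
mean = sum(xs) / len(xs)
print(abs(mean) < 0.2)
```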

  8. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare.

    PubMed

    Ali, Rahman; Siddiqi, Muhammad Hameed; Idris, Muhammad; Ali, Taqdir; Hussain, Shujaat; Huh, Eui-Nam; Kang, Byeong Ho; Lee, Sungyoung

    2015-07-02

    A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and generate a unified dataset with a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that, on average, it reduces the time effort of the experts and knowledge engineer by 94.1% while creating unified datasets.
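
    A hypothetical sketch of priority-based resolution of overlapping attributes like that described above (the source names, attributes, and merge rule here are invented, not the GUDM tool's actual schema or API):

```python
# Unify per-source attribute records; higher-priority sources win conflicts.
def unify(records_by_source, priority):
    """records_by_source: {source: {attribute: value}}; priority: high -> low."""
    unified = {}
    for source in reversed(priority):   # apply lowest priority first...
        unified.update(records_by_source.get(source, {}))
    # ...so higher-priority sources overwrite overlapping attributes last.
    return unified

clinical = {"patient_id": "p1", "glucose": 7.8}
sensor = {"patient_id": "p1", "steps": 4200, "glucose": 7.5}
merged = unify({"clinical": clinical, "sensor": sensor},
               priority=["clinical", "sensor"])
print(merged["glucose"], merged["steps"])   # clinical glucose wins: 7.8 4200
```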

  9. Evaluating four mathematical models for nitrous oxide production by autotrophic ammonia-oxidizing bacteria.

    PubMed

    Ni, Bing-Jie; Yuan, Zhiguo; Chandran, Kartik; Vanrolleghem, Peter A; Murthy, Sudhir

    2013-01-01

    There is increasing evidence showing that ammonia-oxidizing bacteria (AOB) are major contributors to N(2)O emissions from wastewater treatment plants (WWTPs). Although the fundamental metabolic pathways for N(2)O production by AOB are now coming to light, the mechanisms responsible for N(2)O production by AOB in WWTP are not fully understood. Mathematical modeling provides a means for testing hypotheses related to mechanisms and triggers for N(2)O emissions in WWTP, and can then also become a tool to support the development of mitigation strategies. This study examined the ability of four mathematical model structures to describe two distinct mechanisms of N(2)O production by AOB. The production mechanisms evaluated are (1) N(2)O as the final product of nitrifier denitrification with NO(2)- as the terminal electron acceptor and (2) N(2)O as a byproduct of incomplete oxidation of hydroxylamine (NH(2)OH) to NO(2)-. The four models were compared based on their ability to predict N(2)O dynamics observed in three mixed culture studies. Short-term batch experimental data were employed to examine model assumptions related to the effects of (1) NH(4)+ concentration variations, (2) dissolved oxygen (DO) variations, (3) NO(2)- accumulations and (4) NH(2)OH as an externally provided substrate. The modeling results demonstrate that all these models can generally describe the NH(4)+, NO(2)-, and NO(3)- data. However, none of these models were able to reproduce all measured N(2)O data. The results suggest that both the denitrification and NH(2)OH pathways may be involved in N(2)O production and could be kinetically linked by a competition for intracellular reducing equivalents. A unified model capturing both mechanisms and their potential interactions needs to be developed with consideration of physiological complexity. Copyright © 2012 Wiley Periodicals, Inc.
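
    Model structures of this kind are typically built from Monod-type saturation terms; a generic hedged sketch (parameter values are illustrative, not fitted to any of the four models):

```python
# Generic dual-limitation Monod rate term, the basic building block of such
# activated-sludge model structures:
#   r = r_max * S / (K_S + S) * O / (K_O + O)
def rate(s, o, r_max=1.0, k_s=0.5, k_o=0.2):
    return r_max * s / (k_s + s) * o / (k_o + o)

# The rate saturates in substrate (e.g. NH2OH) and in dissolved oxygen:
print(round(rate(s=5.0, o=2.0), 3))   # → 0.826
```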

  10. Probabilistic delay differential equation modeling of event-related potentials.

    PubMed

    Ostwald, Dirk; Starke, Ludger

    2016-08-01

    "Dynamic causal models" (DCMs) are a promising approach in the analysis of functional neuroimaging data due to their biophysical interpretability and their consolidation of functional-segregative and functional-integrative propositions. In this theoretical note we are concerned with the DCM framework for electroencephalographically recorded event-related potentials (ERP-DCM). Intuitively, ERP-DCM combines deterministic dynamical neural mass models with dipole-based EEG forward models to describe the event-related scalp potential time-series over the entire electrode space. Since its inception, ERP-DCM has been successfully employed to capture the neural underpinnings of a wide range of neurocognitive phenomena. However, in spite of its empirical popularity, the technical literature on ERP-DCM remains somewhat patchy. A number of previous communications have detailed certain aspects of the approach, but no unified and coherent documentation exists. With this technical note, we aim to close this gap and to increase the technical accessibility of ERP-DCM. Specifically, this note makes the following novel contributions: firstly, we provide a unified and coherent review of the mathematical machinery of the latent and forward models constituting ERP-DCM by formulating the approach as a probabilistic latent delay differential equation model. Secondly, we emphasize the probabilistic nature of the model and its variational Bayesian inversion scheme by explicitly deriving the variational free energy function in terms of both the likelihood expectation and variance parameters. Thirdly, we detail and validate the estimation of the model with a special focus on the explicit form of the variational free energy function and introduce a conventional nonlinear optimization scheme for its maximization. Finally, we identify and discuss a number of computational issues which may be addressed in the future development of the approach. Copyright © 2016 Elsevier Inc. 
All rights reserved.
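
    For reference, the generic variational identity underlying free-energy maximization schemes of this kind (written here in a standard textbook form, not the paper's specific parameterization) is:

```latex
% For data y, parameters \vartheta, and approximate posterior q:
F(q) = \mathbb{E}_{q(\vartheta)}\!\left[\ln p(y,\vartheta)\right]
     - \mathbb{E}_{q(\vartheta)}\!\left[\ln q(\vartheta)\right],
\qquad
\ln p(y) = F(q) + \mathrm{KL}\!\left[q(\vartheta)\,\|\,p(\vartheta \mid y)\right].
```

    Because the KL divergence is nonnegative, maximizing F tightens a lower bound on the log model evidence while driving q toward the true posterior.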

  11. String Theory: Big Problem for Small Size

    ERIC Educational Resources Information Center

    Sahoo, S.

    2009-01-01

    String theory is the most promising candidate theory for a unified description of all the fundamental forces that exist in nature. It provides a mathematical framework that combines quantum theory with Einstein's general theory of relativity. The typical size of a string is of the order of 10^-33 cm, called the Planck length. But due…

  12. Descriptive Geometry in Educational Process of Technical University in Russia Today

    ERIC Educational Resources Information Center

    Voronina, Marianna V.; Tretyakova, Zlata O.; Moroz, Olga N.; Folomkin, Andrey I.

    2016-01-01

    The relevance of the investigated problem is caused by the need for monitoring the impact of the Unified State Examination (USE) on the level of mathematical culture and the level of geometric literacy of applicants and students of modern engineering universities of Russia. The need to determine the position of Descriptive Geometry in the…

  13. The Metaplectic Sampling of Quantum Engineering

    NASA Astrophysics Data System (ADS)

    Schempp, Walter J.

    2010-12-01

    Due to photonic visualization, quantum physics is not restricted to the microworld. Starting off with synthetic aperture radar, the paper provides a unified approach to coherent atom optics, clinical magnetic resonance tomography and the bacterial protein dynamics of structural microbiology. Its mathematical base is harmonic analysis on the three-dimensional Heisenberg Lie group with associated nilpotent Heisenberg algebra Lie(N).

  14. Arithmetic and Algebra in the Schools: Recommendations for a Return to Reality.

    ERIC Educational Resources Information Center

    Ailles, Douglas S.; And Others

    The aim of this report is to suggest aspects of mathematics education that should be incorporated into curricula rather than to outline specific courses of study. General recommendations are made regarding curriculum, instructional methods, and textbooks. The suggestion that graphs and relations be used as a unifying theme is followed by…

  15. Physics through the 1990s: Nuclear physics

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The volume begins with a non-mathematical introduction to nuclear physics. A description of the major advances in the field follows, with chapters on nuclear structure and dynamics, fundamental forces in the nucleus, and nuclei under extreme conditions of temperature, density, and spin. Impacts of nuclear physics on astrophysics and the scientific and societal benefits of nuclear physics are then discussed. Another section deals with scientific frontiers, describing research into the realm of the quark-gluon plasma; the changing description of nuclear matter, specifically the use of the quark model; and the implications of the standard model and grand unified theories of elementary-particle physics. The volume finishes with recommendations and priorities for nuclear physics research facilities, instrumentation, accelerators, theory, education, and data bases. Appended are a list of national accelerator facilities, a list of reviewers, a bibliography, and a glossary.

  16. Theory of the Origin, Evolution, and Nature of Life

    PubMed Central

    Andrulis, Erik D.

    2011-01-01

    Life is an inordinately complex unsolved puzzle. Despite significant theoretical progress, experimental anomalies, paradoxes, and enigmas have revealed paradigmatic limitations. Thus, the advancement of scientific understanding requires new models that resolve fundamental problems. Here, I present a theoretical framework that economically fits evidence accumulated from examinations of life. This theory is based upon a straightforward and non-mathematical core model and proposes unique yet empirically consistent explanations for major phenomena including, but not limited to, quantum gravity, phase transitions of water, why living systems are predominantly CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), homochirality of sugars and amino acids, homeoviscous adaptation, triplet code, and DNA mutations. The theoretical framework unifies the macrocosmic and microcosmic realms, validates predicted laws of nature, and solves the puzzle of the origin and evolution of cellular life in the universe. PMID:25382118

  17. SIMO optical wireless links with nonzero boresight pointing errors over M modeled turbulence channels

    NASA Astrophysics Data System (ADS)

    Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.

    2017-11-01

    Over the last few years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demands for ultra-high-bandwidth, cost-effective and secure wireless data transmissions. However, due to the signal propagation through the atmosphere, the performance of such links depends strongly on the atmospheric conditions such as weather phenomena and the turbulence effect. Additionally, their operation is affected significantly by the pointing errors effect, which is caused by misalignment of the optical beam between the transmitter and the receiver. In order to address this significant performance degradation, several statistical models have been proposed, while particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the M (alaga) distribution, which is an accurate model suitable for weak to strong turbulence conditions and unifies most of the well-known, previously emerged models. Thus, taking into account the atmospheric turbulence conditions along with the pointing errors effect with nonzero boresight and the modulation technique that is used, we derive mathematical expressions for the estimation of the average bit error rate performance of SIMO FSO links. Finally, proper numerical results are given to verify our derived expressions, and Monte Carlo simulations are also provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.
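
    The Monte Carlo validation mentioned in the abstract can be illustrated with a minimal sketch. This is a stand-in only: the paper uses the Malaga (M) fading distribution and closed-form BER expressions, whereas the sketch below assumes lognormal fading, on-off keying, and perfect channel knowledge at the receiver, with all parameter values invented for illustration.

```python
import math
import random

def ook_ber(snr_db: float, sigma_x: float = 0.3, n_bits: int = 100_000,
            seed: int = 1) -> float:
    """Monte Carlo bit-error rate of on-off keying (OOK) through a
    lognormal-fading channel with additive white Gaussian noise.

    Assumptions (not from the paper): lognormal irradiance instead of the
    Malaga distribution, OOK modulation, perfect channel state information.
    """
    rng = random.Random(seed)
    amp = 10 ** (snr_db / 20)  # signal amplitude relative to unit noise power
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        # unit-mean lognormal irradiance fluctuation
        h = math.exp(rng.gauss(-sigma_x ** 2 / 2, sigma_x))
        r = h * amp * bit + rng.gauss(0.0, 1.0)
        decided = 1 if r > h * amp / 2 else 0  # mid-point decision threshold
        errors += decided != bit
    return errors / n_bits

ber_low = ook_ber(6.0)    # lower SNR -> more errors
ber_high = ook_ber(14.0)  # higher SNR -> fewer errors
```

    Raising the SNR should drive the estimated BER down, which is the qualitative behavior such simulations are used to confirm against analytical expressions.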

  18. A general theory of kinetics and thermodynamics of steady-state copolymerization.

    PubMed

    Shu, Yao-Gen; Song, Yong-Shun; Ou-Yang, Zhong-Can; Li, Ming

    2015-06-17

    Kinetics of steady-state copolymerization has been investigated since the 1940s. Irreversible terminal and penultimate models were successfully applied to a number of comonomer systems, but failed for systems where depropagation is significant. Although a general mathematical treatment of the terminal model with depropagation was established in the 1980s, a penultimate model and higher-order terminal models with depropagation have not been systematically studied, since depropagation leads to hierarchically-coupled and unclosed kinetic equations which are hard to solve analytically. In this work, we propose a truncation method to solve the steady-state kinetic equations of any-order terminal models with depropagation in a unified way, by reducing them into closed steady-state equations which give the exact solution of the original kinetic equations. Based on the steady-state equations, we also derive a general thermodynamic equality in which the Shannon entropy of the copolymer sequence is explicitly introduced as part of the free energy dissipation of the whole copolymerization system.
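
    The thermodynamic equality mentioned above involves the Shannon entropy of the copolymer sequence. As a loose illustration only (not the paper's derivation), a zeroth-order estimate of that entropy from monomer composition can be computed as follows; note it ignores correlations along the chain, which the full treatment does not.

```python
import math
from collections import Counter

def sequence_entropy(seq: str) -> float:
    """Zeroth-order Shannon entropy (nats per monomer) of a copolymer
    sequence, estimated from monomer frequencies alone.

    Caveat: this composition-based estimate ignores sequence correlations,
    so a strictly alternating chain scores the same as a random one.
    """
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log(c / n) for c in counts.values())

print(sequence_entropy("AABBABBA"))  # equimolar A/B composition
```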

  19. An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.

    ERIC Educational Resources Information Center

    Jiang, Hai; And Others

    L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…

  20. A Unified Approach to Modeling Multidisciplinary Interactions

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Bhatia, Kumar G.

    2000-01-01

    There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n^2 - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n^2 - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
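
    The interaction-count arithmetic above is simple enough to check directly: pairwise coupling needs one transfer per ordered pair of distinct disciplines, while routing everything through a shared geometry model needs one transfer in and one out per discipline.

```python
def pairwise_interactions(n: int) -> int:
    """Ordered pairs of distinct disciplines: n^2 - n transfers."""
    return n * n - n

def unified_interactions(n: int) -> int:
    """One transfer to and one from the shared CAD model per discipline."""
    return 2 * n

for n in (3, 5, 10):
    print(n, pairwise_interactions(n), unified_interactions(n))
```

    The gap grows quadratically: already at ten disciplines the traditional approach needs 90 interactions versus 20 for the unified one.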

  1. Cartan gravity, matter fields, and the gauge principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westman, Hans F., E-mail: hwestman74@gmail.com; Zlosnik, Tom G., E-mail: t.zlosnik@imperial.ac.uk

    Gravity is commonly thought of as one of the four force fields in nature. However, in standard formulations its mathematical structure is rather different from the Yang–Mills fields of particle physics that govern the electromagnetic, weak, and strong interactions. This paper explores this dissonance with particular focus on how gravity couples to matter from the perspective of the Cartan-geometric formulation of gravity. There the gravitational field is represented by a pair of variables: (1) a ‘contact vector’ V^A which is geometrically visualized as the contact point between the spacetime manifold and a model spacetime being ‘rolled’ on top of it, and (2) a gauge connection A_μ^AB, here taken to be valued in the Lie algebra of SO(2,3) or SO(1,4), which mathematically determines how much the model spacetime is rotated when rolled. By insisting on two principles, the gauge principle and polynomial simplicity, we shall show how one can reformulate matter field actions in a way that is harmonious with Cartan’s geometric construction. This yields a formulation of all matter fields in terms of first order partial differential equations. We show in detail how the standard second order formulation can be recovered. In particular, the Hodge dual, which characterizes the structure of bosonic field equations, pops up automatically. Furthermore, the energy–momentum and spin-density three-forms are naturally combined into a single object here denoted the spin-energy–momentum three-form. Finally, we highlight a peculiarity in the mathematical structure of our first-order formulation of Yang–Mills fields. This suggests a way to unify a U(1) gauge field with gravity into a SO(1,5)-valued gauge field using a natural generalization of Cartan geometry in which the larger symmetry group is spontaneously broken down to SO(1,3)×U(1). The coupling of this unified theory to matter fields and possible extensions to non-Abelian gauge fields are left as open questions. -- Highlights: •Develops Cartan gravity to include matter fields. •Coupling to gravity is done using the standard gauge prescription. •Matter actions are manifestly polynomial in all field variables. •Standard equations recovered on-shell for scalar, spinor and Yang–Mills fields. •Unification of a U(1) field with gravity based on the orthogonal group SO(1,5)

  2. The Development of Cadastral Domain Model Oriented at Unified Real Estate Registration of China Based on Ontology

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhu, X.; Shen, C.; Chen, D.; Guo, W.

    2012-07-01

    With the regulation of unified real estate registration established by the Property Law and the step-by-step advance of coordinated urban and rural development in China, clearly specifying property rights and their relations is the premise and foundation of promoting the integrated management of urban and rural land. This paper aims at developing a cadastral domain model oriented at the unified real estate registration of China from the legal and spatial perspectives, which sets up the foundation for unified real estate registration and facilitates the effective interchange of cadastral information and the administration of land use. The legal cadastral model is provided based on an analysis of the gap between the current model and the demands of unified real estate registration, which implies the restrictions between different rights. Then the new cadastral domain model is constructed based on the legal cadastral domain model and CCDM (van Oosterom et al., 2006), which integrates the real estate rights of urban land and rural land. Finally, the model is validated by a prototype system. The results show that the model is applicable for unified real estate registration in China.

  3. A methodology for design of a linear referencing system for surface transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonderohe, A.; Hepworth, T.

    1997-06-01

    The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.
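
    The adjustment machinery described above rests on least squares over redundant measurements. A toy sketch with invented observations (not from the report): two reference posts at unknown positions x1, x2 along a route, three distance measurements giving one redundancy, solved through the normal equations (A^T A) x = A^T b.

```python
# observation equations:  x1 = 100.02,  x2 - x1 = 200.01,  x2 = 299.97
# (hypothetical measured distances in meters; one redundant observation)
A = [(1.0, 0.0), (-1.0, 1.0), (0.0, 1.0)]  # design matrix rows
b = [100.02, 200.01, 299.97]               # measured values

# form the 2x2 normal equations (A^T A) x = A^T b
n11 = sum(r[0] * r[0] for r in A)
n12 = sum(r[0] * r[1] for r in A)
n22 = sum(r[1] * r[1] for r in A)
t1 = sum(r[0] * v for r, v in zip(A, b))
t2 = sum(r[1] * v for r, v in zip(A, b))

# solve by Cramer's rule (2 unknowns)
det = n11 * n22 - n12 * n12
x1 = (n22 * t1 - n12 * t2) / det
x2 = (n11 * t2 - n12 * t1) / det
print(x1, x2)  # best estimates of the two positions
```

    The residuals between the adjusted positions and the raw measurements are exactly the error estimates the report proposes delivering alongside query results.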

  4. Modeling and simulation of electronic structure, material interface and random doping in nano electronic devices

    PubMed Central

    Chen, Duan; Wei, Guo-Wei

    2010-01-01

    The miniaturization of nano-scale electronic devices, such as metal oxide semiconductor field effect transistors (MOSFETs), has given rise to a pressing demand for new theoretical understanding and practical tactics for dealing with quantum mechanical effects in integrated circuits. Modeling and simulation of this class of problems have emerged as an important topic in applied and computational mathematics. This work presents mathematical models and computational algorithms for the simulation of nano-scale MOSFETs. We introduce a unified two-scale energy functional to describe the electrons and the continuum electrostatic potential of the nano-electronic device. This framework enables us to put microscopic and macroscopic descriptions on an equal footing at the nano scale. By optimization of the energy functional, we derive consistently-coupled Poisson-Kohn-Sham equations. Additionally, layered structures are crucial to the electrostatic and transport properties of nano transistors. A material interface model is proposed for more accurate description of the electrostatics governed by the Poisson equation. Finally, a new individual dopant model that utilizes the Dirac delta function is proposed to understand the random doping effect in nano electronic devices. Two mathematical algorithms, the matched interface and boundary (MIB) method and the Dirichlet-to-Neumann mapping (DNM) technique, are introduced to improve the computational efficiency of nano-device simulations. Electronic structures are computed via subband decomposition and the transport properties, such as the I-V curves and electron density, are evaluated via the non-equilibrium Green's functions (NEGF) formalism. Two distinct device configurations, a double-gate MOSFET and a four-gate MOSFET, are considered in our three-dimensional numerical simulations. For these devices, the current fluctuation and voltage threshold lowering effect induced by the discrete dopant model are explored. 
Numerical convergence and model well-posedness are also investigated in the present work. PMID:20396650

  5. Multiscale Modeling of Antibody-Drug Conjugates: Connecting Tissue and Cellular Distribution to Whole Animal Pharmacokinetics and Potential Implications for Efficacy.

    PubMed

    Cilliers, Cornelius; Guo, Hans; Liao, Jianshan; Christodolu, Nikolas; Thurber, Greg M

    2016-09-01

    Antibody-drug conjugates exhibit complex pharmacokinetics due to their combination of macromolecular and small molecule properties. These issues range from systemic concerns, such as deconjugation of the small molecule drug during the long antibody circulation time or rapid clearance from nonspecific interactions, to local tumor tissue heterogeneity, cell bystander effects, and endosomal escape. Mathematical models can be used to study the impact of these processes on overall distribution in an efficient manner, and several types of models have been used to analyze varying aspects of antibody distribution including physiologically based pharmacokinetic (PBPK) models and tissue-level simulations. However, these processes are quantitative in nature and cannot be handled qualitatively in isolation. For example, free antibody from deconjugation of the small molecule will impact the distribution of conjugated antibodies within the tumor. To incorporate these effects into a unified framework, we have coupled the systemic and organ-level distribution of a PBPK model with the tissue-level detail of a distributed parameter tumor model. We used this mathematical model to analyze new experimental results on the distribution of the clinical antibody-drug conjugate Kadcyla in HER2-positive mouse xenografts. This model is able to capture the impact of the drug-antibody ratio (DAR) on tumor penetration, the net result of drug deconjugation, and the effect of using unconjugated antibody to drive ADC penetration deeper into the tumor tissue. This modeling approach will provide quantitative and mechanistic support to experimental studies trying to parse the impact of multiple mechanisms of action for these complex drugs.

  6. Multiscale Modeling of Antibody Drug Conjugates: Connecting tissue and cellular distribution to whole animal pharmacokinetics and potential implications for efficacy

    PubMed Central

    Cilliers, Cornelius; Guo, Hans; Liao, Jianshan; Christodolu, Nikolas; Thurber, Greg M.

    2016-01-01

    Antibody drug conjugates exhibit complex pharmacokinetics due to their combination of macromolecular and small molecule properties. These issues range from systemic concerns, such as deconjugation of the small molecule drug during the long antibody circulation time or rapid clearance from non-specific interactions, to local tumor tissue heterogeneity, cell bystander effects, and endosomal escape. Mathematical models can be used to study the impact of these processes on overall distribution in an efficient manner, and several types of models have been used to analyze varying aspects of antibody distribution including physiologically based pharmacokinetic (PBPK) models and tissue-level simulations. However, these processes are quantitative in nature and cannot be handled qualitatively in isolation. For example, free antibody from deconjugation of the small molecule will impact the distribution of conjugated antibodies within the tumor. To incorporate these effects into a unified framework, we have coupled the systemic and organ-level distribution of a PBPK model with the tissue-level detail of a distributed parameter tumor model. We used this mathematical model to analyze new experimental results on the distribution of the clinical antibody drug conjugate Kadcyla in HER2 positive mouse xenografts. This model is able to capture the impact of the drug antibody ratio (DAR) on tumor penetration, the net result of drug deconjugation, and the effect of using unconjugated antibody to drive ADC penetration deeper into the tumor tissue. This modeling approach will provide quantitative and mechanistic support to experimental studies trying to parse the impact of multiple mechanisms of action for these complex drugs. PMID:27287046

  7. Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely

    2015-12-01

    We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.

  8. A problem of optimal control and observation for distributed homogeneous multi-agent system

    NASA Astrophysics Data System (ADS)

    Kruglikov, Sergey V.

    2017-12-01

    The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile multi-robots. The concept of a unified information space for the controlling system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and of goals and scenarios, as virtual agents, form the basis of the algorithmic and software background for a computer decision-support system. The controlling scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem predicting intellectual behavior in a dynamic, hostile environment. The basic benchmark problem is compound cargo transportation by a group of participants under a distributed control scheme in terrain with multiple obstacles.

  9. Using Technology to Unify Geometric Theorems about the Power of a Point

    ERIC Educational Resources Information Center

    Contreras, Jose N.

    2011-01-01

    In this article, I describe a classroom investigation in which a group of prospective secondary mathematics teachers discovered theorems related to the power of a point using "The Geometer's Sketchpad" (GSP). The power of a point is defined as follows: Let "P" be a fixed point coplanar with a circle. If line "PA" is a secant line that intersects…

  10. The Nation's Report Card Mathematics 2013 Trial Urban District Snapshot Report. San Diego Unified School District. Grade 8, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2013

    2013-01-01

    The National Assessment of Educational Progress (NAEP), in partnership with the National Assessment Governing Board and the Council of the Great City Schools (CGCS), created the Trial Urban District Assessment (TUDA) in 2002 to support the improvement of student achievement in the nation's large urban districts. NAEP TUDA results in mathematics…

  11. The Nation's Report Card Mathematics 2011 Trial Urban District Snapshot Report. San Diego Unified School District. Grade 8, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    This one-page report presents overall results, achievement level percentages and average score results, scores at selected percentiles, average scores for district and large cities, results for student groups (school race, gender, and eligibility for National School Lunch Program) in 2011, and score gaps for student groups. In 2011, the average…

  12. The Nation's Report Card Mathematics 2013 Trial Urban District Snapshot Report. San Diego Unified School District. Grade 4, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2013

    2013-01-01

    The National Assessment of Educational Progress (NAEP), in partnership with the National Assessment Governing Board and the Council of the Great City Schools (CGCS), created the Trial Urban District Assessment (TUDA) in 2002 to support the improvement of student achievement in the nation's large urban districts. NAEP TUDA results in mathematics…

  13. The Nation's Report Card Mathematics 2011 Trial Urban District Snapshot Report. San Diego Unified School District. Grade 4, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    This one-page report presents overall results, achievement level percentages and average score results, scores at selected percentiles, average scores for district and large cities, results for student groups (school race, gender, and eligibility for National School Lunch Program) in 2011, and score gaps for student groups. In 2011, the average…

  14. Software for Training in Pre-College Mathematics

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Moebes, Travis A.; VanAlstine, Scot

    2003-01-01

    The Intelligent Math Tutor (IMT) is a computer program for training students in pre-college and college-level mathematics courses, including fundamentals, intermediate algebra, college algebra, and trigonometry. The IMT can be executed on a server computer for access by students via the Internet; alternatively, it can be executed on students' computers equipped with compact-disk/read-only-memory (CD-ROM) drives. The IMT provides interactive exercises, assessment, tracking, and an on-line graphing calculator with algebraic-manipulation capabilities. The IMT provides an innovative combination of content, delivery mechanism, and artificial intelligence. Careful organization and presentation of the content make it possible to provide intelligent feedback to the student based on performance on exercises and tests. The tracking and feedback mechanisms are implemented within the capabilities of a commercial off-the-shelf development software tool and are written in the Unified Modeling Language to maximize reuse and minimize development cost. The graphing calculator is a standard feature of most college and pre-college algebra and trigonometry courses. Placing this functionality in a Java applet decreases the cost, provides greater capabilities, and provides an opportunity to integrate the calculator with the lessons.

  15. University Students’ Conceptual Knowledge of Randomness and Probability in the Contexts of Evolution and Mathematics

    PubMed Central

    Fiedler, Daniela; Tröbst, Steffen; Harms, Ute

    2017-01-01

    Students of all ages face severe conceptual difficulties regarding key aspects of evolution—the central, unifying, and overarching theme in biology. Aspects strongly related to abstract “threshold” concepts like randomness and probability appear to pose particular difficulties. A further problem is the lack of an appropriate instrument for assessing students’ conceptual knowledge of randomness and probability in the context of evolution. To address this problem, we have developed two instruments, Randomness and Probability Test in the Context of Evolution (RaProEvo) and Randomness and Probability Test in the Context of Mathematics (RaProMath), that include both multiple-choice and free-response items. The instruments were administered to 140 university students in Germany, then the Rasch partial-credit model was applied to assess them. The results indicate that the instruments generate reliable and valid inferences about students’ conceptual knowledge of randomness and probability in the two contexts (which are separable competencies). Furthermore, RaProEvo detected significant differences in knowledge of randomness and probability, as well as evolutionary theory, between biology majors and preservice biology teachers. PMID:28572180
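
    The study above fits responses with the Rasch partial-credit model. As background, the dichotomous Rasch item response function (which the partial-credit model generalizes to ordered response categories) can be sketched as:

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_prob(0.0, 0.0))  # ability equal to difficulty -> probability 0.5
```

    The model's key property, used when validating instruments like RaProEvo and RaProMath, is that only the difference theta - b matters, so persons and items sit on one common scale.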

  16. Unification theory of optimal life histories and linear demographic models in internal stochasticity.

    PubMed

    Oizumi, Ryo

    2014-01-01

    Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. It has been shown in a recent theoretical study using path-integral formulation in structured linear demographic models that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of "Stochastic Control Theory" in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a particular case. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models.
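
    The population growth rate discussed above is, in a linear demographic model, the dominant eigenvalue of the projection (transition) matrix. A minimal sketch computing it by power iteration for a toy 2x2 Leslie matrix (fecundities 1 and 2, juvenile survival 0.5; values invented, not from the paper):

```python
def growth_rate(m, iters: int = 200) -> float:
    """Dominant eigenvalue of a nonnegative projection matrix m (list of
    rows) by power iteration with sup-norm normalization."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(abs(x) for x in w)   # current eigenvalue estimate
        v = [x / lam for x in w]       # renormalize the age-structure vector
    return lam

leslie = [[1.0, 2.0],   # fecundities of the two age classes
          [0.5, 0.0]]   # survival from class 1 to class 2
print(growth_rate(leslie))
```

    For this matrix the characteristic equation is lambda^2 - lambda - 1 = 0, so the long-run growth rate is the golden ratio, about 1.618.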

  17. Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity

    PubMed Central

    Oizumi, Ryo

    2014-01-01

    Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. It has been shown in a recent theoretical study using path-integral formulation in structured linear demographic models that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of “Stochastic Control Theory” in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a particular case. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models. PMID:24945258

  18. Ray Tracing and Modal Methods for Modeling Radio Propagation in Tunnels With Rough Walls

    PubMed Central

    Zhou, Chenming

    2017-01-01

    At the ultrahigh frequencies common to portable radios, tunnels such as mine entries are often modeled by hollow dielectric waveguides. The roughness condition of the tunnel walls has an influence on radio propagation and therefore should be taken into account when an accurate power prediction is needed. This paper investigates how wall roughness affects radio propagation in tunnels, and presents a unified ray tracing and modal method for modeling radio propagation in tunnels with rough walls. First, general analytical formulas for modeling the influence of the wall roughness are derived, based on the modal method and the ray tracing method, respectively. Second, the equivalence of the ray tracing and modal methods in the presence of wall roughness is mathematically proved, by showing that the ray tracing-based analytical formula can converge to the modal-based formula through the Poisson summation formula. The derivation and findings are verified by simulation results based on ray tracing and modal methods. PMID:28935995
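    As an illustration of how wall roughness is commonly folded into ray-tracing reflection coefficients, the sketch below applies the standard Rayleigh roughness attenuation factor to the coherent reflected field; this is a textbook approximation, not the paper's derived formula.

```python
import math

def rayleigh_roughness_factor(sigma_h, wavelength, incidence_angle_rad):
    """Coherent-reflection attenuation for a rough wall: the standard
    Rayleigh factor exp(-g/2), g = (4*pi*sigma_h*cos(theta)/lambda)**2,
    where sigma_h is the RMS surface height deviation and theta is
    measured from the surface normal. (A common way to account for wall
    roughness in ray tracing; not the paper's exact formulation.)"""
    g = (4 * math.pi * sigma_h * math.cos(incidence_angle_rad) / wavelength) ** 2
    return math.exp(-g / 2)
```

    At grazing incidence (theta near 90 degrees, typical for rays that survive many tunnel-wall bounces) the factor approaches 1, consistent with roughness mattering less for low-order modes.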

  19. Unified field theory from the classical wave equation: Preliminary application to atomic and nuclear structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Múnera, Héctor A., E-mail: hmunera@hotmail.com; Retired professor, Department of Physics, Universidad Nacional de Colombia, Bogotá, Colombia, South America

    2016-07-07

    It is postulated that there exists a fundamental energy-like fluid, which occupies the flat three-dimensional Euclidean space that contains our universe, and obeys the two basic laws of classical physics: conservation of linear momentum, and conservation of total energy; the fluid is described by the classical wave equation (CWE), which was Schrödinger’s first candidate to develop his quantum theory. Novel solutions for the CWE discovered twenty years ago are nonharmonic, inherently quantized, and universal in the sense of scale invariance, thus leading to quantization at all scales of the universe, from galactic clusters to the sub-quark world, and yielding a unified Lorentz-invariant quantum theory ab initio. Quingal solutions are isomorphic under both neo-Galilean and Lorentz transformations, and exhibit another remarkable property: intrinsic instability for large values of ℓ (a quantum number), thus limiting the size of each system at a given scale. Instability and scale invariance together lead to the nested structures observed in our solar system; instability may explain the small number of rows in the chemical periodic table, and the nuclear instability of nuclides beyond lead and bismuth. Quingal functions lend a mathematical basis to Boscovich’s unified force (which is compatible with many pieces of evidence collected over the past century), and also yield a simple geometrical solution for the classical three-body problem, which is a useful model for electronic orbits in simple diatomic molecules. A testable prediction for the helicoidal-type force is suggested.

  20. Formal Darwinism, the individual-as-maximizing-agent analogy and bet-hedging

    PubMed Central

    Grafen, A.

    1999-01-01

    The central argument of The origin of species was that mechanical processes (inheritance of features and the differential reproduction they cause) can give rise to the appearance of design. The 'mechanical processes' are now mathematically represented by the dynamic systems of population genetics, and the appearance of design by optimization and game theory in which the individual plays the part of the maximizing agent. Establishing a precise individual-as-maximizing-agent (IMA) analogy for a population-genetics system justifies optimization approaches, and so provides a modern formal representation of the core of Darwinism. It is a hitherto unnoticed implication of recent population-genetics models that, contrary to a decades-long consensus, an IMA analogy can be found in models with stochastic environments (subject to a convexity assumption), in which individuals maximize expected reproductive value. The key is that the total reproductive value of a species must be considered as constant, so therefore reproductive value should always be calculated in relative terms. This result removes a major obstacle from the theoretical challenge to find a unifying framework which establishes the IMA analogy for all of Darwinian biology, including as special cases inclusive fitness, evolutionarily stable strategies, evolutionary life-history theory, age-structured models and sex ratio theory. This would provide a formal, mathematical justification of fruitful and widespread but 'intentional' terms in evolutionary biology, such as 'selfish', 'altruism' and 'conflict'.

  1. Tweedie convergence: a mathematical basis for Taylor's power law, 1/f noise, and multifractality.

    PubMed

    Kendal, Wayne S; Jørgensen, Bent

    2011-12-01

    Plants and animals of a given species tend to cluster within their habitats in accordance with a power function between their mean density and the variance. This relationship, Taylor's power law, has been variously explained by ecologists in terms of animal behavior, interspecies interactions, demographic effects, etc., all without consensus. Taylor's law also manifests within a wide range of other biological and physical processes, sometimes being referred to as fluctuation scaling and attributed to effects of the second law of thermodynamics. 1/f noise refers to power spectra that have an approximately inverse dependence on frequency. Like Taylor's law these spectra manifest from a wide range of biological and physical processes, without general agreement as to cause. One contemporary paradigm for 1/f noise has been based on the physics of self-organized criticality. We show here that Taylor's law (when derived from sequential data using the method of expanding bins) implies 1/f noise, and that both phenomena can be explained by a central limit-like effect that establishes the class of Tweedie exponential dispersion models as foci for this convergence. These Tweedie models are probabilistic models characterized by closure under additive and reproductive convolution as well as under scale transformation, and consequently manifest a variance to mean power function. We provide examples of Taylor's law, 1/f noise, and multifractality within the eigenvalue deviations of the Gaussian unitary and orthogonal ensembles, and show that these deviations conform to the Tweedie compound Poisson distribution. The Tweedie convergence theorem provides a unified mathematical explanation for the origin of Taylor's law and 1/f noise applicable to a wide range of biological, physical, and mathematical processes, as well as to multifractality.
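    Taylor's power law, variance = a * mean^b, is typically estimated by a log-log regression of group variances on group means. A minimal sketch, using synthetic data constructed to follow an exact power law rather than real ecological counts:

```python
import math

def taylor_power_law_fit(means, variances):
    """Least-squares fit of log(var) = log(a) + b*log(mean)."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic group statistics obeying variance = 2 * mean^1.5 exactly.
means = [1.0, 2.0, 4.0, 8.0]
variances = [2.0 * m ** 1.5 for m in means]
a, b = taylor_power_law_fit(means, variances)   # recovers a = 2, b = 1.5
```

    For Tweedie exponential dispersion models the exponent b is the Tweedie power parameter, which is what makes these models natural foci for the convergence described above.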

  2. Unraveling dynamics of human physical activity patterns in chronic pain conditions

    NASA Astrophysics Data System (ADS)

    Paraschiv-Ionescu, Anisoara; Buchser, Eric; Aminian, Kamiar

    2013-06-01

    Chronic pain is a complex disabling experience that negatively affects cognitive, affective and physical functions as well as behavior. Although the interaction between chronic pain and physical functioning is a well-accepted paradigm in clinical research, understanding how pain affects individuals' daily life behavior remains a challenging task. Here we develop a methodological framework that allows us to objectively document disruptive pain-related interference with real-life physical activity. The results reveal that meaningful information is contained in the temporal dynamics of activity patterns and that an analytical model based on the theory of bivariate point processes can be used to describe physical activity behavior. The model parameters capture the dynamic interdependence between periods and events and define a `signature' of the activity pattern. The study is likely to contribute to the clinical understanding of complex pain/disease-related behaviors and to establish a unified mathematical framework for quantifying the complex dynamics of various human activities.

  3. War-gaming application for future space systems acquisition: MATLAB implementation of war-gaming acquisition models and simulation results

    NASA Astrophysics Data System (ADS)

    Vienhage, Paul; Barcomb, Heather; Marshall, Karel; Black, William A.; Coons, Amanda; Tran, Hien T.; Nguyen, Tien M.; Guillen, Andy T.; Yoh, James; Kizer, Justin; Rogers, Blake A.

    2017-05-01

    The paper describes the MATLAB (MathWorks) programs that were developed during the REU workshop to implement The Aerospace Corporation's Unified Game-based Acquisition Framework and Advanced Game-based Mathematical Framework (UGAF-AGMF) and its associated War-Gaming Engine (WGE) models. Each game can be played from the perspective of the Department of Defense Acquisition Authority (DAA) or of an individual contractor (KTR). The programs also implement Aerospace's optimum "Program and Technical Baseline (PTB) and associated acquisition" strategy, which combines low Total Ownership Cost (TOC) with innovative designs while still meeting warfighter needs. The paper also describes the Bayesian Acquisition War-Gaming approach using Monte Carlo simulations, a numerical technique for accounting for uncertainty in decision making, which simulates the PTB development and acquisition processes; the paper details the implementation procedure and the interactions between the games.
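    The Monte Carlo step described above can be sketched as follows; the cost distributions and budget threshold are illustrative assumptions, not values from the WGE models.

```python
import random

def simulate_toc(n_samples, seed=42):
    """Hypothetical Monte Carlo sketch of Total Ownership Cost (TOC) under
    uncertainty: development and sustainment costs are drawn from assumed
    uniform ranges (illustrative numbers, not the actual acquisition models)."""
    rng = random.Random(seed)          # seeded for reproducibility
    totals = []
    for _ in range(n_samples):
        development = rng.uniform(1.0, 3.0)   # $B, assumed range
        sustainment = rng.uniform(2.0, 4.0)   # $B, assumed range
        totals.append(development + sustainment)
    mean = sum(totals) / n_samples
    p_over_budget = sum(t > 6.0 for t in totals) / n_samples  # assumed $6B cap
    return mean, p_over_budget

mean_cost, p_over = simulate_toc(20000)
```

    Repeating the simulation under different acquisition strategies and comparing the resulting cost distributions is the essence of the decision-under-uncertainty analysis the paper describes.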

  4. Secure Computer System: Unified Exposition and Multics Interpretation

    DTIC Science & Technology

    1976-03-01

    prearranged code to semaphore critical information to an undercleared subject/process. Neither of these topics is directly addressed by the mathematical...FURTHER CONSIDERATIONS. RULES OF OPERATION FOR A SECURE MULTICS Kernel primitives for a secure Multics will be derived from a higher level user...the Multics architecture as little as possible; this will account to a large extent for radical differences in form between actual kernel primitives

  5. A Characterization of a Unified Notion of Mathematical Function: The Case of High School Function and Linear Transformation

    ERIC Educational Resources Information Center

    Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris

    2017-01-01

    As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…

  6. Frontiers in Human Information Processing Conference

    DTIC Science & Technology

    2008-02-25

    Frontiers in Human Information Processing - Vision, Attention , Memory , and Applications: A Tribute to George Sperling, a Festschrift. We are grateful...with focus on the formal, computational, and mathematical approaches that unify the areas of vision, attention , and memory . The conference also...Information Processing Conference Final Report AFOSR GRANT # FA9550-07-1-0346 The AFOSR Grant # FA9550-07-1-0346 provided partial support for the Conference

  7. Some Issues about the Introduction of First Concepts in Linear Algebra during Tutorial Sessions at the Beginning of University

    ERIC Educational Resources Information Center

    Grenier-Boley, Nicolas

    2014-01-01

    Certain mathematical concepts were not introduced to solve a specific open problem but rather to solve different problems with the same tools in an economic formal way or to unify several approaches: such concepts, as some of those of linear algebra, are presumably difficult to introduce to students as they are potentially interwoven with many…

  8. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare

    PubMed Central

    Ali, Rahman; Siddiqi, Muhammad Hameed; Idris, Muhammad; Ali, Taqdir; Hussain, Shujaat; Huh, Eui-Nam; Kang, Byeong Ho; Lee, Sungyoung

    2015-01-01

    A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of these data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset via a “data modeler” tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and of overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough-set-theory-based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that, on average, the tool reduces the time effort of the experts and knowledge engineers by 94.1% when creating unified datasets. PMID:26147731
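    The user-centric, priority-based resolution of overlapping attributes might be sketched as follows; the attribute names and the `unify_records` helper are illustrative, not the GUDM tool's actual API.

```python
def unify_records(sources, priority):
    """Merge per-patient records from multiple sources into one unified
    record; for overlapping attributes, the source listed earlier in
    `priority` wins. (Illustrative sketch, hypothetical field names.)"""
    unified = {}
    for name in reversed(priority):              # low priority written first
        for key, value in sources.get(name, {}).items():
            unified[key] = value                 # higher priority overwrites
    return unified

sources = {
    "clinical": {"glucose": 110, "age": 54},     # clinical-trial record
    "sensor":   {"glucose": 118, "steps": 4200}, # wearable-sensor record
}
# Clinical data outranks sensor data, so its glucose value is kept.
unified = unify_records(sources, priority=["clinical", "sensor"])
```

    The point of the priority list is that conflict resolution is declared once by the user rather than decided ad hoc per attribute.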

  9. Deformation Theory and Physics Model Building

    NASA Astrophysics Data System (ADS)

    Sternheimer, Daniel

    2006-08-01

    The mathematical theory of deformations has proved to be a powerful tool in modeling physical reality. We start with a short historical and philosophical review of the context and concentrate this rapid presentation on a few interrelated directions where deformation theory is essential in bringing a new framework - which has then to be developed using adapted tools, some of which come from the deformation aspect. Minkowskian space-time can be deformed into Anti de Sitter, where massless particles become composite (also dynamically): this opens new perspectives in particle physics, at least at the electroweak level, including prediction of new mesons. Nonlinear group representations and covariant field equations, coming from interactions, can be viewed as some deformation of their linear (free) part: recognizing this fact can provide a good framework for treating problems in this area, in particular global solutions. Last but not least, (algebras associated with) classical mechanics (and field theory) on a Poisson phase space can be deformed to (algebras associated with) quantum mechanics (and quantum field theory). That is now a frontier domain in mathematics and theoretical physics called deformation quantization, with multiple ramifications, avatars and connections in both mathematics and physics. These include representation theory, quantum groups (when considering Hopf algebras instead of associative or Lie algebras), noncommutative geometry and manifolds, algebraic geometry, number theory, and of course what is regrouped under the name of M-theory. We shall here look at these from the unifying point of view of deformation theory and refer to a limited number of papers as a starting point for further study.

  10. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.
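    A minimal sketch of the unified-model idea, a single inelastic strain-rate term driving stress relaxation at fixed total strain, using a generic power-law rate rather than the actual Bodner-Partom or Walker formulations (all coefficients hypothetical):

```python
def relax(stress0, E, A, s0, n, dt, steps):
    """Stress relaxation at fixed total strain: with eps_dot_total = 0,
    d(sigma)/dt = -E * eps_dot_inelastic, and a generic unified inelastic
    rate eps_dot_inelastic = A * (sigma / s0)**n. Explicit Euler in time.
    (Illustrative model form and coefficients, not a calibrated law.)"""
    sigma = stress0
    history = [sigma]
    for _ in range(steps):
        eps_in_rate = A * (sigma / s0) ** n   # single inelastic rate term
        sigma -= E * eps_in_rate * dt         # elastic unloading balances it
        history.append(sigma)
    return history

# Relaxation from 100 MPa with E = 1000 MPa (hypothetical values).
history = relax(100.0, 1000.0, 1e-4, 100.0, 3, 0.01, 1000)
```

    The same single rate term would reproduce creep (fixed stress) and rate-dependent plasticity (imposed strain rate), which is exactly the unification the abstract describes.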

  11. A unified approach for determining the ultimate strength of RC members subjected to combined axial force, bending, shear and torsion

    PubMed Central

    Huang, Zhen

    2017-01-01

    This paper uses experimental investigation and theoretical derivation to study the unified failure mechanism and ultimate capacity model of reinforced concrete (RC) members under combined axial, bending, shear and torsion loading. Fifteen RC members are tested under different combinations of compressive axial force, bending, shear and torsion using experimental equipment designed by the authors. The failure mechanism and ultimate strength data for the four groups of tested RC members under different combined loading conditions are investigated and discussed in detail. The experimental research seeks to determine how the ultimate strength of RC members changes with changing combined loads. According to the experimental research, a unified theoretical model is established by determining the shape of the warped failure surface, assuming an appropriate stress distribution on the failure surface, and considering the equilibrium conditions. This unified failure model can be reasonably and systematically changed into well-known failure theories of concrete members under single or combined loading. The unified calculation model could be easily used in design applications with some assumptions and simplifications. Finally, the accuracy of this theoretical unified model is verified by comparisons with experimental results. PMID:28414777

  12. Revisiting competition in a classic model system using formal links between theory and data.

    PubMed

    Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J

    2012-09-01

    Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.

  13. A unified model of Hymenopteran preadaptations that trigger the evolutionary transition to eusociality

    PubMed Central

    Quiñones, Andrés E.; Pen, Ido

    2017-01-01

    Explaining the origin of eusociality, with strict division of labour between workers and reproductives, remains one of evolutionary biology’s greatest challenges. Specific combinations of genetic, behavioural and demographic traits in Hymenoptera are thought to explain their relatively high frequency of eusociality, but quantitative models integrating such preadaptations are lacking. Here we use mathematical models to show that the joint evolution of helping behaviour and maternal sex ratio adjustment can synergistically trigger both a behavioural change from solitary to eusocial breeding, and a demographic change from a life cycle with two reproductive broods to a life cycle in which an unmated cohort of female workers precedes a final generation of dispersing reproductives. Specific suites of preadaptations are particularly favourable to the evolution of eusociality: lifetime monogamy, bivoltinism with male generation overlap, hibernation of mated females and haplodiploidy with maternal sex ratio adjustment. The joint effects of these preadaptations may explain the abundance of eusociality in the Hymenoptera and its virtual absence in other haplodiploid lineages. PMID:28643786

  14. A unified approach to fluid-flow, geomechanical, and seismic modelling

    NASA Astrophysics Data System (ADS)

    Yarushina, Viktoriya; Minakov, Alexander

    2016-04-01

    Perturbations of pore pressure can generate seismicity. This is supported by observations from human activities that involve fluid injection into rocks at high pressure (hydraulic fracturing, CO2 storage, geothermal energy production) and by natural examples such as volcanic earthquakes, although the seismic signals that emerge during geotechnical operations are small in both amplitude and duration compared with their natural counterparts. A possible explanation for the earthquake source mechanism is based on a number of in situ stress measurements suggesting that crustal rocks are close to their plastic yield limit. Hence, a rapid increase of the pore pressure decreases the effective normal stress and can thus trigger seismic shear deformation. At the same time, little attention has been paid to the fact that the perturbation of fluid pressure itself represents an acoustic source. Moreover, non-double-couple source mechanisms are frequently reported from the analysis of microseismicity. A consistent formulation of the source mechanism describing microseismic events should therefore include both a shear and an isotropic component, and improved understanding of the interaction between fluid flow and seismic deformation is needed. With this study we aim to integrate real-time microseismic monitoring with geomechanical modelling such that there is a feedback loop between monitored deformation and stress-field modelling. We propose fully integrated seismic, geomechanical and reservoir modelling. Our mathematical formulation is based on a fundamental set of force-balance, mass-balance, and constitutive poro-elastoplastic equations for two-phase media consisting of a deformable solid rock frame and a viscous fluid. We consider a simplified 1D modelling setup for a consistent acoustic source and wave propagation in poro-elastoplastic media. In this formulation the seismic wave is generated by local changes of the stress field and pore pressure induced by, e.g., fault generation or strain localization. This approach gives a unified framework to characterize microseismicity of both class-I (pressure-induced) and class-II (stress-triggered) events. We consider two modelling setups. In the first, the event is located within the reservoir and associated with a pressure/stress drop due to fracture initiation. In the second, we assume that a seismic wave from a distant source hits the reservoir. The unified formulation of poro-elastoplastic deformation allows us to link the macroscopic stresses to local seismic instability.
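    The triggering mechanism described above, pore pressure reducing effective normal stress toward Coulomb failure, can be sketched as follows (generic textbook coefficients, not the paper's poro-elastoplastic formulation):

```python
def coulomb_failure(shear_stress, normal_stress, pore_pressure,
                    cohesion=0.0, friction=0.6, biot_alpha=1.0):
    """Terzaghi/Coulomb sketch: effective normal stress is the total normal
    stress minus (Biot coefficient * pore pressure); shear failure occurs
    when the shear stress reaches the frictional strength. Coefficient
    values are generic textbook defaults, not calibrated parameters."""
    effective_normal = normal_stress - biot_alpha * pore_pressure
    strength = cohesion + friction * effective_normal
    return shear_stress >= strength

# Same tectonic shear load (50 MPa on 100 MPa normal stress): stable when
# dry, but a 20 MPa pore-pressure rise pushes the fault past failure.
stable = coulomb_failure(50.0, 100.0, 0.0)     # False: 50 < 0.6 * 100
triggered = coulomb_failure(50.0, 100.0, 20.0)  # True: 50 >= 0.6 * 80
```

    This captures only the class-II (stress-triggered) shear mechanism; the isotropic, pressure-induced source component of the paper's formulation has no analogue in this simple criterion.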

  15. Review and Implementation Status of Prior Defense Business Board Recommendations

    DTIC Science & Technology

    2007-04-01

    Resource Management • Support unified models for shared services , and be prepared to adjust forward approaches for a Unified Medical Command...models for shared services – including by and between Veterans Affairs and Defense, electronic information exchange, disease treatment and prevention...www.dod.mil/dbb/pdf/DBB- Report-on-the-Military.pdf. • Continue to support unified models for shared services – including by and between Veterans Affairs

  16. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    PubMed

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about the TV programs they are viewing. For social TV service, two technical aspects are envisioned: grouping similar TV users to create social TV communities, and recommending TV programs based on group and personal interests to personalize TV. In this paper, we propose a unified topic model for grouping similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models: one is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and of the associated description words for watched TV programs at the same time, in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes the item ramp-up problem, so that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, our experiments used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average precision of 81.4% for 50 topics in TV program recommendation, and that its performance is on average 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average prediction precision was 79.6%. We also show the superiority of our proposed model, in terms of both topic modeling performance and recommendation performance, compared to two related topic models, the polylingual topic model and the bilingual topic model.

  17. Unified approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Chang, Tyne-Hsien

    1995-07-01

    A unified approach for solving incompressible flows has been investigated in this study. The numerical CTVD (Centered Total Variation Diminishing) scheme used here was developed by Sanders and Li for compressible flows, especially at high speeds. The CTVD scheme possesses good mathematical properties for damping out spurious oscillations while providing high-order accuracy for high-speed flows, which leads us to believe that it can apply equally well to incompressible flows. Because of the mathematical differences between the governing equations for incompressible and compressible flows, the scheme cannot be applied directly to incompressible flows. However, if one modifies the continuity equation for incompressible flows by introducing pseudo-compressibility, the governing equations for incompressible flows take on the same mathematical character as those for compressible flows, and applying the algorithm to incompressible flows becomes feasible. In this study, the governing equations for incompressible flows comprise the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure term containing the artificial compressibility. The modified continuity equation together with the unsteady momentum equations forms a hyperbolic-parabolic system of time-dependent equations, so the CTVD scheme can be implemented. In addition, the physical and numerical boundary conditions are properly implemented via characteristic boundary conditions. Accordingly, a CFD code has been developed for this research and is currently under testing. Flow past a circular cylinder was chosen for numerical experiments to determine the accuracy and efficiency of the code. The code has shown some promising results.
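    The pseudo-compressibility modification can be sketched in 1D: the continuity equation acquires a pressure pseudo-time derivative, dp/dt = -beta * du/dx, coupling pressure to the velocity divergence. The snippet below is a simplified central-difference sketch of one pseudo-time step, not the CTVD scheme itself.

```python
def pseudo_compressibility_step(p, u, beta, dt, dx):
    """One artificial-compressibility pseudo-time step in 1D (sketch):
    dp/dt = -beta * du/dx  (modified continuity equation)
    du/dt = -dp/dx         (inviscid momentum core term only)
    Central differences inside the domain, one-sided at the boundaries."""
    n = len(p)
    dudx = [(u[min(i + 1, n - 1)] - u[max(i - 1, 0)]) / (2 * dx) for i in range(n)]
    dpdx = [(p[min(i + 1, n - 1)] - p[max(i - 1, 0)]) / (2 * dx) for i in range(n)]
    p_new = [p[i] - beta * dt * dudx[i] for i in range(n)]
    u_new = [u[i] - dt * dpdx[i] for i in range(n)]
    return p_new, u_new

# A divergence-free (uniform) velocity field leaves the pressure unchanged,
# which is exactly the steady state the pseudo-time iteration drives toward.
p0 = [1.0, 1.0, 1.0, 1.0]
u_uniform = [2.0, 2.0, 2.0, 2.0]
p1, u1 = pseudo_compressibility_step(p0, u_uniform, beta=5.0, dt=0.1, dx=1.0)
```

    In the full method, beta (the artificial compressibility) only affects the pseudo-time path to the steady state, not the converged incompressible solution.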

  19. Dimensions and geometry of the temporomandibular joint and masseter muscles.

    PubMed

    Zurowski, R; Gosek, M; Aleksandrowicz, R

    1976-01-01

    The bio-engineering team proposes a method for measuring the temporomandibular joint and masseter muscles in order to determine the parameters necessary for the exact sciences and indispensable for unified and objective cognitive studies. Ten formalin-fixed human cadavers served for the studies. The preparations were prepared by a modified method of anatomical procedure. Linear and angular measurements of the temporomandibular joint and masseter muscles were carried out using a three-dimensional Cartesian OXYZ coordinate system in relation to the frontal, sagittal and horizontal planes. The physiological cross-sections of the masseter, temporal, and lateral and medial pterygoid muscles were also determined. The collected data make it possible to develop a mathematical three-dimensional model of the osseo-articulo-muscular system of the mastication organ.

  20. New theories of relativistic hydrodynamics in the LHC era

    NASA Astrophysics Data System (ADS)

    Florkowski, Wojciech; Heller, Michal P.; Spaliński, Michał

    2018-04-01

    The success of relativistic hydrodynamics as an essential part of the phenomenological description of heavy-ion collisions at RHIC and the LHC has motivated a significant body of theoretical work concerning its fundamental aspects. Our review presents these developments from the perspective of the underlying microscopic physics, using the language of quantum field theory, relativistic kinetic theory, and holography. We discuss the gradient expansion, the phenomenon of hydrodynamization, as well as several models of hydrodynamic evolution equations, highlighting the interplay between collective long-lived and transient modes in relativistic matter. Our aim to provide a unified presentation of this vast subject—which is naturally expressed in diverse mathematical languages—has also led us to include several new results on the large-order behaviour of the hydrodynamic gradient expansion.

  1. Numerical Analysis of the Heat Transfer Characteristics within an Evaporating Meniscus

    NASA Astrophysics Data System (ADS)

    Ball, Gregory

    A numerical analysis was performed to investigate the heat transfer characteristics of an evaporating thin-film meniscus. A mathematical model was used to formulate a third-order ordinary differential equation that governs the evaporating thin film, derived from the continuity, momentum, and energy equations together with the Kelvin-Clapeyron model. This governing equation was treated as an initial value problem and solved numerically using a Runge-Kutta technique. The numerical model uses varying thermophysical properties and boundary conditions, such as channel width, applied superheat, accommodation coefficient, and working fluid, which can be tailored by the user. This work focused mainly on the effects of altering the accommodation coefficient and the applied superheat. A unified solution is also presented which models the meniscus to half the channel width. The model was validated through comparison with literature values. Varying the input values showed that increasing the superheat shortens the film and greatly increases the interfacial curvature overshoot, while decreasing the accommodation coefficient lengthens the thin film and retards the evaporative effects.
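    The initial-value formulation above can be sketched generically: rewrite the third-order equation as a first-order system and advance it with classical fourth-order Runge-Kutta. The right-hand side used below is an illustrative placeholder, not the actual Kelvin-Clapeyron thin-film equation, which the abstract does not give.

```python
import math

def rk4_third_order(f, x0, y0, dy0, d2y0, h, steps):
    """Integrate y''' = f(x, y, y', y'') as a first-order system with RK4."""
    def rhs(x, s):
        y, dy, d2y = s
        return (dy, d2y, f(x, y, dy, d2y))

    s, x = (y0, dy0, d2y0), x0
    for _ in range(steps):
        k1 = rhs(x, s)
        k2 = rhs(x + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
        k3 = rhs(x + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
        k4 = rhs(x + h, tuple(si + h * ki for si, ki in zip(s, k3)))
        s = tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        x += h
    return s

# Placeholder ODE y''' = -y' with y(0)=0, y'(0)=1, y''(0)=0 (solution: sin x).
y, dy, d2y = rk4_third_order(lambda x, y, dy, d2y: -dy, 0.0, 0.0, 1.0, 0.0, 0.01, 100)
```

    The same loop applies once the placeholder right-hand side is replaced by the film-curvature equation and the initial film thickness, slope, and curvature.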

  2. Dynamical and Mechanistic Reconstructive Approaches of T Lymphocyte Dynamics: Using Visual Modeling Languages to Bridge the Gap between Immunologists, Theoreticians, and Programmers

    PubMed Central

    Thomas-Vaslin, Véronique; Six, Adrien; Ganascia, Jean-Gabriel; Bersini, Hugues

    2013-01-01

    Dynamic modeling of lymphocyte behavior has primarily been based on population-based differential equations or on cellular agents moving in space and interacting with each other. The final steps of this modeling effort are expressed in code written in a programming language. Because these steps are not standardized, communication and sharing between experimentalists, theoreticians, and programmers remain poor. The adoption of a diagrammatic visual computer language should greatly help immunologists to communicate better, to identify model similarities more easily, and to facilitate the reuse and extension of existing software models. Since immunologists often conceptualize the dynamical evolution of immune systems in terms of “state-transitions” of biological objects, we promote the use of Unified Modeling Language (UML) state-transition diagrams. To demonstrate the feasibility of this approach, we present a UML refactoring of two published models of thymocyte differentiation. Originally built with different modeling strategies (a mathematical ordinary differential equation-based model and a cellular automata model), the two models are now expressed in the same visual formalism and can be compared. PMID:24101919
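    The "state-transition" view promoted above can be made concrete with a minimal executable sketch. The states below are the textbook thymocyte stages (double negative, double positive, single positive); the event names and the transition table are simplified stand-ins, not the transition structure of either published model.

```python
# A tiny state machine: each (state, event) pair maps to a successor state,
# mirroring a UML state-transition diagram. States and events are illustrative.
TRANSITIONS = {
    ("DN", "beta_selection"): "DP",
    ("DP", "positive_selection_mhc2"): "CD4_SP",
    ("DP", "positive_selection_mhc1"): "CD8_SP",
    ("DP", "failed_selection"): "apoptosis",
}

def step(state, event):
    """Fire one transition; events with no outgoing arc leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "DN"
for event in ["beta_selection", "positive_selection_mhc2"]:
    state = step(state, event)
```

    Both an ODE-based model and a cellular automaton can be annotated with the same table, which is the comparability the authors are after.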

  3. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model.

    PubMed

    Spühler, Jeannette H; Jansson, Johan; Jansson, Niclas; Hoffman, Johan

    2018-01-01

    Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods, where any mathematical description can be translated directly into code. This allows us to develop a cardiac model where specific properties of the heart, such as fluid-structure interaction of the aortic valve, can be added in a modular way without extensive effort. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open source software Unicorn, which shows near optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework.

  4. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model

    PubMed Central

    Spühler, Jeannette H.; Jansson, Johan; Jansson, Niclas; Hoffman, Johan

    2018-01-01

    Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods, where any mathematical description can be translated directly into code. This allows us to develop a cardiac model where specific properties of the heart, such as fluid-structure interaction of the aortic valve, can be added in a modular way without extensive effort. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open source software Unicorn, which shows near optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework. PMID:29713288

  5. Introduction to Quantitative Science, a Ninth-Grade Laboratory-Centered Course Stressing Quantitative Observation and Mathematical Analysis of Experimental Results. Final Report.

    ERIC Educational Resources Information Center

    Badar, Lawrence J.

    This report, in the form of a teacher's guide, presents materials for a ninth grade introductory course on Introduction to Quantitative Science (IQS). It is intended to replace a traditional ninth grade general science with a process oriented course that will (1) unify the sciences, and (2) provide a quantitative preparation for the new science…

  6. Unified framework for information integration based on information geometry

    PubMed Central

    Oizumi, Masafumi; Amari, Shun-ichi

    2016-01-01

    Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289
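    The simplest instance of the geometric picture described above is mutual information: the divergence between the actual joint distribution and the "disconnected" product of its marginals. A small sketch with an illustrative two-bit distribution:

```python
import math

def mutual_information(joint):
    """I(X;Y) = KL(p(x,y) || p(x)p(y)) in bits, for a dict {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly correlated bits carry one bit of integrated structure;
# independent bits carry none.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

    The framework in the paper generalizes exactly this construction: different choices of "disconnected" approximating distribution yield transfer entropy, stochastic interaction, and integrated information.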

  7. Enhancement of the Acquisition Process for a Combat System-A Case Study to Model the Workflow Processes for an Air Defense System Acquisition

    DTIC Science & Technology

    2009-12-01

    Business Process Modeling BPMN Business Process Modeling Notation SoA Service-oriented Architecture UML Unified Modeling Language CSP...system developers. Supporting technologies include Business Process Modeling Notation ( BPMN ), Unified Modeling Language (UML), model-driven architecture

  8. Multiscale geometric modeling of macromolecules II: Lagrangian representation

    PubMed Central

    Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger’s functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses, as well as multiresolution curvature characterizations. The resulting fine-resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interactions and ion channel dynamics, while our coarse-resolution representations highlight the compatibility of protein-ligand bindings and the possibility of protein-protein interactions. PMID:23813599

  9. Dissecting Embryonic Stem Cell Self-Renewal and Differentiation Commitment from Quantitative Models.

    PubMed

    Hu, Rong; Dai, Xianhua; Dai, Zhiming; Xiang, Qian; Cai, Yanning

    2016-10-01

    To model embryonic stem cell (ESC) self-renewal and differentiation quantitatively by computational approaches, we developed a unified mathematical model for the gene expression involved in cell fate choices. Our quantitative model comprised ESC master regulators and lineage-specific pivotal genes. It took the factors of multiple pathways as input and computed expression as a function of intrinsic transcription factors, extrinsic cues, epigenetic modifications, and antagonism between ESC master regulators and lineage-specific pivotal genes. In the model, differential equations for the expression of the genes involved in cell fate choices were established from the regulatory relationships, according to the transcription and degradation rates. We applied this model to murine ESC self-renewal and differentiation commitment and found that it modeled the expression patterns with good accuracy. Our model analysis revealed that the murine ESC state was an attractor in culture and that differentiation was predominantly caused by antagonism between ESC master regulators and lineage-specific pivotal genes. Moreover, antagonism among lineages played a critical role in lineage reprogramming. Our results also uncovered that the ordered alteration of ESC master regulator expression over time had a central role in ESC differentiation fates. Our computational framework was generally applicable to most cell-type maintenance and lineage reprogramming.
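    A hypothetical two-gene reduction can illustrate the transcription-minus-degradation form of the equations described above: an ESC master regulator and a lineage-specific gene each repress the other through a Hill term (reminiscent of the classic genetic toggle switch), and an extrinsic cue biases the lineage gene. All names and parameter values here are illustrative, not the paper's fitted model.

```python
# e: ESC master regulator level, l: lineage-specific gene level.
# dx/dt = transcription (Hill repression by the antagonist) - degradation.
def simulate(e, l, signal, a=1.0, k=0.5, n=4, deg=1.0, dt=0.01, steps=20000):
    for _ in range(steps):
        de = a / (1.0 + (l / k) ** n) - deg * e
        dl = (a + signal) / (1.0 + (e / k) ** n) - deg * l
        e += dt * de
        l += dt * dl
    return e, l

# Starting from an ESC-like state with no differentiation cue, the master
# regulator stays dominant: the ESC state behaves as an attractor.
e_self, l_self = simulate(1.0, 0.0, signal=0.0)
# A strong extrinsic cue boosts the lineage gene past the antagonism
# threshold and flips the switch toward the differentiated state.
e_diff, l_diff = simulate(1.0, 0.0, signal=10.0)
```

    The bistability of this toy system is the mechanism the abstract attributes to antagonism between master regulators and lineage genes.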

  10. Universal Darwinism As a Process of Bayesian Inference.

    PubMed

    Campbell, John O

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
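    The update at the center of this argument can be sketched in a few lines: hypotheses play the role of variants, the likelihood plays the role of fitness, and normalization redistributes weight toward hypotheses that predict the evidence well. The two-hypothesis coin example below is illustrative.

```python
def bayes_update(prior, likelihood):
    """posterior(h) is proportional to prior(h) * likelihood(evidence | h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"h_fair": 0.5, "h_biased": 0.5}
# Likelihood of observing heads under each hypothesis.
likelihood = {"h_fair": 0.5, "h_biased": 0.9}
posterior = bayes_update(prior, likelihood)
```

    Iterating this step over a stream of observations is the accumulation of evidence-based knowledge the abstract describes; the "surprise" of each observation is the negative log of its marginal likelihood z.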

  11. Universal Darwinism As a Process of Bayesian Inference

    PubMed Central

    Campbell, John O.

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an “experiment” in the external world environment, and the results of that “experiment” or the “surprise” entailed by predicted and actual outcomes of the “experiment.” Minimization of free energy implies that the implicit measure of “surprise” experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature. PMID:27375438

  12. Methanosarcina as the dominant aceticlastic methanogens during mesophilic anaerobic digestion of putrescible waste.

    PubMed

    Vavilin, Vasily A; Qu, Xian; Mazéas, Laurent; Lemunier, Melanie; Duquennoi, Christian; He, Pinjing; Bouchez, Theodore

    2008-11-01

    Taking the (13)C isotope value into account, a mathematical model was developed to describe the dynamics of the methanogenic population during mesophilic anaerobic digestion of putrescible solid waste (PSW) and waste imitating Chinese municipal solid waste (CMSW). Three groups of methanogens were considered in the model: unified hydrogenotrophic methanogens and two aceticlastic methanogens, Methanosaeta sp. and Methanosarcina sp. It was assumed that Methanosaeta sp. and Methanosarcina sp. are inhibited by high volatile fatty acid concentrations. The total organic and inorganic carbon concentrations, methane production, methane and carbon dioxide partial pressures, as well as the (13)C isotope incorporation in PSW and CMSW, were used for model calibration and validation. The model showed that, in spite of the high initial biomass concentration of Methanosaeta sp., Methanosarcina sp. became the dominant aceticlastic methanogens in the system. This prediction was confirmed by FISH. It is concluded that Methanosarcina sp., which forms multicellular aggregates, may resist inhibition by volatile fatty acids (VFAs) because the slow diffusion rate of the acids limits the VFA concentrations inside the Methanosarcina sp. aggregates.

  13. Can quantum approaches benefit biology of decision making?

    PubMed

    Takahashi, Taiki

    2017-11-01

    Human decision making has recently been focused in the emerging fields of quantum decision theory and neuroeconomics. The former discipline utilizes mathematical formulations developed in quantum theory, while the latter combines behavioral economics and neurobiology. In this paper, the author speculates on possible future directions unifying the two approaches, by contrasting the roles of quantum theory in the birth of molecular biology of the gene. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Flexibility evaluation of multiechelon supply chains.

    PubMed

    Almeida, João Flávio de Freitas; Conceição, Samuel Vieira; Pinto, Luiz Ricardo; de Camargo, Ricardo Saraiva; Júnior, Gilberto de Miranda

    2018-01-01

    Multiechelon supply chains are complex logistics systems that require flexibility and coordination at a tactical level to cope with environmental uncertainties in an efficient and effective manner. To cope with these challenges, mathematical programming models are developed to evaluate supply chain flexibility. However, under uncertainty, supply chain models become complex and the scope of flexibility analysis is generally reduced. This paper presents a unified approach that can evaluate the flexibility of a four-echelon supply chain via a robust stochastic programming model. The model simultaneously considers the plans of multiple business divisions such as marketing, logistics, manufacturing, and procurement, whose goals are often conflicting. A numerical example with deterministic parameters is presented to introduce the analysis, and then, the model stochastic parameters are considered to evaluate flexibility. The results of the analysis on supply, manufacturing, and distribution flexibility are presented. Tradeoff analysis of demand variability and service levels is also carried out. The proposed approach facilitates the adoption of different management styles, thus improving supply chain resilience. The model can be extended to contexts pertaining to supply chain disruptions; for example, the model can be used to explore operation strategies when subtle events disrupt supply, manufacturing, or distribution.
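    The flavor of the scenario-based evaluation can be sketched with a toy service-level computation: given a fixed capacity plan, the expected fraction of demand served across demand scenarios is the kind of quantity a robust stochastic program trades off against cost. The scenario data below are illustrative, not drawn from the paper.

```python
def expected_service_level(capacity, scenarios):
    """Expected fraction of demand served; scenarios is a list of
    (probability, demand) pairs summing to probability 1."""
    return sum(p * min(capacity, d) / d for p, d in scenarios)

# Three illustrative demand scenarios: low, nominal, and high.
scenarios = [(0.25, 80.0), (0.50, 100.0), (0.25, 140.0)]
level_tight = expected_service_level(100.0, scenarios)   # capacity at nominal demand
level_flex = expected_service_level(130.0, scenarios)    # extra (flexible) capacity
```

    The full model optimizes such decisions jointly across four echelons and multiple business divisions under uncertainty, rather than evaluating a fixed plan.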

  15. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on the development of a language for the specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS, which will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  16. Flexibility evaluation of multiechelon supply chains

    PubMed Central

    Conceição, Samuel Vieira; Pinto, Luiz Ricardo; de Camargo, Ricardo Saraiva; Júnior, Gilberto de Miranda

    2018-01-01

    Multiechelon supply chains are complex logistics systems that require flexibility and coordination at a tactical level to cope with environmental uncertainties in an efficient and effective manner. To cope with these challenges, mathematical programming models are developed to evaluate supply chain flexibility. However, under uncertainty, supply chain models become complex and the scope of flexibility analysis is generally reduced. This paper presents a unified approach that can evaluate the flexibility of a four-echelon supply chain via a robust stochastic programming model. The model simultaneously considers the plans of multiple business divisions such as marketing, logistics, manufacturing, and procurement, whose goals are often conflicting. A numerical example with deterministic parameters is presented to introduce the analysis, and then, the model stochastic parameters are considered to evaluate flexibility. The results of the analysis on supply, manufacturing, and distribution flexibility are presented. Tradeoff analysis of demand variability and service levels is also carried out. The proposed approach facilitates the adoption of different management styles, thus improving supply chain resilience. The model can be extended to contexts pertaining to supply chain disruptions; for example, the model can be used to explore operation strategies when subtle events disrupt supply, manufacturing, or distribution. PMID:29584755

  17. Next Generation Community Based Unified Global Modeling System Development and Operational Implementation Strategies at NCEP

    NASA Astrophysics Data System (ADS)

    Tallapragada, V.

    2017-12-01

    NOAA's Next Generation Global Prediction System (NGGPS) has provided the unique opportunity to develop and implement a non-hydrostatic global model based on the Geophysical Fluid Dynamics Laboratory (GFDL) Finite Volume Cubed Sphere (FV3) dynamic core at the National Centers for Environmental Prediction (NCEP), a major advance in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized, with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure based on the Earth System Modeling Framework (ESMF). More sophisticated coupling among the various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual unification of global and regional models will enable operational global models operating at convection-resolving scales. Apart from the advanced non-hydrostatic dynamic core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on the creation of a Common Community Physics Package (CCPP) and the Joint Effort for Data Assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established, with specific roles and responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resource allocation, and prioritization.
This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications with special emphasis on implementation of NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.

  18. Unified approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Chang, Tyne-Hsien

    1993-12-01

    A unified approach for solving both compressible and incompressible flows was investigated in this study. The difference in CFD code development between incompressible and compressible flows is due to their differing mathematical character. However, if one modifies the continuity equation for incompressible flows by introducing pseudocompressibility, the governing equations for incompressible flows acquire the same mathematical character as those for compressible flows, and the application of a compressible flow code to solve incompressible flows becomes feasible. Among the numerical algorithms developed for compressible flows, the Centered Total Variation Diminishing (CTVD) schemes possess better mathematical properties for damping out spurious oscillations while providing high-order accuracy for high speed flows. This leads us to believe that CTVD schemes can solve incompressible flows equally well. In this study, the governing equations for incompressible flows include the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure containing the artificial compressibility. The modified continuity equation together with the unsteady momentum equations forms a hyperbolic-parabolic type of time-dependent system of equations, so the CTVD schemes can be implemented. In addition, the boundary conditions, both physical and numerical, must be properly specified to obtain accurate solutions. The CFD code for this research is currently in progress. 
Flow past a circular cylinder will be used for numerical experiments to determine the accuracy and efficiency of the code before applying this code to more specific applications.
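    The pseudocompressibility modification described above is conventionally written, with beta denoting the artificial compressibility parameter, as

```latex
\frac{1}{\beta}\,\frac{\partial p}{\partial t} + \frac{\partial u_j}{\partial x_j} = 0
```

    At steady state the pressure derivative vanishes and the incompressible continuity equation is recovered; in pseudo-time the augmented system is hyperbolic, with wave speeds set by beta, which is what makes compressible-flow algorithms such as the CTVD schemes applicable.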

  19. A unified approach to the analysis and design of elasto-plastic structures with mechanical contact

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Olhoff, Niels; Taylor, John E.

    1990-01-01

    With structural design in mind, a new unified variational model has been developed which represents the mechanics of elasto-plastic deformation with unilateral contact conditions. For a design problem formulated as maximization of the load-carrying capacity of a structure under certain constraints, the unified model allows for simultaneous analysis and design synthesis over a whole range of mechanical behavior.

  20. Position control of an industrial robot using fractional order controller

    NASA Astrophysics Data System (ADS)

    Clitan, Iulia; Muresan, Vlad; Abrudean, Mihail; Clitan, Andrei; Miron, Radu

    2017-02-01

    This paper presents the design of a control structure that ensures no overshoot in the movement of an industrial robot used for the evacuation of round steel blocks from inside a rotary hearth furnace. First, a mathematical model of the positioning system is derived from a set of experimental data; the paper then focuses on obtaining a PID-type controller, using the relay method for tuning, in order to obtain a stable closed-loop system. The controller parameters are further tuned through computer simulation, by trial and error, to achieve the imposed set of performances for the positioning of the industrial robot. Finally, a fractional-order PID controller is obtained in order to improve the control signal variation, so that it fits within the unified current range of 4 to 20 mA.
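    A minimal discrete PID loop of the kind described above can be sketched against a hypothetical first-order plant standing in for the identified positioning model, which the abstract does not give. The gains and plant time constant are illustrative, not the tuned values; with these particular values the closed-loop poles are real, so the step response settles without overshoot.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, tau=1.0, dt=0.001, steps=20000):
    """Discrete PID driving a first-order plant dy/dt = (u - y)/tau."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    history = []
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (u - y) / tau   # explicit Euler step of the plant
        history.append(y)
    return history

# Illustrative gains giving real closed-loop poles (no overshoot).
history = simulate_pid(kp=2.0, ki=1.0, kd=0.0)
```

    A fractional-order controller replaces the integral and derivative terms with fractional powers of s, giving extra shaping freedom for the control signal; that extension is beyond this sketch.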

  1. Critical Behaviors in Contagion Dynamics.

    PubMed

    Böttcher, L; Nagler, J; Herrmann, H J

    2017-02-24

    We study the critical behavior of a general contagion model where nodes are either active (e.g., with opinion A, or functioning) or inactive (e.g., with opinion B, or damaged). The transitions between these two states are determined by (i) spontaneous transitions independent of the neighborhood, (ii) transitions induced by neighboring nodes, and (iii) spontaneous reverse transitions. The resulting dynamics is extremely rich including limit cycles and random phase switching. We derive a unifying mean-field theory. Specifically, we analytically show that the critical behavior of systems whose dynamics is governed by processes (i)-(iii) can only exhibit three distinct regimes: (a) uncorrelated spontaneous transition dynamics, (b) contact process dynamics, and (c) cusp catastrophes. This ends a long-standing debate on the universality classes of complex contagion dynamics in mean field and substantially deepens its mathematical understanding.
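    A mean-field rate equation with the three processes the abstract lists can be sketched as follows: spontaneous activation at rate p, neighbor-induced activation at rate q, and spontaneous deactivation at rate r, for the fraction a(t) of active nodes. The specific form below is an illustrative well-mixed reduction, not the paper's exact equations.

```python
def steady_state(p, q, r, a=0.1, dt=0.01, steps=10000):
    """Euler-integrate da/dt = p(1-a) + q a(1-a) - r a to its fixed point."""
    for _ in range(steps):
        a += dt * (p * (1 - a) + q * a * (1 - a) - r * a)
    return a

# Pure contact-process regime (no spontaneous activation, p = 0):
# the active fraction approaches the fixed point a* = 1 - r/q when q > r.
a_contact = steady_state(p=0.0, q=2.0, r=1.0)
```

    Varying p, q, and r moves this toy system between the regimes the paper classifies: spontaneous-transition dynamics, contact-process dynamics, and, with richer induced-transition terms, cusp catastrophes.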

  2. E-HOSPITAL - A Digital Workbench for Hospital Operations and Services Planning Using Information Technology and Algebraic Languages.

    PubMed

    Gartner, Daniel; Padman, Rema

    2017-01-01

    In this paper, we describe the development of a unified framework and a digital workbench for the strategic, tactical and operational hospital management plan driven by information technology and analytics. The workbench can be used not only by multiple stakeholders in the healthcare delivery setting, but also for pedagogical purposes on topics such as healthcare analytics, services management, and information systems. This tool combines the three classical hierarchical decision-making levels in one integrated environment. At each level, several decision problems can be chosen. Extensions of mathematical models from the literature are presented and incorporated into the digital platform. In a case study using real-world data, we demonstrate how we used the workbench to inform strategic capacity planning decisions in a multi-hospital, multi-stakeholder setting in the United Kingdom.

  3. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
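
    A one-dimensional toy version of this construction: on the unit circle the Laplace-Beltrami eigenfunctions are the Fourier modes with eigenvalues k², so heat kernel smoothing is a weighted eigenfunction expansion with weights e^(-k²t). This is only an illustrative sketch on S¹, not the authors' surface implementation.

```python
import math

def heat_kernel_smooth(samples, t, kmax=10):
    """Heat-kernel smoothing of data at equally spaced angles on the unit
    circle: expand in Fourier eigenfunctions (eigenvalues k^2), damp each
    mode by exp(-k^2 * t), and resynthesize."""
    n = len(samples)
    thetas = [2.0 * math.pi * i / n for i in range(n)]
    a0 = sum(samples) / n
    out = [a0] * n
    for k in range(1, kmax + 1):
        ak = 2.0 / n * sum(v * math.cos(k * th) for v, th in zip(samples, thetas))
        bk = 2.0 / n * sum(v * math.sin(k * th) for v, th in zip(samples, thetas))
        w = math.exp(-k * k * t)   # heat kernel weight e^{-lambda t}
        for i, th in enumerate(thetas):
            out[i] += w * (ak * math.cos(k * th) + bk * math.sin(k * th))
    return out

# low-frequency signal plus high-frequency ripple; smoothing damps the
# ripple (weight e^{-100 t}) far more than the signal (weight e^{-t})
noisy = [math.sin(2.0 * math.pi * i / 64) + 0.3 * math.cos(20.0 * math.pi * i / 64)
         for i in range(64)]
smooth = heat_kernel_smooth(noisy, t=0.5)
```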

  4. Automated speech understanding: the next generation

    NASA Astrophysics Data System (ADS)

    Picone, J.; Ebel, W. J.; Deshmukh, N.

    1995-04-01

    Modern speech understanding systems merge interdisciplinary technologies from Signal Processing, Pattern Recognition, Natural Language, and Linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear algebra-based mathematics, the current generation of DSP-based systems relies on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet and providing completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.

  5. Unified Plant Growth Model (UPGM). 1. Background, objectives, and vision.

    USDA-ARS?s Scientific Manuscript database

    Since the development of the Environmental Policy Integrated Climate (EPIC) model in 1988, the EPIC-based plant growth code has been incorporated and modified into many agro-ecosystem models. The goals of the Unified Plant Growth Model (UPGM) project are: 1) integrating into one platform the enhance...

  6. Catastrophe Theory: A Unified Model for Educational Change.

    ERIC Educational Resources Information Center

    Cryer, Patricia; Elton, Lewis

    1990-01-01

    Catastrophe Theory and Herzberg's theory of motivation at work were used to create a model of change that unifies and extends Lewin's two separate stage and force field models. This new model is used to analyze the behavior of academics as they adapt to the changing university environment. (Author/MLW)

  7. The diversity and unity of reactor noise theory

    NASA Astrophysics Data System (ADS)

    Kuang, Zhifeng

    The study of reactor noise theory concerns cause-and-effect relationships and the utilisation of random noise in nuclear reactor systems. The diversity of reactor noise theory arises from the variety of noise sources, the various mathematical treatments applied, and the various practical purposes. The neutron noise in zero-energy systems arises from the fluctuations in the number of neutrons per fission, the time between nuclear events, and the type of reactions; it can be used to evaluate system parameters. The mathematical treatment is based on the master equation of stochastic branching processes. The noise in power reactor systems arises from random processes of technological origin, such as vibration of mechanical parts, boiling of the coolant, and fluctuations of temperature and pressure; it can be used to monitor reactor behaviour, with the possibility of detecting malfunctions at an early stage. The mathematical treatment is based on the Langevin equation. The unity of reactor noise theory arises from the fact that the useful information carried by noise is embedded in the second moments of random variables, which offers the possibility of building up a unified mathematical description and analysis of the various reactor noise sources. Exploring such possibilities is the main subject among the three major topics reported in this thesis. The first subject lies within zero power noise in steady media, where we report an extension of the existing theory to more general cases. In Paper I, using the master equation approach, we derived the most general Feynman- and Rossi-alpha formulae so far, taking into account the full joint statistics of the prompt neutrons and all six groups of delayed neutron precursors, as well as a multiple emission source. The problems involved are solved with a combination of effective analytical techniques and symbolic algebra codes (Mathematica). Paper II gives a numerical evaluation of these formulae.
An assessment of the contribution of the terms that are novel compared to the traditional formulae has been made. The second subject treats a problem in power reactor noise with the Langevin formalism. With very few exceptions, all previous work used the diffusion approximation. In order to extend the treatment to transport theory, in Paper III we introduced a novel method, Padé approximation via the Lanczos algorithm, to calculate the transfer function of a finite slab reactor described by the one-group transport equation. It was found that the local-global decomposition of the neutron noise, formerly reproduced only in at least two-group theory, can be reconstructed. We have also shown the existence of a boundary layer of the neutron noise close to the boundary. Finally, we have explored the possibility of building up a unified theory to account for the coexistence of zero power and power reactor noise in a system. In Paper IV, a unified description of the neutron noise is given by the use of backward master equations in a model where the cross-section fluctuations are given as a simple binary pseudorandom process. The general solution contains both the zero power and power reactor noise concurrently, and each can be extracted individually as a limiting case of the general solution. This justifies the separate treatment of zero power and power reactor noise. The result was extended to the case including one group of delayed neutron precursors in Paper V.
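
    For orientation, the classical prompt-neutron Feynman-Y curve that Paper I generalizes can be evaluated directly. This is the textbook one-group limit (delayed-neutron groups and multiple emission omitted), and the parameter values below are illustrative.

```python
import math

def feynman_y(T, alpha, y_inf):
    """Classical prompt-neutron-only Feynman-Y (variance-to-mean minus one)
    versus gate time T, with prompt decay constant alpha and asymptotic
    value y_inf. Illustrative limit, not the thesis's general formulae."""
    return y_inf * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))

# vanishes for very short gates and saturates at y_inf for long ones
y_short = feynman_y(1e-6, 100.0, 2.0)
y_long = feynman_y(10.0, 100.0, 2.0)
```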

  8. Universal Parameter Measurement and Sensorless Vector Control of Induction and Permanent Magnet Synchronous Motors

    NASA Astrophysics Data System (ADS)

    Yamamoto, Shu; Ara, Takahiro

    Recently, induction motors (IMs) and permanent-magnet synchronous motors (PMSMs) have been used in various industrial drive systems. The features of the hardware device used for controlling the adjustable-speed drive in these motors are almost identical. Despite this, different techniques are generally used for parameter measurement and speed-sensorless control of these motors. If the same technique can be used for parameter measurement and sensorless control, a highly versatile adjustable-speed-drive system can be realized. In this paper, the authors describe a new universal sensorless control technique for both IMs and PMSMs (including salient pole and nonsalient pole machines). A mathematical model applicable for IMs and PMSMs is discussed. Using this model, the authors derive the proposed universal sensorless vector control algorithm on the basis of estimation of the stator flux linkage vector. All the electrical motor parameters are determined by a unified test procedure. The proposed method is implemented on three test machines. The actual driving test results demonstrate the validity of the proposed method.

  9. Effective algorithm for solving complex problems of production control and of material flows control of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Mezentsev, Yu A.; Baranova, N. V.

    2018-05-01

    A universal economic and mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of an enterprise is considered. The declared universality makes it possible to take into account, at the system level, both production components, including limitations on the ways of converting raw materials and components into sold goods, and resource and logical restrictions on input and output material flows. The presented model and the control problems generated from it are developed within a unified framework that allows logical conditions of any complexity to be implemented and the corresponding formal optimization tasks to be defined. The conceptual meaning of the criteria and limitations used is explained. The generated mixed-programming tasks are shown to belong to the class NP. An approximate polynomial algorithm for solving the posed mixed-programming optimization tasks of realistic dimension and high computational complexity is proposed. Results of testing the algorithm on tasks over a wide range of dimensions are presented.

  10. Knot invariants and M-theory: Proofs and derivations

    NASA Astrophysics Data System (ADS)

    Errasti Díez, Verónica

    2018-01-01

    We construct two distinct yet related M-theory models that provide suitable frameworks for the study of knot invariants. We then focus on the four-dimensional gauge theory that follows from appropriately compactifying one of these M-theory models. We show that this theory has indeed all required properties to host knots. Our analysis provides a unifying picture of the various recent works that attempt an understanding of knot invariants using techniques of four-dimensional physics. This is a companion paper to K. Dasgupta, V. Errasti Díez, P. Ramadevi, and R. Tatar, Phys. Rev. D 95, 026010 (2017), 10.1103/PhysRevD.95.026010, covering all but Sec. III C. It presents a detailed mathematical derivation of the main results there, as well as additional material. Among the new insights, those related to supersymmetry and the topological twist are highlighted. This paper offers an alternative, complementary formulation of the contents in the first paper, but is self-contained and can be read independently.

  11. Generalized Lorenz equations on a three-sphere

    NASA Astrophysics Data System (ADS)

    Saiki, Yoshitaka; Sander, Evelyn; Yorke, James A.

    2017-06-01

    Edward Lorenz is best known for one specific three-dimensional differential equation, but he actually created a variety of related N-dimensional models. In this paper, we discuss a unifying principle for these models and put them into an overall mathematical framework. Because this family of models is so large, we are forced to choose. We sample the variety of dynamics seen in these models by concentrating on a four-dimensional version of the Lorenz models for which there are three parameters and the norm of the solution vector is preserved. We can therefore restrict our focus to trajectories on the unit sphere S³ in ℝ⁴. Furthermore, we create a type of Poincaré return map. We choose the Poincaré surface to be the set where one of the variables is 0, i.e., the Poincaré surface is a two-sphere S² in ℝ³. Examining different choices of our three parameters, we illustrate the wide variety of dynamical behaviors, including chaotic attractors, period doubling cascades, Standard-Map-like structures, and quasiperiodic trajectories. Note that neither Standard-Map-like structure nor quasiperiodicity has previously been reported for Lorenz models.
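
    A minimal numerical companion: the classical Lorenz-63 member of this family, integrated with a hand-rolled RK4 step. The parameters are the standard chaotic ones; the paper's four-dimensional, norm-preserving variant and its Poincaré map are not reproduced here.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the classical Lorenz-63 system, the best-known
    member of the family of Lorenz models (standard chaotic parameters)."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2.0))
    k3 = f(nudge(state, k2, dt / 2.0))
    k4 = f(nudge(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0)
for _ in range(5000):  # integrate 50 time units at dt = 0.01
    s = lorenz_step(s, 0.01)
```

The trajectory stays on the bounded strange attractor despite the chaotic dynamics.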

  12. Modeling microbial products in activated sludge under feast-famine conditions.

    PubMed

    Ni, Bing-Jie; Fang, Fang; Rittmann, Bruce E; Yu, Han-Qing

    2009-04-01

    We develop an expanded unified model that integrates production and consumption of internal storage products (X(STO)) with extracellular polymeric substances (EPS), soluble microbial products (SMP), and active and inert biomass in activated sludge. We also conducted independent experiments to find the needed parameter values and to test the ability of the expanded unified model to describe all the microbial products, along with original substrate and oxygen uptake. The model simulations match all experimental measurements and provide insights into the dynamics of soluble and solid components in activated sludge exposed to dynamic feast-and-famine conditions in two batch experiments and in one cycle of a sequencing batch reactor. In particular, the model illustrates how X(STO) cycles up and down rapidly during feast and famine periods, while EPS and biomass components remain relatively stable. The agreement between model outputs and experimental EPS, SMP, and X(STO) data from distinctly different experiments supports the conclusion that the expanded unified model properly captures the relationships among the forms of microbial products.

  13. Hard X-ray tests of the unified model for an ultraviolet-detected sample of Seyfert 2 galaxies

    NASA Technical Reports Server (NTRS)

    Mulchaey, John S.; Mushotzky, Richard F.; Weaver, Kimberly A.

    1992-01-01

    An ultraviolet-detected sample of Seyfert 2 galaxies shows heavy photoelectric absorption in the hard X-ray band. The presence of UV emission combined with hard X-ray absorption argues strongly for a special geometry which must have the general properties of the Antonucci and Miller unified model. The observations of this sample are consistent with the picture in which the hard X-ray photons are viewed directly through the obscuring matter (molecular torus?) and the optical, UV, and soft X-ray continuum are seen in scattered light. The large range in X-ray column densities implies that there must be a large variation in intrinsic thicknesses of molecular tori, an assumption not found in the simplest of unified models. Furthermore, constraints based on the cosmic X-ray background suggest that some of the underlying assumptions of the unified model are wrong.

  14. Backpropagation and ordered derivatives in the time scales calculus.

    PubMed

    Seiffertt, John; Wunsch, Donald C

    2010-08-01

    Backpropagation is the most widely used neural network learning technique. It is based on the mathematical notion of an ordered derivative. In this paper, we present a formulation of ordered derivatives and the backpropagation training algorithm using the important emerging area of mathematics known as the time scales calculus. This calculus, with its potential for application to a wide variety of interdisciplinary problems, is becoming a key area of mathematics. It is capable of unifying continuous and discrete analysis within one coherent theoretical framework. Using this calculus, we present here a generalization of backpropagation which is appropriate for cases beyond the specifically continuous or discrete. We develop a new multivariate chain rule of this calculus, define ordered derivatives on time scales, prove a key theorem about them, and derive the backpropagation weight update equations for a feedforward multilayer neural network architecture. By drawing together the time scales calculus and the area of neural network learning, we present the first connection between these two major fields of research.
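
    On the uniform discrete time scale, the generalized update equations reduce to ordinary backpropagation. A self-contained sketch for a 1-2-1 sigmoid network follows; the toy data and hyperparameters are assumptions for illustration, not taken from the paper.

```python
import math, random

def train_tiny_net(epochs=2000, lr=0.1):
    """Plain discrete-time backpropagation (ordered derivatives) for a
    1-2-1 sigmoid network learning the identity map on two points.
    Returns (loss_before, loss_after) to show that training helps."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(7)]  # 2 in->hid, 2 hid bias, 2 hid->out, out bias
    data = [(0.0, 0.0), (1.0, 1.0)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    def forward(x):
        h = [sig(w[i] * x + w[2 + i]) for i in range(2)]
        return h, w[4] * h[0] + w[5] * h[1] + w[6]
    def loss():
        return sum((forward(x)[1] - t) ** 2 for x, t in data)
    before = loss()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = 2.0 * (y - t)                 # ordered derivative at the output
            for i in range(2):                 # chain rule back through the hidden layer
                dh = dy * w[4 + i] * h[i] * (1.0 - h[i])
                w[i] -= lr * dh * x
                w[2 + i] -= lr * dh
                w[4 + i] -= lr * dy * h[i]
            w[6] -= lr * dy
    return before, loss()
```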

  15. [Mathematics, natural sciences and technology--parts of the encyclopedia Die Kultur der Gegenwart (The culture of today)].

    PubMed

    Tobies, Renate

    2008-03-01

    The paper explores the trend of the early 20th century to consolidate mathematics, natural sciences, medicine and technology under the umbrella of one integrative culture--a tendency which contrasts with the increasing mainstream trend of separating the humanities from the natural sciences. The unifying umbrella was framed by the great encyclopedia Die Kultur der Gegenwart which was published by B. G. Teubner from 1905 to 1925 and was planned to run up to 62 volumes. We analyze the quantitative rate of the parts devoted to the humanities, the natural sciences and technology, respectively, the degree to which these parts were completed in this encyclopedia. In particular, we investigate the role of mathematicians and their reasons to find a classification for the mathematical, natural scientific and engineering parts of culture as well as their reasons, to win Nobel prize winners and other famous scientists to become co-editors and authors. We examine the published volumes in the fields of mathematics, chemistry, physics, astronomy and technology in order to show what type of publication--professional or popular--was intended. Furthermore, we illuminate how the educational reform of mathematics, natural sciences and technology of this period--which included a reform of girls' and women's education--was reflected in the encyclopedia Die Kultur der Gegenwart.

  16. Strain rate dependent hyperelastic stress-stretch behavior of a silica nanoparticle reinforced poly (ethylene glycol) diacrylate nanocomposite hydrogel.

    PubMed

    Zhan, Yuexing; Pan, Yihui; Chen, Bing; Lu, Jian; Zhong, Zheng; Niu, Xinrui

    2017-11-01

    Poly (ethylene glycol) diacrylate (PEGDA) derivatives are important biomedical materials. PEGDA-based hydrogels have emerged as popular regenerative orthopedic materials. This work aims to study the mechanical behavior of a PEGDA-based silica nanoparticle (NP) reinforced nanocomposite (NC) hydrogel at physiological strain rates. The work combines materials fabrication, mechanical experiments, mathematical modeling and structural analysis. The strain rate dependent stress-stretch behaviors were observed, analyzed and quantified. Visco-hyperelasticity was identified as the deformation mechanism of the nano-silica/PEGDA NC hydrogel. NPs showed a significant effect on both the initial shear modulus and the viscoelastic material properties. A structure-based quasi-linear viscoelastic (QLV) model was constructed and is capable of describing the visco-hyperelastic stress-stretch behavior of the NC hydrogel. A group of unified material parameters was extracted by the model from the stress-stretch curves obtained at different strain rates. The visco-hyperelastic behavior of the NP/polymer interphase was not only identified but also quantified. The work could provide guidance for the structural design of next-generation NC hydrogels. Copyright © 2017. Published by Elsevier Ltd.
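
    The QLV idea, stress as a hereditary integral of an elastic response weighted by a reduced relaxation function, can be sketched numerically. The one-term Prony kernel, linear elastic response, and parameter values below are illustrative assumptions, not the paper's fitted constitutive law.

```python
import math

def qlv_stress(times, strain, g_inf=0.4, g1=0.6, tau=1.0, mu=1.0):
    """QLV stress for a given strain history:
      sigma(t) = integral of G(t - s) * d(sigma_e)/ds ds,
    with reduced relaxation G(t) = g_inf + g1*exp(-t/tau) and a linear
    elastic response sigma_e = mu * strain (illustrative choices)."""
    stress = [0.0]
    for i in range(1, len(times)):
        s = 0.0
        for j in range(1, i + 1):
            ds_e = mu * (strain[j] - strain[j - 1])   # elastic stress increment
            s += (g_inf + g1 * math.exp(-(times[i] - times[j]) / tau)) * ds_e
        stress.append(s)
    return stress

# ramp to strain 0.2 over 1 s, then hold: stress peaks at the end of the
# ramp and relaxes toward the equilibrium value g_inf * mu * 0.2
dt = 0.01
times = [i * dt for i in range(1001)]
strain = [min(t, 1.0) * 0.2 for t in times]
sigma = qlv_stress(times, strain)
```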

  17. Description of waves in inhomogeneous domains using Heun's equation

    NASA Astrophysics Data System (ADS)

    Bednarik, M.; Cervenka, M.

    2018-04-01

    There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. However, the interval on which the solution is sought includes two regular singular points; for this reason, a method is proposed that resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.
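
    For comparison, the standard constant-parameter transfer-matrix calculation looks as follows (rectangular barriers, units with ħ = 2m = 1 so k = sqrt(E - V)); the paper's contribution is the analogous machinery for trigonometric profiles, built from local Heun functions.

```python
import cmath

def transmission(E, barriers, gap=0.0):
    """Transmission coefficient |t|^2 through a sequence of rectangular
    barriers via 2x2 transfer matrices. barriers: list of (height V,
    width w), separated by free regions of length `gap`. Constant-parameter
    sketch only, not the paper's Heun-function method."""
    def wavenum(V):
        return cmath.sqrt(complex(E - V, 0.0))
    # regions of constant potential: free lead, barriers with gaps, free lead
    regions = [(0.0, 0.0)]
    for i, (V, w) in enumerate(barriers):
        regions.append((V, w))
        if i < len(barriers) - 1:
            regions.append((0.0, gap))
    regions.append((0.0, 0.0))
    M = [[1.0 + 0.0j, 0.0j], [0.0j, 1.0 + 0.0j]]
    for j in range(len(regions) - 1):
        kj, kn = wavenum(regions[j][0]), wavenum(regions[j + 1][0])
        d = regions[j][1]
        p, q = cmath.exp(1j * kj * d), cmath.exp(-1j * kj * d)
        r = kj / kn
        # propagate across region j, then match psi and psi' at the interface
        step = [[0.5 * (1 + r) * p, 0.5 * (1 - r) * q],
                [0.5 * (1 - r) * p, 0.5 * (1 + r) * q]]
        M = [[step[0][0] * M[0][0] + step[0][1] * M[1][0],
              step[0][0] * M[0][1] + step[0][1] * M[1][1]],
             [step[1][0] * M[0][0] + step[1][1] * M[1][0],
              step[1][0] * M[0][1] + step[1][1] * M[1][1]]]
    return abs(1.0 / M[1][1]) ** 2
```

For a single barrier this reproduces the closed-form tunneling result; for several barriers separated by gaps it exhibits the resonances of a locally periodic structure.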

  18. Evidence accumulation in decision making: unifying the "take the best" and the "rational" models.

    PubMed

    Lee, Michael D; Cummins, Tarrant D R

    2004-04-01

    An evidence accumulation model of forced-choice decision making is proposed to unify the fast and frugal take the best (TTB) model and the alternative rational (RAT) model with which it is usually contrasted. The basic idea is to treat the TTB model as a sequential-sampling process that terminates as soon as any evidence in favor of a decision is found and the rational approach as a sequential-sampling process that terminates only when all available information has been assessed. The unified TTB and RAT models were tested in an experiment in which participants learned to make correct judgments for a set of real-world stimuli on the basis of feedback, and were then asked to make additional judgments without feedback for cases in which the TTB and the rational models made different predictions. The results show that, in both experiments, there was strong intraparticipant consistency in the use of either the TTB or the rational model but large interparticipant differences in which model was used. The unified model is shown to be able to capture the differences in decision making across participants in an interpretable way and is preferred by the minimum description length model selection criterion.
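
    The two decision rules are easy to state as code. In the unified reading, TTB is a sequential-sampling process that stops at the first discriminating piece of evidence, while the rational rule accumulates all of it. The cue profiles and weights below are hypothetical.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Fast-and-frugal TTB: inspect cues in validity order and decide on
    the first cue that discriminates between the options."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return 'A' if cues_a[i] > cues_b[i] else 'B'
    return None  # no cue discriminates

def rational(cues_a, cues_b, weights):
    """'Rational' weighted-additive rule: accumulate all evidence, then decide."""
    score = sum(w * (a - b) for w, a, b in zip(weights, cues_a, cues_b))
    return None if score == 0 else ('A' if score > 0 else 'B')

# hypothetical binary cue profiles on which the two rules disagree:
# TTB stops at the most valid cue, which favors A; the weighted sum of
# all cues favors B
a, b = [1, 0, 0], [0, 1, 1]
ttb_choice = take_the_best(a, b, validity_order=[0, 1, 2])
rat_choice = rational(a, b, weights=[0.5, 0.4, 0.3])
```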

  19. Dynamic Cognitive Tracing: Towards Unified Discovery of Student and Cognitive Models

    ERIC Educational Resources Information Center

    Gonzalez-Brenes, Jose P.; Mostow, Jack

    2012-01-01

    This work describes a unified approach to two problems previously addressed separately in Intelligent Tutoring Systems: (i) Cognitive Modeling, which factorizes problem solving steps into the latent set of skills required to perform them; and (ii) Student Modeling, which infers students' learning by observing student performance. The practical…

  20. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
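
    The fixed polynomial basis a CPBUM-style network builds on can be sketched as Chebyshev interpolation at Chebyshev nodes. This illustrates the basis only; it is not the paper's network architecture or training procedure.

```python
import math

def cheb_fit(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1] at n Chebyshev
    nodes x_j = cos(pi*(j+1/2)/n), via the discrete cosine sum."""
    fv = [f(math.cos(math.pi * (j + 0.5) / n)) for j in range(n)]
    coeffs = []
    for k in range(n):
        c = 2.0 / n * sum(fv[j] * math.cos(math.pi * k * (j + 0.5) / n)
                          for j in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate sum_k c_k T_k(x) via the three-term recurrence."""
    t_prev, t_cur = 1.0, x
    total = coeffs[0] + coeffs[1] * x
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
        total += c * t_cur
    return total

coeffs = cheb_fit(math.exp, 10)  # 10 terms approximate exp on [-1, 1] closely
```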

  1. Unified phenology model with Bayesian calibration for several European species in Belgium

    NASA Astrophysics Data System (ADS)

    Fu, Y. S. H.; Demarée, G.; Hamdi, R.; Deckmyn, A.; Deckmyn, G.; Janssens, I. A.

    2009-04-01

    Plant phenology is a good bio-indicator of climate change, and interest in it has increased significantly. Many kinds of phenology models have been developed to analyze and predict the phenological response to climate change, and these models have been summarized into one unified model that can be applied to different species and environments. In our study, we selected seven European woody plant species (Betula verrucosa, Quercus robur pedunculata, Fagus sylvatica, Fraxinus excelsior, Symphoricarpus racemosus, Aesculus hippocastanum, Robinia pseudoacacia) occurring at five sites distributed across Belgium. For these sites and tree species, phenological observations such as bud burst were available for the period 1956-2002. We also obtained regionally downscaled climatic data for each of these sites, and combined both data sets to test the unified model. We used a Bayesian approach to generate distributions of model parameters from the observation data. In this poster presentation, we compare parameter distributions between different species and between different sites for individual species. The results of the unified model show good agreement with the observations, except for Fagus sylvatica. The failure to reproduce the bud burst data for Fagus sylvatica suggests that factors not included in the unified model affect the phenology of this species. The parameter series show differences among species, as we expected. However, they also differed strongly for the same species among sites. Further work should elucidate the mechanisms that explain why model parameters differ among species and sites.
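
    A miniature version of such a calibration: a growing-degree-day bud-burst model with a single forcing parameter, calibrated by a Metropolis sampler on synthetic data. The model form, fixed base temperature, observation noise, and temperature series are all assumptions for illustration; the paper's unified model and observations are richer.

```python
import math, random

def gdd_budburst_day(temps, t_base, f_crit):
    """Day index at which accumulated growing degree-days first reach f_crit."""
    acc = 0.0
    for day, t in enumerate(temps):
        acc += max(t - t_base, 0.0)
        if acc >= f_crit:
            return day
    return len(temps)

def metropolis_fcrit(temps_years, obs_days, t_base=5.0, sigma=2.0, n_iter=3000):
    """Metropolis sampling of the forcing requirement f_crit, with Gaussian
    observation error (sd sigma, in days) on the observed bud-burst day.
    t_base and sigma are assumed known; only f_crit is calibrated."""
    random.seed(0)
    def log_lik(f):
        return sum(-0.5 * ((gdd_budburst_day(T, t_base, f) - d) / sigma) ** 2
                   for T, d in zip(temps_years, obs_days))
    f = 100.0
    ll = log_lik(f)
    samples = []
    for _ in range(n_iter):
        prop = f + random.gauss(0.0, 5.0)  # random-walk proposal
        if prop > 0.0:
            llp = log_lik(prop)
            if math.log(random.random()) < llp - ll:
                f, ll = prop, llp
        samples.append(f)
    return samples

# synthetic warming-spring temperatures for six "years"; observations are
# generated with a true forcing requirement of 200 degree-days above 5 C
years = [[2.0 + 0.15 * d + 0.5 * ((y * 7 + d) % 5 - 2) for d in range(150)]
         for y in range(6)]
obs = [gdd_budburst_day(T, 5.0, 200.0) for T in years]
samples = metropolis_fcrit(years, obs)
```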

  2. Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model

    ERIC Educational Resources Information Center

    Helie, Sebastien; Sun, Ron

    2010-01-01

    This article proposes a unified framework for understanding creative problem solving, namely, the explicit-implicit interaction theory. This new theory of creative problem solving constitutes an attempt at providing a more unified explanation of relevant phenomena (in part by reinterpreting/integrating various fragmentary existing theories of…

  3. A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.

    PubMed

    Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao

    2017-06-16

    This paper establishes a Markov chain model as a unified framework for describing the evolution processes in complex networks. The unique feature of the proposed model is its capability of addressing the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are the classical models, including Poisson-, exponential-, and power-law-distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model for various applications, including citation analysis, online social networks, and vehicular network design, are also discussed in the paper.
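
    The flavor of these degree-distribution regimes can be reproduced with a toy growth process (not the paper's Markov chain): attaching new nodes preferentially by degree yields a heavy, power-law-like tail, while uniform attachment yields an exponential one.

```python
import random

def grow_network(n, pref=1.0, seed=42):
    """Toy growing network: each new node links to one existing node,
    chosen proportionally to degree with probability `pref` (via a
    degree-weighted urn) and uniformly otherwise. Returns the degree list."""
    random.seed(seed)
    degree = [1, 1]       # start from a single edge
    targets = [0, 1]      # urn: each node appears once per unit of degree
    for new in range(2, n):
        if random.random() < pref:
            t = random.choice(targets)           # preferential attachment
        else:
            t = random.randrange(len(degree))    # uniform attachment
        degree[t] += 1
        degree.append(1)
        targets.extend([t, new])
    return degree

deg = grow_network(5000)  # pure preferential attachment: hubs emerge
```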

  4. Creating a library holding group: an approach to large system integration.

    PubMed

    Huffman, Isaac R; Martin, Heather J; Delawska-Elliott, Basia

    2016-10-01

    Faced with resource constraints, many hospital libraries have considered joint operations. This case study describes how Providence Health & Services created a single group to provide library services. Using a holding group model, staff worked to unify more than 6,100 nonlibrary subscriptions and 14 internal library sites. Our library services grew by unifying 2,138 nonlibrary subscriptions and 11 library sites and hiring more library staff. We expanded access to 26,018 more patrons. A model with built-in flexibility allowed successful library expansion. Although challenges remain, this success points to a viable model of unified operations.

  5. Dimensional analysis yields the general second-order differential equation underlying many natural phenomena: the mathematical properties of a phenomenon's data plot then specify a unique differential equation for it.

    PubMed

    Kepner, Gordon R

    2014-08-27

    This study uses dimensional analysis to derive the general second-order differential equation that underlies numerous physical and natural phenomena described by common mathematical functions. It eschews assumptions about empirical constants and mechanisms. It relies only on the data plot's mathematical properties to provide the conditions and constraints needed to specify a second-order differential equation that is free of empirical constants for each phenomenon. A practical example of each function is analyzed using the general form of the underlying differential equation and the observable unique mathematical properties of each data plot, including boundary conditions. This yields a differential equation that describes the relationship among the physical variables governing the phenomenon's behavior. Complex phenomena such as the Standard Normal Distribution, the Logistic Growth Function, and Hill Ligand binding, which are characterized by data plots of distinctly different sigmoidal character, are readily analyzed by this approach. It provides an alternative, simple, unifying basis for analyzing each of these varied phenomena from a common perspective that ties them together and offers new insights into the appropriate empirical constants for describing each phenomenon.
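
    As a concrete instance, the logistic function satisfies the first-order law y' = y(1 - y) and hence the constant-free second-order equation y'' = y'(1 - 2y), which can be checked numerically by finite differences (the grid and step size below are arbitrary choices).

```python
import math

def logistic(u):
    return 1.0 / (1.0 + math.exp(-u))

def max_ode_residual(n=2000, h=1e-4):
    """Worst finite-difference residual of y'' - y'(1 - 2y) for the
    logistic function over [-6, 6]; should be near zero since the ODE
    follows from y' = y(1 - y) by differentiation."""
    worst = 0.0
    for i in range(n):
        x = -6.0 + 12.0 * i / (n - 1)
        d1 = (logistic(x + h) - logistic(x - h)) / (2.0 * h)
        d2 = (logistic(x + h) - 2.0 * logistic(x) + logistic(x - h)) / (h * h)
        worst = max(worst, abs(d2 - d1 * (1.0 - 2.0 * logistic(x))))
    return worst
```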

  6. ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method

    NASA Technical Reports Server (NTRS)

    Inampudi, Ravi

    2016-01-01

    This paper presents an evolutionary approach in simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of Training systems for the 21st Century simulator which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM becomes the special case EOM for ISS's double-gimbaled fixed speed CMGs. CMG simulation development using the agile development method is presented in which customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of the iteration a set of features implemented in that iteration are demonstrated to the flight controllers thus creating a short feedback loop and helping in creating adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental development approach of mathematical modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.

  7. Unified Model for Academic Competence, Social Adjustment, and Psychopathology.

    ERIC Educational Resources Information Center

    Schaefer, Earl S.; And Others

    A unified conceptual model is needed to integrate the extensive research on (1) social competence and adaptive behavior, (2) converging conceptualizations of social adjustment and psychopathology, and (3) emerging concepts and measures of academic competence. To develop such a model, a study was conducted in which teacher ratings were collected on…

  8. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
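
    A baseline that such methods extend: a bootstrap particle filter for the basic stochastic-volatility model x_t = a*x_{t-1} + s*v_t, y_t = b*exp(x_t/2)*e_t with standard normal v_t, e_t. The parameter values and data are synthetic, and innovations correlated in time, the paper's focus, are not modeled here.

```python
import math, random

def particle_filter_sv(obs, n_part=500, alpha=0.95, sigma=0.3, beta=1.0, seed=1):
    """Bootstrap particle filter (plain SMC) for a stochastic-volatility
    model; returns the sequence of posterior means of the latent
    log-volatility x_t given y_{1:t}."""
    random.seed(seed)
    # initialize from the stationary distribution of the AR(1) state
    parts = [random.gauss(0.0, sigma / math.sqrt(1.0 - alpha * alpha))
             for _ in range(n_part)]
    means = []
    for y in obs:
        # propagate through the state transition density
        parts = [alpha * x + sigma * random.gauss(0.0, 1.0) for x in parts]
        # weight by the observation likelihood N(y; 0, beta^2 * exp(x))
        logw = [-0.5 * (math.log(2.0 * math.pi * beta * beta) + x
                        + y * y / (beta * beta * math.exp(x))) for x in parts]
        mx = max(logw)
        w = [math.exp(l - mx) for l in logw]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        parts = random.choices(parts, weights=w, k=n_part)  # multinomial resampling
    return means

# synthetic data from the same model, then filtering
random.seed(7)
x, xs, ys = 0.0, [], []
for _ in range(100):
    x = 0.95 * x + 0.3 * random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(math.exp(x / 2.0) * random.gauss(0.0, 1.0))
est = particle_filter_sv(ys)
```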

  9. Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications.

    PubMed

    Fulop, Sean A; Fitz, Kelly

    2006-01-01

    A modification of the spectrogram (log magnitude of the short-time Fourier transform) to more accurately show the instantaneous frequencies of signal components was first proposed in 1976 [Kodera et al., Phys. Earth Planet. Inter. 12, 142-150 (1976)], and has been considered or reinvented a few times since but never widely adopted. This paper presents a unified theoretical picture of this time-frequency analysis method, the time-corrected instantaneous frequency spectrogram, together with detailed implementable algorithms comparing three published techniques for its computation. The new representation is evaluated against the conventional spectrogram for its superior ability to track signal components. The lack of a uniform framework for either mathematics or implementation details which has characterized the disparate literature on the schemes has been remedied here. Fruitful application of the method is shown in the realms of speech phonation analysis, whale song pitch tracking, and additive sound modeling.
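
    The quantity such a representation plots — an instantaneous frequency far finer than the STFT bin spacing — can be estimated from the phase advance between two nearby frames. This toy single-tone example uses illustrative parameters and is one simple route to a channelized IF estimate, not a reproduction of the three algorithms the paper compares:

```python
import numpy as np

# Channelized instantaneous-frequency estimate: the STFT phase advance between
# two frames, corrected by the bin-centre advance, pinpoints the tone frequency.

fs, n_fft, hop = 8000, 256, 16
f_true = 440.0
n = np.arange(n_fft + hop)
sig = np.exp(2j * np.pi * f_true * n / fs)      # analytic (complex) tone
win = np.hanning(n_fft)

X1 = np.fft.fft(win * sig[:n_fft])
X2 = np.fft.fft(win * sig[hop:hop + n_fft])
k = np.argmax(np.abs(X1[:n_fft // 2]))          # peak bin (437.5 Hz here)

# phase advance between frames, minus the advance the bin centre would produce
dphi = np.angle(X2[k]) - np.angle(X1[k]) - 2 * np.pi * k * hop / n_fft
dphi = (dphi + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi)
f_inst = k * fs / n_fft + dphi * fs / (2 * np.pi * hop)

print(f_inst)  # ~440.0 Hz, far finer than the 31.25 Hz bin spacing
```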

  10. Physiological utility theory and the neuroeconomics of choice

    PubMed Central

    Glimcher, Paul W.; Dorris, Michael C.; Bayer, Hannah M.

    2006-01-01

    Over the past half century, economists have responded to the challenges that Allais [Econometrica (1953) 53], Ellsberg [Quart. J. Econ. (1961) 643], and others raised to neoclassicism either by bounding the reach of economic theory or by turning to descriptive approaches. While both of these strategies have been enormously fruitful, neither has provided a clear programmatic approach that aspires to a complete understanding of human decision making, as neoclassicism did. There is, however, growing evidence that economists and neurobiologists are now beginning to reveal the physical mechanisms by which the human neuroarchitecture accomplishes decision making. Although in their infancy, these studies suggest both a single unified framework for understanding human decision making and a methodology for constraining the scope and structure of economic theory. Indeed, there is already evidence that these studies place mathematical constraints on existing economic models. This article reviews some of those constraints and suggests the outline of a neuroeconomic theory of decision. PMID:16845435

  11. Determination of the aerosol size distribution by analytic inversion of the extinction spectrum in the complex anomalous diffraction approximation.

    PubMed

    Franssens, G; De Maziére, M; Fonteyn, D

    2000-08-20

    A new derivation is presented for the analytical inversion of aerosol spectral extinction data to size distributions. It is based on the complex analytic extension of the anomalous diffraction approximation (ADA). We derive inverse formulas that are applicable to homogeneous nonabsorbing and absorbing spherical particles. Our method simplifies, generalizes, and unifies a number of results obtained previously in the literature. In particular, we clarify the connection between the ADA transform and the Fourier and Laplace transforms. Also, the effect of the particle refractive-index dispersion on the inversion is examined. It is shown that, when Lorentz's model is used for this dispersion, the continuous ADA inverse transform is mathematically well posed, whereas with a constant refractive index it is ill posed. Further, a condition is given, in terms of Lorentz parameters, for which the continuous inverse operator does not amplify the error.

  12. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
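
    The core identity — heat diffusion as a weighted eigenfunction expansion — is easy to illustrate on a toy domain. Here a 1-D path graph stands in for the surface mesh and its Laplacian eigenvectors for the Laplace-Beltrami eigenfunctions; the data and bandwidth are invented, and this is a sketch of the idea rather than the paper's implementation:

```python
import numpy as np

# Heat-kernel smoothing e^{-sL} f computed analytically as an eigenfunction
# expansion: project onto Laplacian eigenvectors, damp by exp(-s*lambda), resum.

rng = np.random.default_rng(1)
n = 200
t = np.linspace(0, 2 * np.pi, n)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(n)

# graph Laplacian of the path graph (stand-in for the mesh Laplace-Beltrami)
L = np.diag(np.r_[1.0, 2 * np.ones(n - 2), 1.0])
L -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
lam, U = np.linalg.eigh(L)          # eigenvalues and eigenfunctions

def heat_smooth(f, s):
    return U @ (np.exp(-s * lam) * (U.T @ f))   # e^{-sL} f, no time stepping

smoothed = heat_smooth(noisy, 3.0)
print(np.mean((smoothed - clean) ** 2), "<", np.mean((noisy - clean) ** 2))
```

Because the solution is written in the eigenbasis, there is no iterative diffusion and hence none of the accumulation of numerical error the abstract mentions.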

  13. Summing up the noise in gene networks

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2004-01-01

    Random fluctuations in genetic networks are inevitable as chemical reactions are probabilistic and many genes, RNAs and proteins are present in low numbers per cell. Such `noise' affects all life processes and has recently been measured using green fluorescent protein (GFP). Two studies show that negative feedback suppresses noise, and three others identify the sources of noise in gene expression. Here I critically analyse these studies and present a simple equation that unifies and extends both the mathematical and biological perspectives.
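
    The baseline such unifying noise equations build on is the birth-death process, whose stationary copy-number distribution is Poisson (Fano factor 1). A hedged toy Gillespie simulation with arbitrary rates, not the paper's analysis:

```python
import numpy as np

# Birth-death process: synthesis at rate k, degradation at rate g per molecule.
# Gillespie simulation; at steady state mean = k/g and variance/mean = 1.

rng = np.random.default_rng(2)
k, g = 10.0, 1.0
n, t = int(k / g), 0.0            # start at the steady-state mean
wsum = w1 = w2 = 0.0              # time-weighted moment accumulators

for _ in range(200_000):
    birth, death = k, g * n
    total = birth + death
    dt = rng.exponential(1.0 / total)
    if t > 50.0:                  # discard burn-in
        wsum += dt; w1 += dt * n; w2 += dt * n * n
    n += 1 if rng.random() < birth / total else -1
    t += dt

mean = w1 / wsum
fano = (w2 / wsum - mean**2) / mean
print(mean, fano)   # ~10 and ~1 for a Poisson steady state
```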

  14. Unified Science Approach K-12, Proficiency Levels 7-12.

    ERIC Educational Resources Information Center

    Oickle, Eileen M., Ed.

    Presented is the second part of the K-12 unified science materials used in the public schools of Anne Arundel County, Maryland. Detailed descriptions are made of the roles of students and teachers, purposes of the bibliography, major concepts in unified science, processes of inquiry, a scheme and model for scientific literacy, and program…

  15. Unified Science Approach K-12, Proficiency Levels 1-6.

    ERIC Educational Resources Information Center

    Oickle, Eileen M., Ed.

    Presented are first-revision materials of the K-12 unified science program implemented in the public schools of Anne Arundel County, Maryland. Detailed descriptions are given of the roles of students and teachers, purposes of bibliography, major concepts in unified science, processes of inquiry, scheme and model for scientific literacy, and…

  16. Mathematical Approach to Identification of Load Structure at the Nodes of the Distribution Grids 6-10 kV and 0.4 kV

    NASA Astrophysics Data System (ADS)

    Nizamutdinova, T.; Mukhlynin, N.

    2017-06-01

    Significantly increasing the energy efficiency of the full cycle of production, transmission, and distribution of electricity in grids should be based on the management of individual consumers of electricity. The existing energy supply systems based on the concept of «smart things» do not allow the technical structure of electricity consumption in the load nodes to be identified from the grid side, which makes energy-efficiency tasks more difficult to solve. To address this problem, this paper proposes using the wavelet transform to create a mathematical tool for monitoring the load composition in the nodes of 6-10 kV and 0.4 kV distribution grids. The authors have created unique wavelet-based functions for selected consumers, based on the current-consumption graphs of those power consumers. The possibility of identifying the characteristics of individual consumers of electricity within aggregate nodal load charts is shown in a test case. In the future, the creation of a unified technical and informational model of load control will make it possible to improve the economic efficiency not only of individual consumers but of the entire power supply system as a whole.
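
    One level of a Haar wavelet transform illustrates the kind of decomposition used to fingerprint load curves: the detail coefficients localize switching events while the approximation keeps the slow base load. The load curve below is synthetic and illustrative, not one of the authors' consumption graphs:

```python
import numpy as np

def haar_step(x):
    """Split an even-length signal into approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # local averages: slow load trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # local differences: switching events
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

hours = np.arange(48) / 2.0                      # half-hourly samples over a day
load = 2.0 + np.sin(2 * np.pi * hours / 24)      # smooth daily base load
load[21:27] += 1.5                               # an appliance switching on

a, d = haar_step(load)
print(np.max(np.abs(haar_inverse(a, d) - load)))  # perfect reconstruction
print(np.argmax(np.abs(d)))                       # detail peak marks the switch-on
```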

  17. Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent; Gettelman, Andrew; Morrison, Hugh

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.

  18. Sandia/Stanford Unified Creep Plasticity Damage Model for ANSYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, David M.; Vianco, Paul T.; Fossum, Arlo F.

    2006-09-03

    A unified creep plasticity (UCP) model was developed based upon the time-dependent and time-independent deformation properties of the 95.5Sn-3.9Ag-0.6Cu (wt.%) solder that were measured at Sandia. A damage parameter, D, was then added to the equations to develop the unified creep plasticity damage (UCPD) model. The parameter D was parameterized using data obtained at Sandia from isothermal fatigue experiments on a double-lap shear specimen. The software was validated against a BGA solder joint exposed to thermal cycling. The UCPD model was implemented as a subroutine for the ANSYS 8.1 finite element code.
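
    A generic unified creep-plasticity-with-damage update can be sketched as a stress-relaxation test: Hooke elasticity, Norton creep, and a scalar damage variable D that degrades stiffness. All constants are invented for illustration; this is NOT the calibrated Sandia/Stanford UCPD model or its ANSYS subroutine:

```python
import numpy as np

E, A, nexp, c = 1000.0, 1e-4, 2.0, 5.0   # modulus, creep constant, creep exponent, damage rate
eps_tot, dt = 0.01, 0.01                 # held total strain; explicit time step
eps_c, D = 0.0, 0.0
sigma = []

for _ in range(2000):                       # 20 s of relaxation
    s = E * (1.0 - D) * (eps_tot - eps_c)   # damaged elastic stress
    sigma.append(s)
    creep_rate = A * s**nexp                # Norton (power-law) creep
    eps_c += dt * creep_rate
    D += dt * c * creep_rate                # damage driven by creep straining
sigma = np.array(sigma)

print(sigma[0], sigma[-1])  # stress relaxes as creep strain and damage grow
```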

  19. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burgers equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
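
    The discrete-time idea is compact: once an impulse response is identified from one input/output record, the response to any other input is a cheap convolution. A hedged toy identification by least squares, with a made-up 4-tap system standing in for a CFD code's response:

```python
import numpy as np

# Identify h from input/output data via the convolution model
#   y[n] = sum_k h[k] * u[n-k]
# posed as a linear least-squares problem in the unknown taps h[k].

rng = np.random.default_rng(3)
h_true = np.array([1.0, 0.5, -0.2, 0.1])
u = rng.standard_normal(100)                 # excitation input
y = np.convolve(u, h_true)[:u.size]          # simulated response

# regression matrix whose columns are shifted copies of the input
U = np.column_stack([np.r_[np.zeros(k), u[:u.size - k]] for k in range(4)])
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)
print(h_est)  # recovers [1.0, 0.5, -0.2, 0.1]

# the identified response then predicts the output for ANY input, e.g. a gust
gust = np.ones(20)
print(np.convolve(gust, h_est)[:5])
```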

  1. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology, and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation: the idea is to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
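
    Model emulation in miniature: fit a cheap statistical model to a more expensive "physics" function, then use it as a fast surrogate. The target function and polynomial surrogate below are illustrative stand-ins, not a real parameterization scheme or the Met Office's ML models:

```python
import numpy as np

def physics(x):
    """Toy stand-in for an expensive sub-grid parameterization."""
    return np.sin(2 * x) + 0.3 * x**2

# "training": evaluate the expensive scheme offline, fit a cheap surrogate
x_train = np.linspace(-1, 1, 50)
emulator = np.poly1d(np.polyfit(x_train, physics(x_train), deg=9))

# "operations": query only the fast emulator
x_test = np.linspace(-1, 1, 501)
err = np.max(np.abs(emulator(x_test) - physics(x_test)))
print(err)   # the surrogate reproduces the scheme to high accuracy
```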

  2. A Note on the Problem of Proper Time in Weyl Space-Time

    NASA Astrophysics Data System (ADS)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-02-01

    We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been in debate since Weyl formulated his unified field theory for the first time. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint, in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition for proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable space-time as the most general structure that would be suitable to model space-time.
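
    The second clock effect can be stated compactly. The following is a standard sketch from the general literature on Weyl geometry (sign conventions vary), not equations reproduced from the paper:

```latex
% Weyl compatibility: the metric is preserved under parallel transport
% only up to a 1-form sigma,
\nabla_\alpha \, g_{\mu\nu} = \sigma_\alpha \, g_{\mu\nu} ,
% so the length l of a transported vector changes along a curve gamma as
l = l_0 \, \exp\!\Big( \tfrac{1}{2} \int_\gamma \sigma_\mu \, dx^\mu \Big).
% Two clocks that separate and meet again therefore tick at different rates
% (the second clock effect) unless the integral is path-independent, i.e.
% sigma_mu = \partial_\mu \phi  -- the Weyl-integrable case the paper singles out.
```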

  3. A model of olfactory associative learning

    NASA Astrophysics Data System (ADS)

    Tavoni, Gaia; Balasubramanian, Vijay

    We propose a mechanism, rooted in the known anatomy and physiology of the vertebrate olfactory system, by which presentations of rewarded and unrewarded odors lead to formation of odor-valence associations between piriform cortex (PC) and anterior olfactory nucleus (AON) which, in concert with neuromodulator release in the bulb, entrains a direct feedback from the AON representation of valence to a group of mitral cells (MCs). The model makes several predictions concerning MC activity during and after associative learning: (a) AON feedback produces synchronous divergent responses in a localized subset of MCs; (b) such divergence propagates to other MCs by lateral inhibition; (c) after learning, MC responses reconverge; (d) recall of the newly formed associations in the PC increases feedback inhibition in the MCs. These predictions have been confirmed in disparate experiments which we now explain in a unified framework. For cortex, our model further predicts that the response divergence developed during learning reshapes odor representations in the PC, with the effects of (a) decorrelating PC representations of odors with different valences, (b) increasing the size and reliability of those representations, and (c) enabling recall correction and redundancy reduction after learning. Simons Foundation for Mathematical Modeling of Living Systems.

  4. Modern Fysics Phallacies: The Best Way Not to Unify Physics

    NASA Astrophysics Data System (ADS)

    Beichler, James E.

    Too many physicists believe the `phallacy' that the quantum is more fundamental than relativity without any valid supporting evidence, so the earliest attempts to unify physics based on the continuity of relativity have been all but abandoned. This belief is probably due to the wealth of pro-quantum propaganda and general `phallacies in fysics' that were spread during the second quarter of the twentieth century, although serious `phallacies' exist throughout physics on both sides of the debate. Yet both approaches are basically flawed because both relativity and the quantum theory are incomplete and grossly misunderstood as they now stand. Had either side of the quantum versus relativity controversy sought common ground between the two worldviews, total unification would have been accomplished long ago. The point is, literally, that the discrete quantum, continuous relativity, basic physical geometry, theoretical mathematics and classical physics all share one common characteristic that has never been fully explored or explained - a paradoxical duality between a dimensionless point (discrete) and an extended length (continuity) in any dimension - and if the problem of unification is approached from an understanding of how this paradox relates to each paradigm, all of physics and indeed all of science could be unified under a single new theoretical paradigm.

  5. Unified Theory for Decoding the Signals from X-Ray Fluorescence and X-Ray Diffraction of Mixtures.

    PubMed

    Chung, Frank H

    2017-05-01

    For research and development or for solving technical problems, we often need to know the chemical composition of an unknown mixture, which is coded and stored in the signals of its X-ray fluorescence (XRF) and X-ray diffraction (XRD). X-ray fluorescence gives chemical elements, whereas XRD gives chemical compounds. The major problem in XRF and XRD analyses is the complex matrix effect. The conventional technique to deal with the matrix effect is to construct empirical calibration lines with standards for each element or compound sought, which is tedious and time-consuming. A unified theory of quantitative XRF analysis is presented here. The idea is to cancel the matrix effect mathematically. It turns out that the decoding equation for quantitative XRF analysis is identical to that for quantitative XRD analysis although the physics of XRD and XRF are fundamentally different. The XRD work has been published and practiced worldwide. The unified theory derives a new intensity-concentration equation of XRF, which is free from the matrix effect and valid for a wide range of concentrations. The linear decoding equation establishes a constant slope for each element sought, hence eliminating the work on calibration lines. The simple linear decoding equation has been verified by 18 experiments.
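
    The flavour of "cancelling the matrix effect mathematically" can be shown with a deliberately simplified toy: assume each element's intensity is I_i = k_i c_i M, with a constant per-element slope k_i and an unknown common matrix factor M; the closure condition sum(c_i) = 1 then eliminates M without calibration curves. The slopes and composition below are invented numbers, and this is an illustration of the principle, not the paper's intensity-concentration equation:

```python
import numpy as np

k = np.array([2.0, 0.8, 1.5])          # constant sensitivity slopes (hypothetical)
c_true = np.array([0.5, 0.3, 0.2])     # true mass fractions (hypothetical)
M = 0.37                               # matrix attenuation, unknown to the analyst

I = k * c_true * M                     # "measured" fluorescence intensities
c_est = (I / k) / np.sum(I / k)        # M cancels in the normalized ratio

print(c_est)   # recovers [0.5, 0.3, 0.2] with no calibration lines
```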

  6. Unified Program Design: Organizing Existing Programming Models, Delivery Options, and Curriculum

    ERIC Educational Resources Information Center

    Rubenstein, Lisa DaVia; Ridgley, Lisa M.

    2017-01-01

    A persistent problem in the field of gifted education has been the lack of categorization and delineation of gifted programming options. To address this issue, we propose Unified Program Design as a structural framework for gifted program models. This framework defines gifted programs as the combination of delivery methods and curriculum models.…

  7. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    PubMed

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogenous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
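
    The weakest-link statistics underlying such size effects are easy to demonstrate by Monte Carlo: with Weibull-distributed link strengths of modulus m, a chain fails at its weakest link, so mean strength scales as n^(-1/m) with length n. Parameters are illustrative, not the paper's DWCNT yarn data:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n_short, n_long, trials = 5.0, 1, 100, 4000

def chain_strength(n_links, trials):
    links = rng.weibull(m, size=(trials, n_links))  # unit-scale Weibull strengths
    return links.min(axis=1)                        # weakest link fails the chain

s_short = chain_strength(n_short, trials).mean()
s_long = chain_strength(n_long, trials).mean()

print(s_short, s_long, s_long / s_short)
# ratio ~ n_long**(-1/m) = 100**(-0.2) ~ 0.40: longer chains are weaker
```

The staggered architecture the paper analyzes weakens this classical size dependence, which is why the yarn strength was observed to be nearly gauge-length independent.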

  8. Unified Science Approach K-12, Proficiency Levels 13-21 and Semester Courses.

    ERIC Educational Resources Information Center

    Oickle, Eileen M., Ed.

    Presented is the third part of the K-12 unified science materials used in the public schools of Anne Arundel County, Maryland. Detailed descriptions are presented for the roles of students and teachers, purposes of bibliography, major concepts in unified science, processes of inquiry, scheme and model for scientific literacy, and program…

  9. Closed-form summations of Dowker's and related trigonometric sums

    NASA Astrophysics Data System (ADS)

    Cvijović, Djurdje; Srivastava, H. M.

    2012-09-01

    Through a unified and relatively simple approach which uses complex contour integrals, particularly convenient integration contours and calculus of residues, closed-form summation formulas for 12 very general families of trigonometric sums are deduced. One of them is a family of cosecant sums which was first summed in closed form in a series of papers by Dowker (1987 Phys. Rev. D 36 3095-101 1989 J. Math. Phys. 30 770-3 1992 J. Phys. A: Math. Gen. 25 2641-8), whose method has inspired our work in this area. All of the formulas derived here involve the higher-order Bernoulli polynomials. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’.
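
    The simplest member of the cosecant family has a well-known closed form that is easy to check numerically, sum_{k=1}^{n-1} csc^2(k*pi/n) = (n^2 - 1)/3 (this particular identity is quoted from the general literature, not copied from the paper's derivations):

```python
import numpy as np

def cosecant_sum(n):
    """Direct numerical evaluation of sum_{k=1}^{n-1} 1/sin^2(k*pi/n)."""
    k = np.arange(1, n)
    return np.sum(1.0 / np.sin(k * np.pi / n) ** 2)

for n in (2, 3, 10, 50):
    print(n, cosecant_sum(n), (n**2 - 1) / 3)   # the two columns agree
```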

  10. Geomatic Methods for the Analysis of Data in the Earth Sciences: Lecture Notes in Earth Sciences, Vol. 95

    NASA Astrophysics Data System (ADS)

    Pavlis, Nikolaos K.

    Geomatics is a trendy term that has been used in recent years to describe academic departments that teach and research theories, methods, algorithms, and practices used in processing and analyzing data related to the Earth and other planets. Naming trends aside, geomatics could be considered as the mathematical and statistical “toolbox” that allows Earth scientists to extract information about physically relevant parameters from the available data and accompany such information with some measure of its reliability. This book is an attempt to present the mathematical-statistical methods used in data analysis within various disciplines—geodesy, geophysics, photogrammetry and remote sensing—from a unifying perspective that inverse problem formalism permits. At the same time, it allows us to stretch the relevance of statistical methods in achieving an optimal solution.

  11. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    NASA Technical Reports Server (NTRS)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks.

  12. A Global 3D P-Velocity Model of the Earth’s Crust and Mantle for Improved Event Location

    DTIC Science & Technology

    2011-09-01

    … For our starting model, we use a simplified layered crustal model derived from the NNSA Unified model in Eurasia and the Crust 2.0 model everywhere else, over … geographic and radial dimensions … a tessellation with 4° triangles for the transition zone and upper mantle, and a third tessellation with variable resolution for all crustal layers. …

  13. Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.

    PubMed

    Calvin, Nicholas T; McDowell, J J

    2015-11-01

    For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.

  15. Seismic waves and earthquakes in a global monolithic model

    NASA Astrophysics Data System (ADS)

    Roubíček, Tomáš

    2018-03-01

    The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple elegant way several specialized models relevant to various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics is coupled to capture, e.g., (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way together with corresponding interfacial conditions implicitly involved, only when scaling its parameters appropriately in different Earth's layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for an illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains whose various scaling (limits) can lead to Boger's viscoelastic fluid or even to purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are counted in our global model, as well. The rigorous mathematical analysis concerning the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented.

  16. Physiologically motivated multiplex Kuramoto model describes phase diagram of cortical activity

    NASA Astrophysics Data System (ADS)

    Sadilek, Maximilian; Thurner, Stefan

    2015-05-01

    We derive a two-layer multiplex Kuramoto model from Wilson-Cowan type physiological equations that describe neural activity on a network of interconnected cortical regions. This is mathematically possible due to the existence of a unique, stable limit cycle, weak coupling, and inhibitory synaptic time delays. We study the phase diagram of this model numerically as a function of the inter-regional connection strength that is related to cerebral blood flow, and a phase shift parameter that is associated with synaptic GABA concentrations. We find three macroscopic phases of cortical activity: background activity (unsynchronized oscillations), epileptiform activity (highly synchronized oscillations) and resting-state activity (synchronized clusters/chaotic behaviour). Previous network models could hitherto not explain the existence of all three phases. We further observe a shift of the average oscillation frequency towards lower values together with the appearance of coherent slow oscillations at the transition from resting-state to epileptiform activity. This observation is fully in line with experimental data and could explain the influence of GABAergic drugs both on gamma oscillations and epileptic states. Compared to previous models for gamma oscillations and resting-state activity, the multiplex Kuramoto model not only provides a unifying framework, but also has a direct connection to measurable physiological parameters.
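The phase diagram described above can be illustrated with a minimal two-layer Kuramoto sketch. This is not the paper's physiologically derived model: the all-to-all topology, the Gaussian frequency spread, and the single shared phase-shift parameter alpha are illustrative assumptions. The Kuramoto order parameter r separates unsynchronized (background-like) from highly synchronized (epileptiform-like) regimes:

```python
import numpy as np

def simulate_multiplex_kuramoto(n=20, k=0.5, alpha=0.0, t_steps=2000, dt=0.01, seed=0):
    """Euler-integrate two all-to-all Kuramoto layers with phase-shifted
    inter-layer coupling (illustrative parameters, not fitted values)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)               # natural frequencies
    theta_e = rng.uniform(0, 2 * np.pi, n)        # layer 1 phases
    theta_i = rng.uniform(0, 2 * np.pi, n)        # layer 2 phases
    for _ in range(t_steps):
        # within-layer coupling plus phase-shifted cross-layer coupling
        de = omega + (k / n) * (np.sin(theta_e[None, :] - theta_e[:, None]).sum(1)
                                + np.sin(theta_i[None, :] - theta_e[:, None] - alpha).sum(1))
        di = omega + (k / n) * (np.sin(theta_i[None, :] - theta_i[:, None]).sum(1)
                                + np.sin(theta_e[None, :] - theta_i[:, None] - alpha).sum(1))
        theta_e = theta_e + dt * de
        theta_i = theta_i + dt * di
    # Kuramoto order parameter: 0 = unsynchronized, 1 = fully synchronized
    r = np.abs(np.mean(np.exp(1j * np.concatenate([theta_e, theta_i]))))
    return float(r)
```

Sweeping k and alpha in such a sketch is the generic analogue of the connection-strength/phase-shift plane explored in the paper.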

  17. The field representation language.

    PubMed

    Tsafnat, Guy

    2008-02-01

    The complexity of quantitative biomedical models, and the rate at which they are published, is increasing to a point where managing the information has become all but impossible without automation. International efforts are underway to standardise representation languages for a number of mathematical entities that represent a wide variety of physiological systems. This paper presents the Field Representation Language (FRL), a portable representation of values that change over space and/or time. FRL is an extensible mark-up language (XML) derivative with support for large numeric data sets in Hierarchical Data Format version 5 (HDF5). Components of FRL can be reused through uniform resource identifiers (URIs) that point to external resources such as custom basis functions, boundary geometries and numerical data. To demonstrate the use of FRL as an interchange format, we present three models that study hyperthermia cancer treatment: a fractal model of liver tumour microvasculature; a probabilistic model simulating the deposition of magnetic microspheres throughout it; and a finite element model of hyperthermic treatment. The microsphere distribution field was used to compute the heat generation rate field around the tumour. We used FRL to convey results from the microsphere simulation to the treatment model. FRL facilitated the conversion of the coordinate systems and approximated the integral over regions of the microsphere deposition field.

  18. Physiologically motivated multiplex Kuramoto model describes phase diagram of cortical activity.

    PubMed

    Sadilek, Maximilian; Thurner, Stefan

    2015-05-21

    We derive a two-layer multiplex Kuramoto model from Wilson-Cowan type physiological equations that describe neural activity on a network of interconnected cortical regions. This is mathematically possible due to the existence of a unique, stable limit cycle, weak coupling, and inhibitory synaptic time delays. We study the phase diagram of this model numerically as a function of the inter-regional connection strength that is related to cerebral blood flow, and a phase shift parameter that is associated with synaptic GABA concentrations. We find three macroscopic phases of cortical activity: background activity (unsynchronized oscillations), epileptiform activity (highly synchronized oscillations) and resting-state activity (synchronized clusters/chaotic behaviour). Previous network models could hitherto not explain the existence of all three phases. We further observe a shift of the average oscillation frequency towards lower values together with the appearance of coherent slow oscillations at the transition from resting-state to epileptiform activity. This observation is fully in line with experimental data and could explain the influence of GABAergic drugs both on gamma oscillations and epileptic states. Compared to previous models for gamma oscillations and resting-state activity, the multiplex Kuramoto model not only provides a unifying framework, but also has a direct connection to measurable physiological parameters.

  19. Resolving the biophysics of axon transmembrane polarization in a single closed-form description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melendy, Robert F., E-mail: rfmelendy@liberty.edu

    2015-12-28

    When a depolarizing event occurs across a cell membrane there is a remarkable change in its electrical properties. A complete depolarization event produces a considerably rapid increase in voltage that propagates longitudinally along the axon and is accompanied by changes in axial conductance. A dynamically changing magnetic field is associated with the passage of the action potential down the axon. Over 75 years of research has gone into the quantification of this phenomenon. To date, no unified model exists that resolves transmembrane polarization in a closed-form description. Here, a simple but formative description of propagated signaling phenomena in the membrane of an axon is presented in closed form. The focus is on using both biophysics and mathematical methods for elucidating the fundamental mechanisms governing transmembrane polarization. The results presented demonstrate how to resolve electromagnetic and thermodynamic factors that govern transmembrane potential. Computational results are supported by well-established quantitative descriptions of propagated signaling phenomena in the membrane of an axon. The findings demonstrate how intracellular conductance, the thermodynamics of magnetization, and current modulation function together in generating an action potential in a unified closed-form description. The work presented in this paper provides compelling evidence that three basic factors contribute to the propagated signaling in the membrane of an axon. It is anticipated this work will compel those in biophysics, physical biology, and in the computational neurosciences to probe deeper into the classical and quantum features of membrane magnetization and signaling. It is hoped that subsequent investigations of this sort will be advanced by the computational features of this model without having to resort to numerical methods of analysis.

  20. Terrestrial carbon storage dynamics: Chasing a moving target

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Shi, Z.; Jiang, L.; Xia, J.; Wang, Y.; Kc, M.; Liang, J.; Lu, X.; Niu, S.; Ahlström, A.; Hararuk, O.; Hastings, A.; Hoffman, F. M.; Medlyn, B. E.; Rasmussen, M.; Smith, M. J.; Todd-Brown, K. E.; Wang, Y.

    2015-12-01

    Terrestrial ecosystems have been estimated to absorb roughly 30% of anthropogenic CO2 emissions. Past studies have identified myriad drivers of terrestrial carbon storage changes, such as fire, climate change, and land use changes. Those drivers influence carbon storage change via diverse mechanisms, which have not been unified into a general theory that identifies what controls the direction and rate of terrestrial carbon storage dynamics. Here we propose a theoretical framework to quantitatively determine the response of terrestrial carbon storage to different exogenous drivers. With a combination of conceptual reasoning, mathematical analysis, and numeric experiments, we demonstrated that the maximal capacity of an ecosystem to store carbon is time-dependent and equals carbon input (i.e., net primary production, NPP) multiplied by residence time. The capacity is a moving target toward which carbon storage approaches (i.e., the direction of carbon storage change) but which it usually does not attain. The difference between the capacity and the carbon storage at a given time t is the unrealized carbon storage potential. The rate of the storage change is proportional to the magnitude of the unrealized potential. We also demonstrated that a parameter space of NPP, residence time, and carbon storage potential characterizes well the carbon storage dynamics quantified at six sites ranging from tropical forests to tundra and simulated by two versions (carbon-only and coupled carbon-nitrogen) of the Australian Community Atmosphere-Biosphere Land Ecosystem (CABLE) Model under three climate change scenarios (CO2 rising only, climate warming only, and RCP8.5). Overall this study reveals the unified mechanism underlying terrestrial carbon storage dynamics to guide transient traceability analysis of global land models and synthesis of empirical studies.
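The core relation of this framework, capacity equal to NPP times residence time with the storage rate proportional to the unrealized potential, can be sketched in a few lines (NPP trajectory, residence time, and units below are illustrative, not CABLE output):

```python
import numpy as np

def carbon_storage(npp, tau, x0=0.0, dt=1.0):
    """Carbon storage X chasing the moving capacity X_c(t) = NPP(t) * tau.

    dX/dt = NPP - X/tau = (X_c - X)/tau, i.e. the rate of change is
    proportional to the unrealized potential X_c - X.
    """
    x = np.empty(len(npp))
    xc = npp * tau                              # time-dependent capacity (moving target)
    x_prev = x0
    for t in range(len(npp)):
        x_prev += dt * (xc[t] - x_prev) / tau   # relax toward the current capacity
        x[t] = x_prev
    return x, xc
```

With NPP rising (e.g. under increasing CO2), the storage trajectory increases but lags the capacity, which is exactly the "chasing a moving target" behaviour in the title.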

  1. Phase noise suppression for coherent optical block transmission systems: a unified framework.

    PubMed

    Yang, Chuanchuan; Yang, Feng; Wang, Ziyu

    2011-08-29

    A unified framework for phase noise suppression is proposed in this paper, which can be applied to any coherent optical block transmission system, including coherent optical orthogonal frequency-division multiplexing (CO-OFDM), coherent optical single-carrier frequency-domain equalization block transmission (CO-SCFDE), etc. Based on adaptive modeling of phase noise, unified observation equations for different coherent optical block transmission systems are constructed, which lead to unified phase noise estimation and suppression. Numerical results demonstrate that the proposal is powerful in mitigating laser phase noise.
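As a toy illustration of block-wise phase-noise suppression (not the paper's adaptive-model framework), the common phase error of one block can be estimated from known pilot symbols and removed; the block length, pilot positions, and noise level below are assumptions:

```python
import numpy as np

def estimate_common_phase(rx_pilots, tx_pilots):
    """Estimate one block's common phase rotation by correlating the
    received pilots against the known transmitted pilots."""
    return np.angle(np.sum(rx_pilots * np.conj(tx_pilots)))

# Illustrative use: a QPSK block with a common phase error plus small noise.
rng = np.random.default_rng(1)
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 64)))   # QPSK symbols
phi = 0.3                                                            # true phase error
rx = tx * np.exp(1j * phi) + 0.01 * (rng.normal(size=64) + 1j * rng.normal(size=64))
phi_hat = estimate_common_phase(rx[:16], tx[:16])                    # first 16 as pilots
corrected = rx * np.exp(-1j * phi_hat)                               # de-rotate the block
```

This captures only the constant-per-block component of laser phase noise; the unified observation equations in the paper model its within-block evolution as well.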

  2. Pattern Formation

    NASA Astrophysics Data System (ADS)

    Hoyle, Rebecca

    2006-03-01

    From the stripes of a zebra and the spots on a leopard's back to the ripples on a sandy beach or desert dune, regular patterns arise everywhere in nature. The appearance and evolution of these phenomena has been a focus of recent research activity across several disciplines. This book provides an introduction to the range of mathematical theory and methods used to analyse and explain these often intricate and beautiful patterns. Bringing together several different approaches, from group theoretic methods to envelope equations and theory of patterns in large-aspect-ratio systems, the book also provides insight behind the selection of one pattern over another. Suitable as an upper-undergraduate textbook for mathematics students or as a fascinating, engaging, and fully illustrated resource for readers in physics and biology, Rebecca Hoyle's book, using a non-partisan approach, unifies a range of techniques used by active researchers in this growing field.
    - Accessible description of the mathematical theory behind fascinating pattern formation in areas such as biology, physics and materials science
    - Collects recent research for the first time in an upper-level textbook
    - Features a number of exercises - with solutions online - and worked examples

  3. Stochastic theory of nonequilibrium steady states and its applications. Part I

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Juan; Qian, Hong; Qian, Min

    2012-01-01

    The concepts of equilibrium and nonequilibrium steady states are introduced in the present review as mathematical concepts associated with stationary Markov processes. For both discrete stochastic systems with master equations and continuous diffusion processes with Fokker-Planck equations, the nonequilibrium steady state (NESS) is characterized in terms of several key notions which originate from nonequilibrium physics: time irreversibility, breakdown of detailed balance, free energy dissipation, and positive entropy production rate. After presenting this NESS theory in pedagogically accessible mathematical terms that require only a minimal amount of prerequisites in nonlinear differential equations and the theory of probability, it is applied, in Part I, to two widely studied problems: the stochastic resonance (also known as coherent resonance) and molecular motors (also known as Brownian ratchet). Although both areas have advanced rapidly on their own with a vast amount of literature, the theory of NESS provides them with a unifying mathematical foundation. Part II of this review contains applications of the NESS theory to processes from cellular biochemistry, ranging from enzyme catalyzed reactions, kinetic proofreading, to zeroth-order ultrasensitivity.
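The key NESS notions become concrete for a finite master-equation system: a driven three-state cycle breaks detailed balance and has a strictly positive entropy production rate, while symmetric rates give zero. A minimal sketch using the standard entropy-production formula for master equations (the rate values are illustrative):

```python
import numpy as np

def stationary_and_entropy_production(K):
    """Stationary distribution p and entropy production rate e_p of a
    continuous-time Markov chain with off-diagonal rates K[i, j] (i -> j).

    e_p = (1/2) * sum_{i,j} (p_i K_ij - p_j K_ji) * ln(p_i K_ij / (p_j K_ji))
    is >= 0 and vanishes exactly at detailed balance (equilibrium).
    """
    n = K.shape[0]
    Q = K.copy().astype(float)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))        # generator matrix
    # stationary p solves p Q = 0 with sum(p) = 1
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    ep = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and K[i, j] > 0 and K[j, i] > 0:
                ep += 0.5 * (p[i] * K[i, j] - p[j] * K[j, i]) * np.log(p[i] * K[i, j] / (p[j] * K[j, i]))
    return p, ep
```

For a cycle driven clockwise with rate 2 and counter-clockwise with rate 1, the stationary distribution is uniform but the circulating flux yields e_p = ln 2 > 0, i.e. a genuine NESS rather than an equilibrium.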

  4. A generalized theory of preferential linking

    NASA Astrophysics Data System (ADS)

    Hu, Haibo; Guo, Jinli; Liu, Xuan; Wang, Xiaofan

    2014-12-01

    There are diverse mechanisms driving the evolution of social networks. A key open question dealing with understanding their evolution is: How do various preferential linking mechanisms produce networks with different features? In this paper we first empirically study preferential linking phenomena in an evolving online social network, and find and validate a linear preference. We propose an analyzable model which captures the real growth process of the network and reveals the underlying mechanism dominating its evolution. Furthermore, based on preferential linking we propose a generalized model reproducing the evolution of online social networks, and present unified analytical results describing network characteristics for 27 preference scenarios. We study the mathematical structure of degree distributions and find that within the framework of preferential linking analytical degree distributions can only be the combinations of finite kinds of functions which are related to rational, logarithmic and inverse tangent functions, and that extremely complex network structure can emerge even for very simple sublinear preferential linking. This work not only provides a verifiable origin for the emergence of various network characteristics in social networks, but also bridges individuals' micro-level behaviors and the global organization of social networks.
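As a generic illustration of linear preferential linking (a Barabási-Albert-style growth sketch, not the paper's generalized 27-scenario model), new nodes can attach with probability proportional to degree; the repeated-node list makes the linear preference exact:

```python
import numpy as np

def linear_preferential_attachment(n_nodes=2000, m=2, seed=0):
    """Grow a network where each new node attaches m edges to existing
    nodes with probability proportional to their degree."""
    rng = np.random.default_rng(seed)
    # repeated-node list: each node appears once per unit of degree,
    # so uniform sampling from it implements linear preference
    stubs = [i for i in range(m + 1) for j in range(m + 1) if i != j]  # seed clique
    degrees = np.zeros(n_nodes, dtype=int)
    degrees[:m + 1] = m
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:                     # m distinct targets
            chosen.add(stubs[rng.integers(len(stubs))])
        for t in chosen:
            stubs.extend([new, t])                 # both endpoints gain degree
            degrees[new] += 1
            degrees[t] += 1
    return degrees
```

Linear preference produces the heavy-tailed degree distributions discussed in the abstract; sublinear or shifted preference kernels would be plugged in at the sampling step.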

  5. A Unified Model Exploring Parenting Practices as Mediators of Marital Conflict and Children's Adjustment

    ERIC Educational Resources Information Center

    Coln, Kristen L.; Jordan, Sara S.; Mercer, Sterett H.

    2013-01-01

    We examined positive and negative parenting practices and psychological control as mediators of the relations between constructive and destructive marital conflict and children's internalizing and externalizing problems in a unified model. Married mothers of 121 children between the ages of 6 and 12 completed questionnaires measuring marital…

  6. Can (should) theories of crowding be unified?

    PubMed Central

    Agaoglu, Mehmet N.; Chung, Susana T. L.

    2016-01-01

    Objects in clutter are difficult to recognize, a phenomenon known as crowding. There is little consensus on the underlying mechanisms of crowding, and a large number of models have been proposed. There have also been attempts at unifying the explanations of crowding under a single model, such as the weighted feature model of Harrison and Bex (2015) and the texture synthesis model of Rosenholtz and colleagues (Balas, Nakano, & Rosenholtz, 2009; Keshvari & Rosenholtz, 2016). The goal of this work was to test various models of crowding and to assess whether a unifying account can be developed. Adopting Harrison and Bex's (2015) experimental paradigm, we asked observers to report the orientation of two concentric C-stimuli. Contrary to the predictions of their model, observers' recognition accuracy was worse for the inner C-stimulus. In addition, we demonstrated that the stimulus paradigm used by Harrison and Bex has a crucial confounding factor, eccentricity, which limits its usage to a very narrow range of stimulus parameters. Nevertheless, reporting the orientations of both C-stimuli in this paradigm proved very useful in pitting different crowding models against each other. Specifically, we tested deterministic and probabilistic versions of averaging, substitution, and attentional resolution models as well as the texture synthesis model. None of the models alone was able to explain the entire set of data. Based on these findings, we discuss whether the explanations of crowding can (should) be unified. PMID:27936273

  7. An OpenACC-Based Unified Programming Model for Multi-accelerator Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.

  8. Modeling of nitrous oxide production by autotrophic ammonia-oxidizing bacteria with multiple production pathways.

    PubMed

    Ni, Bing-Jie; Peng, Lai; Law, Yingyu; Guo, Jianhua; Yuan, Zhiguo

    2014-04-01

    Autotrophic ammonia oxidizing bacteria (AOB) have been recognized as a major contributor to N2O production in wastewater treatment systems. However, so far N2O models have been proposed based on a single N2O production pathway by AOB, and there is still a lack of an effective approach for integrating these models. In this work, an integrated mathematical model that considers multiple production pathways is developed to describe N2O production by AOB. The pathways considered include the nitrifier denitrification pathway (N2O as the final product of AOB denitrification with NO2(-) as the terminal electron acceptor) and the hydroxylamine (NH2OH) pathway (N2O as a byproduct of incomplete oxidation of NH2OH to NO2(-)). In this model, the oxidation and reduction processes are modeled separately, with intracellular electron carriers introduced to link the two types of processes. The model is calibrated and validated using experimental data obtained with two independent nitrifying cultures. The model satisfactorily describes the N2O data from both systems. The model also predicts shifts of the dominating pathway at various dissolved oxygen (DO) and nitrite levels, consistent with previous hypotheses. This unified model is expected to enhance our ability to predict N2O production by AOB in wastewater treatment systems under varying operational conditions.

  9. On the reachable cycles via the unified perspective of cryocoolers. Part B: Cryocoolers with isentropic expanders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maytal, Ben-Zion; Pfotenhauer, John M.

    2014-01-29

    Solvay, Stirling and Gifford-McMahon types of cryocoolers employ an isentropic expander, which is their elementary mechanism for temperature reduction (following the unified model of cryocoolers described in a previous paper, Part A). Solvay and Stirling cryocoolers are driven by a larger temperature reduction than that of the Gifford-McMahon cycle, for a similar compression ratio. These cryocoolers are compared from the viewpoint of the unified model, in terms of the lowest attainable temperature, compression ratio, the size of the interchanger and the applied heat load.

  10. Unified viscoelasticity: Applying discrete element models to soft tissues with two characteristic times.

    PubMed

    Anssari-Benam, Afshin; Bucchi, Andrea; Bader, Dan L

    2015-09-18

    Discrete element models have often been the primary tool in investigating and characterising the viscoelastic behaviour of soft tissues. However, studies have employed varied configurations of these models, based on the choice of the number of elements and the utilised formation, for different subject tissues. This approach has yielded a diverse array of viscoelastic models in the literature, each seemingly resulting in different descriptions of viscoelastic constitutive behaviour and/or stress-relaxation and creep functions. Moreover, most studies do not apply a single discrete element model to characterise both stress-relaxation and creep behaviours of tissues. The underlying assumption for this disparity is the implicit perception that the viscoelasticity of soft tissues cannot be described by a universal behaviour or law, resulting in the lack of a unified approach in the literature based on discrete element representations. This paper derives the constitutive equation for different viscoelastic models applicable to soft tissues with two characteristic times. It demonstrates that all possible configurations exhibit a unified and universal behaviour, captured by a single constitutive relationship between stress, strain and time as: σ + Aσ̇ + Bσ̈ = Pε̇ + Qε̈. The ensuing stress-relaxation G(t) and creep J(t) functions are also unified and universal, derived as [Formula: see text] and J(t) = c2 + (ε0 − c2)e^(−(P/Q)t) + (σ0/P)t, respectively. Application of these relationships to experimental data is illustrated for various tissues including the aortic valve, ligament and cerebral artery. The unified model presented in this paper may be applied to all tissues with two characteristic times, obviating the need for employing varied configurations of discrete element models in preliminary investigation of the viscoelastic behaviour of soft tissues. Copyright © 2015 Elsevier Ltd. All rights reserved.
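Under a step strain the strain-rate terms on the right-hand side of the unified constitutive equation vanish, so stress relaxation reduces to the damped second-order ODE B σ̈ + A σ̇ + σ = 0. A minimal numerical sketch (the coefficients A and B are illustrative, chosen so the response is overdamped, i.e. A² > 4B):

```python
import numpy as np

def relax_stress(A, B, sigma0, sigma_dot0=0.0, dt=1e-3, t_end=5.0):
    """Stress relaxation of sigma + A*sigma' + B*sigma'' = P*eps' + Q*eps''
    after a step strain (eps' = 0), integrated by semi-implicit Euler."""
    n = int(t_end / dt)
    s = np.empty(n)
    sigma, v = sigma0, sigma_dot0
    for k in range(n):
        v += dt * (-(A * v + sigma) / B)    # sigma'' = -(A*sigma' + sigma)/B
        sigma += dt * v
        s[k] = sigma
    return s
```

The two negative roots of B s² + A s + 1 = 0 are the model's two characteristic (relaxation) times, matching the "two characteristic times" of the title.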

  11. Control of Distributed Parameter Systems

    DTIC Science & Technology

    1990-08-01

    variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a...distribution unlimited. A unified approximation framework for parameter estimation in general linear PDE models has been completed...unified approximation framework for parameter estimation in general linear PDE models. This framework has provided the theoretical basis for a number of

  12. A unified computational model of the development of object unity, object permanence, and occluded object trajectory perception.

    PubMed

    Franz, A; Triesch, J

    2010-12-01

    The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumbser, Michael, E-mail: michael.dumbser@unitn.it; Peshkov, Ilya, E-mail: peshkov@math.nsc.ru; Romenski, Evgeniy, E-mail: evrom@math.nsc.ru

    Highlights: • High order schemes for a unified first order hyperbolic formulation of continuum mechanics. • The mathematical model applies simultaneously to fluid mechanics and solid mechanics. • Viscous fluids are treated in the frame of hyper-elasticity as generalized visco-plastic solids. • Formal asymptotic analysis reveals the connection with the Navier–Stokes equations. • The distortion tensor A in the model appears to be well-suited for flow visualization. - Abstract: This paper is concerned with the numerical solution of the unified first order hyperbolic formulation of continuum mechanics recently proposed by Peshkov and Romenski [110], further denoted as HPR model. In that framework, the viscous stresses are computed from the so-called distortion tensor A, which is one of the primary state variables in the proposed first order system. A very important key feature of the HPR model is its ability to describe at the same time the behavior of inviscid and viscous compressible Newtonian and non-Newtonian fluids with heat conduction, as well as the behavior of elastic and visco-plastic solids. Actually, the model treats viscous and inviscid fluids as generalized visco-plastic solids. This is achieved via a stiff source term that accounts for strain relaxation in the evolution equations of A. Also heat conduction is included via a first order hyperbolic system for the thermal impulse, from which the heat flux is computed. The governing PDE system is hyperbolic and fully consistent with the first and the second principle of thermodynamics. It is also fundamentally different from first order Maxwell–Cattaneo-type relaxation models based on extended irreversible thermodynamics. The HPR model represents therefore a novel and unified description of continuum mechanics, which applies at the same time to fluid mechanics and solid mechanics.
In this paper, the direct connection between the HPR model and the classical hyperbolic–parabolic Navier–Stokes–Fourier theory is established for the first time via a formal asymptotic analysis in the stiff relaxation limit. From a numerical point of view, the governing partial differential equations are very challenging, since they form a large nonlinear hyperbolic PDE system that includes stiff source terms and non-conservative products. We apply the successful family of one-step ADER–WENO finite volume (FV) and ADER discontinuous Galerkin (DG) finite element schemes to the HPR model in the stiff relaxation limit, and compare the numerical results with exact or numerical reference solutions obtained for the Euler and Navier–Stokes equations. Numerical convergence results are also provided. To show the universality of the HPR model, the paper is rounded off with an application to wave propagation in elastic solids, for which one only needs to switch off the strain relaxation source term in the governing PDE system. We provide various examples showing that for the purpose of flow visualization, the distortion tensor A seems to be particularly useful.

  14. Finding the way with a noisy brain.

    PubMed

    Cheung, Allen; Vickerstaff, Robert

    2010-11-11

    Successful navigation is fundamental to the survival of nearly every animal on earth, and is achieved by nervous systems of vastly different sizes and characteristics. Yet surprisingly little is known of the detailed neural circuitry, in any species, that can accurately represent space for navigation. Path integration is one of the oldest and most ubiquitous navigation strategies in the animal kingdom. Despite a plethora of computational models, from equational to neural network form, there is currently no consensus, even in principle, on how this important phenomenon occurs neurally. Recently, all path integration models were examined according to a novel, unifying classification system. Here we combine this theoretical framework with recent insights from directed walk theory, and develop an intuitive yet mathematically rigorous proof that only one class of neural representation of space can tolerate noise during path integration. This result suggests many existing models of path integration are not biologically plausible due to their intolerance to noise. This surprising result imposes significant computational limitations on the neurobiological spatial representation of all successfully navigating animals, irrespective of species. Indeed, noise-tolerance may be an important functional constraint on the evolution of neuroarchitectural plans in the animal kingdom.
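As a toy illustration of path integration under noise (the Cartesian home-vector representation, step structure, and noise model here are assumptions for illustration, not the paper's proof), an integrator accumulates a home vector from noisy heading updates:

```python
import numpy as np

def path_integrate(steps, heading_noise=0.05, seed=0):
    """Accumulate a position estimate in a fixed Cartesian frame from a
    sequence of (step_length, turn_angle) movements with noisy headings."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(2)
    heading = 0.0
    for step_len, turn in steps:
        heading += turn + rng.normal(0.0, heading_noise)  # noisy compass update
        pos += step_len * np.array([np.cos(heading), np.sin(heading)])
    return pos  # the home vector is -pos
```

With zero noise a closed path returns exactly to the origin; with heading noise the home-vector error grows with path length, which is the kind of accumulation whose representation-dependence the paper analyses.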

  15. Bose Condensation and Lasing in Optical Microstructures - Part 1

    NASA Astrophysics Data System (ADS)

    Szymanska, M. H.

    2002-04-01

    In the first part of this thesis I study the intermediate regime between ordinary lasing and a BEC of exciton polaritons. I take into account the fermionic structure of polaritons, treating the excitons as two-level systems coupled to a single mode in a microcavity. I introduce decoherence and dissipation processes to this system. Employing many-body Green function techniques, similar to those used by Abrikosov and Gor'kov in their theory of gapless superconductivity, I provide a mathematical structure that unifies models of lasers with models of condensates. This allows me to study the stability of the polariton condensate with respect to decoherence processes and the crossover between the polariton condensate and the laser. I give detailed indications of a regime in which the condensate should be observed to guide experimental work and show how to distinguish the Bose condensate from a laser. The second part of this thesis is concerned with properties of excitons and modelling of excitonic lasing in quasi-one-dimensional quantum wires. I develop a very general numerical method of calculating the properties of wires with different shapes and materials. Using this method I study the properties of a very wide range of T-shaped quantum wires.

  16. The Chern-Simons current in time series of knots and links in proteins

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Pincak, Richard

    2018-06-01

    A superspace model of knots and links for DNA time series data is proposed to take into account the feedback loop from docking to undocking state of protein-protein interactions. In particular, the direction of interactions between the 8 hidden states of DNA is considered. It is an E8 × E8 unified spin model where the genotype, from the active and inactive sides of the DNA time series data, can be considered for any living organism. The mathematical model is borrowed from loop-quantum gravity and adapted to biology. It is used to derive equations for gene expression describing transitions from ground to excited states, and for the 8 coupling states between geneon and anti-geneon transposon and retrotransposon in trash DNA. Specifically, we adopt a modified Grothendieck cohomology and a modified Khovanov cohomology for biology. The result is a Chern-Simons current in (8 + 3) extradimensions of a given unoriented supermanifold with ghost fields of protein structures. The 8 dimensions come from the 8 hidden states of the spinor field of the genetic code. The extradimensions come from the 3 types of principal fiber bundle in the secondary protein structure.

  17. A 2D nonlinear multiring model for blood flow in large elastic arteries

    NASA Astrophysics Data System (ADS)

    Ghigo, Arthur R.; Fullana, Jose-Maria; Lagrée, Pierre-Yves

    2017-12-01

    In this paper, we propose a two-dimensional nonlinear "multiring" model to compute blood flow in axisymmetric elastic arteries. This model is designed to overcome the numerical difficulties of three-dimensional fluid-structure interaction simulations of blood flow without using the over-simplifications necessary to obtain one-dimensional blood flow models. This multiring model is derived by integrating over concentric rings of fluid the simplified long-wave Navier-Stokes equations coupled to an elastic model of the arterial wall. The resulting system of balance laws provides a unified framework in which both the motion of the fluid and the displacement of the wall are dealt with simultaneously. The mathematical structure of the multiring model allows us to use a finite volume method that guarantees the conservation of mass and the positivity of the numerical solution and can deal with nonlinear flows and large deformations of the arterial wall. We show that the finite volume numerical solution of the multiring model provides at a reasonable computational cost an asymptotically valid description of blood flow velocity profiles and other averaged quantities (wall shear stress, flow rate, ...) in large elastic and quasi-rigid arteries. In particular, we validate the multiring model against well-known solutions such as the Womersley or the Poiseuille solutions as well as against steady boundary layer solutions in quasi-rigid constricted and expanded tubes.
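The Poiseuille solution used as a validation target can itself be checked in a few lines. This is the generic textbook relation, not the multiring scheme: the parabolic profile u(r) = 2 u_mean (1 − (r/R)²) averages to u_mean over the vessel cross-section.

```python
import numpy as np

def poiseuille_profile(r, R, u_mean):
    """Axisymmetric Poiseuille velocity profile with a given cross-sectional mean."""
    return 2.0 * u_mean * (1.0 - (r / R) ** 2)

# Check the mean: integrate u over the disc (weight 2*pi*r) and divide by the area.
R, u_mean = 1.0, 1.0
r = np.linspace(0.0, R, 201)
u = poiseuille_profile(r, R, u_mean)
f = u * 2.0 * np.pi * r
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoidal rule
mean_u = integral / (np.pi * R**2)
```

Comparing a scheme's computed radial velocity profile against this closed form (and against Womersley's oscillatory solution) is the kind of validation the abstract describes.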

  18. The problem of the Grand Unification Theory

    NASA Astrophysics Data System (ADS)

    Treder, H.-J.

    The evolution and fundamental questions of physical theories unifying the gravitational, electromagnetic, and quantum-mechanical interactions are explored, taking Pauli's aphorism as a motto: 'Let no man join what God has cast asunder.' The contributions of Faraday and Riemann, Lorentz, Einstein, and others are discussed, and the criterion of Pauli is applied to Grand Unification Theories (GUT) in general and to those seeking to link gravitation and electromagnetism in particular. Formal mathematical symmetry principles must be shown to have real physical relevance by predicting measurable phenomena not explainable without a GUT; these phenomena must be macroscopic because gravitational effects are too weak to be measured on the microscopic level. It is shown that empirical and theoretical studies of 'gravomagnetism', 'gravoelectricity', or possible links between gravoelectricity and the cosmic baryon asymmetry eventually lead back to basic questions which appear philosophical or purely mathematical but actually challenge physics to seek verifiable answers.

  19. PIM Pedagogy: Toward a Loosely Unified Model for Teaching and Studying Comics and Graphic Novels

    ERIC Educational Resources Information Center

    Carter, James B.

    2015-01-01

    The article debuts and explains "PIM" pedagogy, a construct for teaching comics at the secondary- and post-secondary levels and for deep reading/studying comics. The PIM model for considering comics is actually based in major precepts of education studies, namely constructivist foundations of learning, and loosely unifies constructs…

  20. Integration Defended: Berkeley Unified's Strategy to Maintain School Diversity

    ERIC Educational Resources Information Center

    Chavez, Lisa; Frankenberg, Erica

    2009-01-01

    In June 2007, the Supreme Court limited the tools that school districts could use to voluntarily integrate schools. In the aftermath of the decision, educators around the country have sought models of successful plans that would also be legal. One such model may be Berkeley Unified School District's (BUSD) plan. Earlier this year, the California…

  1. Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Ocampo, Cesar; Senent, Juan S.; Williams, Jacob

    2010-01-01

    The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, is discussed.

  2. Electron heating in a Monte Carlo model of a high Mach number, supercritical, collisionless shock

    NASA Technical Reports Server (NTRS)

    Ellison, Donald C.; Jones, Frank C.

    1987-01-01

    Preliminary work in the investigation of electron injection and acceleration at parallel shocks is presented. A simple model of electron heating that is derived from a unified shock model which includes the effects of an electrostatic potential jump is described. The unified shock model provides a kinetic description of the injection and acceleration of ions and a fluid description of electron heating at high Mach number, supercritical, and parallel shocks.

  3. Dynamics and function of the tear film in relation to the blink cycle.

    PubMed

    Braun, R J; King-Smith, P E; Begley, C G; Li, Longfei; Gewecke, N R

    2015-03-01

    Great strides have recently been made in quantitative measurements of tear film thickness and thinning, mathematical modeling thereof and linking these to sensory perception. This paper summarizes recent progress in these areas and reports on new results. The complete blink cycle is used as a framework that attempts to unify the results that are currently available. Understanding of tear film dynamics is aided by combining information from different imaging methods, including fluorescence, retroillumination and a new high-speed stroboscopic imaging system developed for studying the tear film during the blink cycle. During the downstroke of the blink, lipid is compressed as a thick layer just under the upper lid which is often released as a narrow thick band of lipid at the beginning of the upstroke. "Rippling" of the tear film/air interface due to motion of the tear film over the corneal surface, somewhat like the flow of water in a shallow stream over a rocky streambed, was observed during lid motion and treated theoretically here. New mathematical predictions of tear film osmolarity over the exposed ocular surface and in tear breakup are presented; the latter is closely linked to new in vivo observations. Models include the effects of evaporation, osmotic flow through the cornea and conjunctiva, quenching of fluorescence, tangential flow of aqueous tears and diffusion of tear solutes and fluorescein. These and other combinations of experiment and theory increase our understanding of the fluid dynamics of the tear film and its potential impact on the ocular surface. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Dynamics and function of the tear film in relation to the blink cycle

    PubMed Central

    Braun, R.J.; King-Smith, P.E.; Begley, C.G.; Li, Longfei; Gewecke, N.R.

    2014-01-01

    Great strides have recently been made in quantitative measurements of tear film thickness and thinning, mathematical modeling thereof and linking these to sensory perception. This paper summarizes recent progress in these areas and reports on new results. The complete blink cycle is used as a framework that attempts to unify the results that are currently available. Understanding of tear film dynamics is aided by combining information from different imaging methods, including fluorescence, retroillumination and a new high-speed stroboscopic imaging system developed for studying the tear film during the blink cycle. During the downstroke of the blink, lipid is compressed as a thick layer just under the upper lid which is often released as a narrow thick band of lipid at the beginning of the upstroke. “Rippling” of the tear film/air interface due to motion of the tear film over the corneal surface, somewhat like the flow of water in a shallow stream over a rocky streambed, was observed during lid motion and treated theoretically here. New mathematical predictions of tear film osmolarity over the exposed ocular surface and in tear breakup are presented; the latter is closely linked to new in vivo observations. Models include the effects of evaporation, osmotic flow through the cornea and conjunctiva, quenching of fluorescence, tangential flow of aqueous tears and diffusion of tear solutes and fluorescein. These and other combinations of experiment and theory increase our understanding of the fluid dynamics of the tear film and its potential impact on the ocular surface. PMID:25479602

  5. Models and methods in delay discounting.

    PubMed

    Tesch, Aaron D; Sanfey, Alan G

    2008-04-01

    Delay discounting (DD) is a term typically used to describe the devaluation of rewards over time, and much research across a wide variety of domains has illustrated that people in general prefer a smaller reward delivered soon as opposed to a larger reward delivered at a later stage. Despite numerous attempts, a single unified model of DD that accounts for the varied pattern of results typically observed has been elusive. One of the difficulties in deriving a unified model is the presence of many framing and context effects, situations in which changing, apparently irrelevant, aspects of the choice scenarios lead to different selections. Additionally, different paradigms of DD research use quite different methodology, which poses challenges for a unified model. This chapter describes some of the difficulties in creating a single DD model and suggests some experiments that would help integrate different paradigms to create a clearer picture of DD.
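
    The abstract does not commit to a functional form, but the two forms most often fit in the DD literature are exponential and hyperbolic (Mazur-style) discounting; much of the modeling difficulty described above stems from data favoring the hyperbolic form. A minimal comparative sketch, with the discount rate k chosen arbitrarily for illustration:

```python
import math

def exponential_discount(amount, delay, k):
    """V = A * exp(-k * D): constant discount rate per unit delay."""
    return amount * math.exp(-k * delay)

def hyperbolic_discount(amount, delay, k):
    """V = A / (1 + k * D): Mazur's hyperbolic form."""
    return amount / (1.0 + k * delay)

A, k = 100.0, 0.05
short = [f(A, 1.0, k) for f in (exponential_discount, hyperbolic_discount)]
long_ = [f(A, 100.0, k) for f in (exponential_discount, hyperbolic_discount)]
```

    With the same k, the hyperbolic form retains far more value at long delays than the exponential form, which is one way preference reversals are modeled.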

  6. Unified constitutive material models for nonlinear finite-element structural analysis. [gas turbine engine blades and vanes

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Laflen, J. H.; Lindholm, U. S.

    1985-01-01

    Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.

  7. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    NASA Astrophysics Data System (ADS)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of threshold voltage shift (ΔVTH) under long-term positive-bias-stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔVTH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
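
    For reference, the traditional single-stage stretched-exponential form that the two-stage model extends is usually written ΔVTH(t) = ΔV0 * {1 - exp[-(t/τ)^β]}. A minimal sketch of that baseline form follows; the parameter values are hypothetical, chosen only to illustrate the saturating shape.

```python
import math

def stretched_exponential_dvth(t, dv0, tau, beta):
    """Single-stage stretched-exponential threshold-voltage shift:
    dv0 * (1 - exp(-(t / tau)**beta))."""
    return dv0 * (1.0 - math.exp(-((t / tau) ** beta)))

# Hypothetical parameters for illustration only.
dv0, tau, beta = 3.0, 1e4, 0.4   # V, s, dimensionless
stress_times = (1e2, 1e3, 1e4, 1e5)
shifts = [stretched_exponential_dvth(t, dv0, tau, beta) for t in stress_times]
```

    The shift grows monotonically with stress time and saturates toward ΔV0; at t = τ it equals ΔV0 * (1 - 1/e) regardless of β.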

  8. Parametric models to relate spike train and LFP dynamics with neural information processing.

    PubMed

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set in which a significant correlation had been obtained only through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial-by-trial behavioral performance than existing models of neural information processing. Our results highlight the utility of the unified modeling framework for characterizing spike-LFP recordings obtained during behavioral performance.

  9. A unified model of the hierarchical and stochastic theories of gastric cancer

    PubMed Central

    Song, Yanjing; Wang, Yao; Tong, Chuan; Xi, Hongqing; Zhao, Xudong; Wang, Yi; Chen, Lin

    2017-01-01

    Gastric cancer (GC) is a life-threatening disease worldwide. Despite remarkable advances in treatments for GC, it is still fatal to many patients due to cancer progression, recurrence and metastasis. Regarding the development of novel therapeutic techniques, many studies have focused on the biological mechanisms that initiate tumours and cause treatment resistance. Tumours have traditionally been considered to result from somatic mutations, either via clonal evolution or through a stochastic model. However, emerging evidence has characterised tumours using a hierarchical organisational structure, with cancer stem cells (CSCs) at the apex. Both stochastic and hierarchical models are reasonable systems that have been hypothesised to describe tumour heterogeneity. Although each model alone inadequately explains tumour diversity, the two models can be integrated to provide a more comprehensive explanation. In this review, we discuss existing evidence supporting a unified model of gastric CSCs, including the regulatory mechanisms of this unified model in addition to the current status of stemness-related targeted therapy in GC patients. PMID:28301871

  10. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
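
    The adaptive robust Kalman filter (ARKF) and the grid-frame error model are specified in the paper itself; as background for the correction scheme described above, the standard Kalman measurement update it builds on can be sketched in scalar form. All values below are hypothetical and purely illustrative.

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman measurement update: fuse a predicted state x_prior with
    variance p_prior against a measurement z with noise variance r."""
    gain = p_prior / (p_prior + r)            # Kalman gain in [0, 1]
    x_post = x_prior + gain * (z - x_prior)   # corrected state estimate
    p_post = (1.0 - gain) * p_prior           # reduced posterior variance
    return x_post, p_post

# Hypothetical numbers: uncertain prediction (variance 4), precise measurement.
x, p = kalman_update(x_prior=10.0, p_prior=4.0, z=12.0, r=1.0)
```

    An adaptive robust variant such as the ARKF essentially inflates r (deweighting the measurement) when innovations suggest the DVL measurement is faulty, which is what makes the hybrid-correction scheme resistant to measurement errors.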

  11. New Model of Mobile Learning for the High School Students Preparing for the Unified State Exam

    ERIC Educational Resources Information Center

    Khasianov, Airat; Shakhova, Irina

    2017-01-01

    In this paper we study a new model of mobile learning for Unified State Exam ("USE") preparation in the Russian Federation. The "USE" is the test school graduates need to pass in order to obtain the Russian matura. In recent years the efforts teachers put into preparing their students for the "USE" diminish how well the…

  12. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  13. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.

    PubMed

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration.

  14. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    PubMed Central

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Objective Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. Materials and methods We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Results Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration. PMID:23571850

  15. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...

    2015-06-30

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
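
    The Monte Carlo interface described above can be illustrated schematically: draw subcolumn samples from an assumed subgrid distribution, run each sample through the microphysics, and average. This is a toy sketch, not the CLUBB/CAM implementation; the Gaussian subgrid PDF and the quadratic "process rate" are assumptions made only to show why sampling matters for nonlinear microphysics.

```python
import random

def toy_microphysics(cloud_liquid):
    """Toy nonlinear process rate (hypothetical): quadratic in cloud liquid."""
    return max(cloud_liquid, 0.0) ** 2

def subcolumn_average(mean_ql, sigma_ql, n_samples, seed=0):
    """Monte Carlo estimate of the grid-mean process rate from subgrid variability."""
    rng = random.Random(seed)
    samples = [rng.gauss(mean_ql, sigma_ql) for _ in range(n_samples)]
    return sum(toy_microphysics(q) for q in samples) / n_samples

rate_with_variability = subcolumn_average(0.2, 0.1, 5000)
rate_from_mean_only = toy_microphysics(0.2)   # ignores subgrid variability
```

    Because the toy rate is convex, averaging over subgrid variability yields a larger grid-mean rate than evaluating the microphysics at the grid-mean state, which is the basic motivation for feeding sampled subcolumns rather than grid means into the microphysics.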

  16. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.

  17. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    PubMed

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.

  18. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Wagner, William (Technical Monitor); Cranmer, Steven R.

    2005-01-01

    The paper discusses the following: 1. No-cost Extension. The no-cost extension is required to complete the work on the unified model codes (both hydrodynamic and kinetic Monte Carlo) as described in the initial proposal and previous annual reports. 2. Scientific Accomplishments during the Report Period. We completed a comprehensive model of Alfvén wave reflection that spans the full distance from the photosphere to the distant heliosphere. 3. Comparison of Accomplishments with Proposed Goals. The proposal contained two specific objectives for Year 3: (1) to complete the unified model code, and (2) to apply it to various kinds of coronal holes (and polar plumes within coronal holes). Although the anticipated route toward these two final goals has changed (see accomplishments 2a and 2b above), they remain the major milestones for the extended period of performance. Accomplishments 1a and 1c were necessary prerequisites for the derivation of "physically relevant transport and mode-coupling terms" for the unified model codes (as stated in the proposal Year 3 goals). We have fulfilled the proposed "core work" to study 4 general types of physical processes; in previous years we studied turbulence, mode coupling (i.e., non-WKB reflection), and kinetic wave damping, and accomplishment 1b provides the fourth topic: nonlinear steepening.

  19. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    PubMed

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care.

  20. Dark Matter from SUGRA GUTs: mSUGRA, NUSUGRA and Yukawa-unified SUGRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Howard

    2009-09-08

    Gravity-mediated SUSY breaking models with R-parity conservation give rise to dark matter in the universe. I review neutralino dark matter in the minimal supergravity model (mSUGRA), models with non-universal soft SUSY breaking terms (NUSUGRA) which yield a well-tempered neutralino, and models with unified Yukawa couplings at the GUT scale (as may occur in an SO(10) SUSY GUT theory). These latter models have difficulty accommodating neutralino dark matter, but work very well if the dark matter particles are axions and axinos.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, T., E-mail: xietao@ustc.edu.cn; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026; Qin, H.

    A unified ballooning theory, constructed on the basis of two special theories [Zhang et al., Phys. Fluids B 4, 2729 (1992); Y. Z. Zhang and T. Xie, Nucl. Fusion Plasma Phys. 33, 193 (2013)], shows that a weak up-down asymmetric mode structure is normally formed in an up-down symmetric equilibrium; the weak up-down asymmetry in mode structure is the manifestation of non-trivial higher order effects beyond the standard ballooning equation. It is shown that the asymmetric mode may have even higher growth rate than symmetric modes. The salient features of the theory are illustrated by investigating a fluid model for the ion temperature gradient (ITG) mode. The two-dimensional (2D) analytical form of the ITG mode, solved in ballooning representation, is then converted into the radial-poloidal space to provide the natural boundary condition for solving the 2D mathematical local eigenmode problem. We find that the analytical expression of the mode structure is in good agreement with the finite difference solution. This sets a reliable framework for quasi-linear computation.

  2. Transmembrane Helices Tilt, Bend, Slide, Torque, and Unwind between Functional States of Rhodopsin

    PubMed Central

    Ren, Zhong; Ren, Peter X.; Balusu, Rohith; Yang, Xiaojing

    2016-01-01

    The seven-helical bundle of rhodopsin and other G-protein coupled receptors undergoes structural rearrangements as the transmembrane receptor protein is activated. These structural changes are known to involve tilting and bending of various transmembrane helices. However, the cause and effect relationship among structural events leading to a cytoplasmic crevasse for G-protein binding is less well defined. Here we present a mathematical model of the protein helix and a simple procedure to determine multiple parameters that offer precise depiction of a helical conformation. A comprehensive survey of bovine rhodopsin structures shows that the helical rearrangements during the activation of rhodopsin involve a variety of angular and linear motions such as torsion, unwinding, and sliding in addition to the previously reported tilting and bending. These hitherto undefined motion components unify the results obtained from different experimental approaches, and demonstrate conformational similarity between the active opsin structure and the photoactivated structures in crystallo near the retinal anchor despite their marked differences. PMID:27658480

  3. Fitting Multimeric Protein Complexes into Electron Microscopy Maps Using 3D Zernike Descriptors

    PubMed Central

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2012-01-01

    A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method named EMLZerD generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root mean square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases. PMID:22417139
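
    The EM-map fitting step above scores candidates by comparing descriptor vectors. A minimal sketch of that comparison step (the 3DZD computation itself, a series expansion over a voxelized surface, is omitted; the descriptor values and the function name are hypothetical):

```python
import numpy as np

def zernike_distance(d1, d2):
    """Euclidean distance between two 3D Zernike descriptor vectors.

    A 3DZD is a rotation-invariant vector of norms of Zernike moment
    coefficients, so two surface shapes can be compared with a plain
    vector distance, regardless of their orientation.
    """
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    if d1.shape != d2.shape:
        raise ValueError("descriptors must have the same length")
    return float(np.linalg.norm(d1 - d2))

# Hypothetical descriptors: a docking conformation vs. the EM map.
model_3dzd = [0.91, 0.40, 0.13, 0.07, 0.02]
emmap_3dzd = [0.89, 0.42, 0.15, 0.06, 0.03]
score = zernike_distance(model_3dzd, emmap_3dzd)  # smaller = better fit
```

    In EMLZerD the candidates with the smallest descriptor distance to the map would be retained; the actual scoring used by the method may differ in detail.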

  4. Fitting multimeric protein complexes into electron microscopy maps using 3D Zernike descriptors.

    PubMed

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2012-06-14

    A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method named EMLZerD generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three-dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root-mean-square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases.

  5. Pattern formation in mass conserving reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Brauns, Fridtjof; Halatek, Jacob; Frey, Erwin

    We present a rigorous theoretical framework able to generalize and unify pattern formation for quantitative mass conserving reaction-diffusion models. Mass redistribution controls chemical equilibria locally. Separation of diffusive mass redistribution on the level of conserved species provides a general mathematical procedure to decompose complex reaction-diffusion systems into effectively independent functional units, and to reveal the general underlying bifurcation scenarios. We apply this framework to Min protein pattern formation and identify the mechanistic roles of both involved protein species. MinD generates polarity through phase separation, whereas MinE takes the role of a control variable regulating the existence of MinD phases. Hence, polarization and not oscillations is the generic core dynamics of Min proteins in vivo. This establishes an intrinsic mechanistic link between the Min system and a broad class of intracellular pattern forming systems based on bistability and phase separation (wave-pinning). Oscillations are facilitated by MinE redistribution and can be understood mechanistically as relaxation oscillations of the polarization direction.
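
    As a toy illustration of the conserved-mass structure these models share, the sketch below uses a generic two-species attachment/detachment reaction (illustrative rates, not the actual Min dynamics): because the reaction term only exchanges mass between the slow and fast species, the total mass is invariant under the dynamics.

```python
import numpy as np

# Minimal 1D two-component mass-conserving reaction-diffusion sketch:
# u is a slowly diffusing (membrane-like) species, v a fast (cytosolic)
# one. The reaction f only redistributes mass between u and v, so the
# total mass is conserved. All rates here are invented for illustration.
n, L = 100, 10.0
dx = L / n
Du, Dv = 0.01, 1.0               # slow vs. fast diffusion
k_on, k_off = 1.0, 0.5           # attachment/detachment rates
dt = 0.2 * dx**2 / Dv            # explicit-Euler stability margin

def laplacian(a):
    # Periodic boundaries; like no-flux walls, they cannot leak mass.
    return (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2

x = np.linspace(0.0, L, n, endpoint=False)
u = 0.5 + 0.1 * np.sin(2 * np.pi * x / L)   # small initial perturbation
v = np.full(n, 1.0)
total0 = (u + v).sum() * dx                 # conserved total mass

for _ in range(500):
    f = k_on * v - k_off * u                # mass exchange only
    u = u + dt * (Du * laplacian(u) + f)
    v = v + dt * (Dv * laplacian(v) - f)

total = (u + v).sum() * dx                  # unchanged up to round-off
```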

  6. Development of a unified constitutive model for an isotropic nickel base superalloy Rene 80

    NASA Technical Reports Server (NTRS)

    Ramaswamy, V. G.; Vanstone, R. H.; Laflen, J. H.; Stouffer, D. C.

    1988-01-01

    Accurate analysis of stress-strain behavior is of critical importance in the evaluation of life capabilities of hot section turbine engine components such as turbine blades and vanes. The constitutive equations used in the finite element analysis of such components must be capable of modeling a variety of complex behavior exhibited at high temperatures by cast superalloys. The classical separation of plasticity and creep employed in most of the finite element codes in use today is known to be deficient in modeling elevated temperature time dependent phenomena. Rate dependent, unified constitutive theories can overcome many of these difficulties. A new unified constitutive theory was developed to model the high temperature, time dependent behavior of Rene' 80 which is a cast turbine blade and vane nickel base superalloy. Considerations in model development included the cyclic softening behavior of Rene' 80, rate independence at lower temperatures and the development of a new model for static recovery.

  7. A Unified Framework for Analyzing and Designing for Stationary Arterial Networks

    DOT National Transportation Integrated Search

    2017-05-17

    This research aims to develop a unified theoretical and simulation framework for analyzing and designing signals for stationary arterial networks. Existing traffic flow models used in design and analysis of signal control strategies are either too si...

  8. A combined model for pseudo-rapidity distributions in Cu-Cu collisions at BNL-RHIC energies

    NASA Astrophysics Data System (ADS)

    Jiang, Z. J.; Wang, J.; Huang, Y.

    2016-04-01

    The charged particles produced in nucleus-nucleus collisions come from leading particles and from those frozen out of the hot and dense matter created in the collisions. The leading particles are conventionally assumed to have Gaussian rapidity distributions normalized to the number of participants. The hot and dense matter is assumed to expand according to unified hydrodynamics, a hydro model which unifies the features of the Landau and Hwa-Bjorken models, and to freeze out into charged particles from a time-like hypersurface with a proper time of τ_FO. The rapidity distribution of this part of the charged particles can be derived analytically. The combined contribution from both leading particles and unified hydrodynamics is then compared against the experimental data taken by the BNL-RHIC-PHOBOS Collaboration in different-centrality Cu-Cu collisions at √(s_NN) = 200 and 62.4 GeV, respectively. The model predictions are consistent with the experimental measurements.
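
    The two-source picture can be sketched numerically. Below, the leading particles are two Gaussians at ±y0 and the fireball contribution is approximated by a single wide Gaussian standing in for the analytic unified-hydro freeze-out solution; all yields and widths are invented for illustration, not fitted to PHOBOS data.

```python
import numpy as np

def dn_dy(y, n_lead=30.0, y0=3.0, sigma=0.8, n_hydro=200.0, width=2.5):
    """Illustrative combined rapidity distribution (all numbers invented).

    Leading particles: two Gaussians centred at +/- y0 (projectile and
    target sides), each carrying n_lead particles. Fireball: a single
    wide Gaussian of yield n_hydro, a crude stand-in for the analytic
    unified-hydrodynamics contribution.
    """
    y = np.asarray(y, float)
    lead = n_lead / (np.sqrt(2 * np.pi) * sigma) * (
        np.exp(-(y - y0) ** 2 / (2 * sigma ** 2))
        + np.exp(-(y + y0) ** 2 / (2 * sigma ** 2)))
    hydro = n_hydro / (np.sqrt(2 * np.pi) * width) * np.exp(
        -(y ** 2) / (2 * width ** 2))
    return lead + hydro

y = np.linspace(-6.0, 6.0, 241)
dist = dn_dy(y)
# Riemann-sum yield: ~ 2*n_lead + n_hydro, minus the tails beyond |y|=6.
total_yield = dist.sum() * (y[1] - y[0])
```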

  9. A unified approach to computer analysis and modeling of spacecraft environmental interactions

    NASA Technical Reports Server (NTRS)

    Katz, I.; Mandell, M. J.; Cassidy, J. J.

    1986-01-01

    A new, coordinated, unified approach to the development of spacecraft plasma interaction models is proposed. The objective is to eliminate unnecessary duplicative work in order to allow researchers to concentrate on the scientific aspects. By streamlining the development process, the interchange between theorists and experimentalists is enhanced, and the transfer of technology to the spacecraft engineering community is faster. This approach is called the UNIfied Spacecraft Interaction Model (UNISIM). UNISIM is a coordinated system of software, hardware, and specifications. It is a tool for modeling and analyzing spacecraft interactions. It will be used to design experiments, to interpret results of experiments, and to aid in future spacecraft design. It breaks a spacecraft interaction analysis into several modules. Each module will perform an analysis for some physical process, using phenomenology and algorithms which are well documented and have been subject to review. This system and its characteristics are discussed.

  10. The transport of drug in fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir

    2016-07-01

    The topic of the review article [1] is the derivation of a multiscale paradigm for the modeling of fibrosis. First, the biological process of physiological and pathological fibrosis, including therapeutic actions, is reviewed. Fibrosis can be a consequence of tissue damage, infections and autoimmune diseases, foreign material, or tumors. Answers to some questions regarding the pathogenesis, progression and possible regression of fibrosis are still lacking. At each scale of observation, different theoretical tools coming from computational, mathematical and physical biology have been proposed. However, a complete framework that takes into account the different mechanisms occurring at different scales is still missing. Therefore, with the main aim of defining a multiscale approach for the modeling of fibrosis, the authors of [1] have presented different top-down and bottom-up approaches that have been developed in the literature. Specifically, their description refers to models for fibrosis diseases based on ordinary and partial differential equations, agents [2], thermostatted kinetic theory [3-5], coarse-grained structures [6-8] and constitutive laws for fibrous collagen networks [9]. A critical analysis has been addressed for all frameworks discussed in the paper. Open problems and future research directions referring to both biological and modeling insight of fibrosis are presented. The paper concludes with the ambitious aim of a multiscale model.

  11. Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2008-05-01

    In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
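
    A minimal 1D illustration of a spatially-variant flat operator (a sketch in the spirit of the paper's SV flat erosion, not its own code): the structuring element's half-width varies with position, so each output sample takes its minimum over a window of different size.

```python
import numpy as np

def sv_flat_erosion(f, radius):
    """Spatially-variant flat gray-level erosion of a 1D signal.

    radius[i] is the half-width of the flat structuring element at
    position i, so the minimum is taken over a window whose size varies
    across the signal (clipped at the borders).
    """
    f = np.asarray(f, float)
    out = np.empty_like(f)
    for i in range(len(f)):
        r = int(radius[i])
        lo, hi = max(0, i - r), min(len(f), i + r + 1)
        out[i] = f[lo:hi].min()
    return out

signal = np.array([3., 5., 2., 8., 7., 4., 9., 1.])
radii  = np.array([0,  1,  1,  2,  2,  1,  1,  0])
eroded = sv_flat_erosion(signal, radii)
```

    Erosion is anti-extensive for flat structuring elements containing the origin, so the output never exceeds the input; an SV dilation would replace `min` with `max` over the same position-dependent windows.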

  12. Preliminary Development of a Unified Viscoplastic Constitutive Model for Alloy 617 with Special Reference to Long Term Creep Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sham, Sam; Walker, Kevin P.

    The expected service life of the Next Generation Nuclear Plant is 60 years. Structural analyses of the Intermediate Heat Exchanger (IHX) will require the development of unified viscoplastic constitutive models that address the material behavior of Alloy 617, a construction material of choice, over a wide range of strain rates. Many unified constitutive models employ a yield stress state variable which is used to account for cyclic hardening and softening of the material. For low stress values below the yield stress state variable these constitutive models predict that no inelastic deformation takes place which is contrary to experimental results. The ability to model creep deformation at low stresses for the IHX application is very important as the IHX operational stresses are restricted to very small values due to the low creep strengths at elevated temperatures and long design lifetime. This paper presents some preliminary work in modeling the unified viscoplastic constitutive behavior of Alloy 617 which accounts for the long term, low stress, creep behavior and the hysteretic behavior of the material at elevated temperatures. The preliminary model is presented in one-dimensional form for ease of understanding, but the intent of the present work is to produce a three-dimensional model suitable for inclusion in the user subroutines UMAT and USERPL of the ABAQUS and ANSYS nonlinear finite element codes. Further experiments and constitutive modeling efforts are planned to model the material behavior of Alloy 617 in more detail.
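
    The low-stress issue can be illustrated with a one-dimensional toy flow rule of the general unified-viscoplasticity form: a power law on the overstress with a back-stress state variable and static recovery. All constants below are invented and this is not the paper's Alloy 617 model; the point is only that, with no yield threshold in the flow rule, inelastic strain accumulates at any nonzero stress.

```python
import numpy as np

# Toy 1D unified viscoplastic model (illustrative constants, not fitted):
# flow rule  d(eps_in)/dt = A * sign(s - b) * |s - b|^n   (no threshold)
# back stress db/dt = H * d(eps_in)/dt - R * b            (recovery)
A, n = 1.0e-12, 5.0          # power-law flow constants (made up)
H, R = 50.0, 0.02            # hardening and static-recovery rates (made up)

def creep(stress, t_end=1000.0, steps=10000):
    """Integrate inelastic strain under constant stress (explicit Euler)."""
    dt = t_end / steps
    eps_in, back = 0.0, 0.0
    for _ in range(steps):
        drive = stress - back
        rate = A * np.sign(drive) * abs(drive) ** n   # no yield surface
        eps_in += rate * dt
        back += (H * rate - R * back) * dt
    return eps_in

low = creep(50.0)     # a nominally "below-yield" stress still creeps
high = creep(100.0)   # higher stress creeps faster
```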

  13. Detecting perceptual groupings in textures by continuity considerations

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.

    1990-01-01

    A generalization is presented for the second derivative of a Gaussian (D²G) operator to apply to problems of perceptual organization involving textures. Extensions to other problems of perceptual organization are evident and a new research direction can be established. The technique presented is theoretically pleasing since it has the potential of unifying the entire area of image segmentation under the mathematical notion of continuity and presents a single algorithm to form perceptual groupings where many algorithms existed previously. The eventual impact on both the approach and technique of image processing segmentation operations could be significant.

  14. An accessible four-dimensional treatment of Maxwell's equations in terms of differential forms

    NASA Astrophysics Data System (ADS)

    Sá, Lucas

    2017-03-01

    Maxwell’s equations are derived in terms of differential forms in the four-dimensional Minkowski representation, starting from the three-dimensional vector calculus differential version of these equations. Introducing all the mathematical and physical concepts needed (including the tool of differential forms), using only knowledge of elementary vector calculus and the local vector version of Maxwell’s equations, the equations are reduced to a simple and elegant set of two equations for a unified quantity, the electromagnetic field. The treatment should be accessible for students taking a first course on electromagnetism.
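
    The "simple and elegant set of two equations" referred to above is, in one common convention (Minkowski metric, SI units, with F the Faraday 2-form built from the potential 1-form A, J the current-density 1-form, and ⋆ the Hodge star), the standard pair:

```latex
F = \mathrm{d}A = \tfrac{1}{2} F_{\mu\nu}\,\mathrm{d}x^{\mu}\wedge\mathrm{d}x^{\nu},
\qquad
\mathrm{d}F = 0,
\qquad
\mathrm{d}{\star}F = \mu_{0}\,{\star}J .
```

    Here dF = 0 packages the two homogeneous equations (∇·B = 0 and Faraday's law), while d⋆F = μ₀⋆J packages the inhomogeneous pair (Gauss's law and the Ampère-Maxwell law); sign and unit conventions vary between texts.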

  15. A unified 3D default space consciousness model combining neurological and physiological processes that underlie conscious experience

    PubMed Central

    Jerath, Ravinder; Crawford, Molly W.; Barnes, Vernon A.

    2015-01-01

    The Global Workspace Theory and Information Integration Theory are two of the most currently accepted consciousness models; however, these models do not address many aspects of conscious experience. We compare these models to our previously proposed consciousness model in which the thalamus fills-in processed sensory information from corticothalamic feedback loops within a proposed 3D default space, resulting in the recreation of the internal and external worlds within the mind. This 3D default space is composed of all cells of the body, which communicate via gap junctions and electrical potentials to create this unified space. We use 3D illustrations to explain how both visual and non-visual sensory information may be filled-in within this dynamic space, creating a unified seamless conscious experience. This neural sensory memory space is likely generated by baseline neural oscillatory activity from the default mode network, other salient networks, brainstem, and reticular activating system. PMID:26379573

  16. A unified model explains commonness and rarity on coral reefs.

    PubMed

    Connolly, Sean R; Hughes, Terry P; Bellwood, David R

    2017-04-01

    Abundance patterns in ecological communities have important implications for biodiversity maintenance and ecosystem functioning. However, ecological theory has been largely unsuccessful at capturing multiple macroecological abundance patterns simultaneously. Here, we propose a parsimonious model that unifies widespread ecological relationships involving local aggregation, species-abundance distributions, and species associations, and we test this model against the metacommunity structure of reef-building corals and coral reef fishes across the western and central Pacific. For both corals and fishes, the unified model simultaneously captures extremely well local species-abundance distributions, interspecific variation in the strength of spatial aggregation, patterns of community similarity, species accumulation, and regional species richness, performing far better than alternative models also examined here and in previous work on coral reefs. Our approach contributes to the development of synthetic theory for large-scale patterns of community structure in nature, and to addressing ongoing challenges in biodiversity conservation at macroecological scales. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  17. Concentration-driven models revisited: towards a unified framework to model settling tanks in water resource recovery facilities.

    PubMed

    Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar

    2017-02-01

    A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for a more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.

  18. Addressing Learning Style Criticism: The Unified Learning Style Model Revisited

    NASA Astrophysics Data System (ADS)

    Popescu, Elvira

    Learning style is one of the individual differences that play an important but controversial role in the learning process. This paper aims at providing a critical analysis regarding learning styles and their use in technology enhanced learning. The identified criticism issues are addressed by reappraising the so called Unified Learning Style Model (ULSM). A detailed description of the ULSM components is provided, together with their rationale. The practical applicability of the model in adaptive web-based educational systems and its advantages versus traditional learning style models are also outlined.

  19. A unified view of acoustic-electrostatic solitons in complex plasmas

    NASA Astrophysics Data System (ADS)

    McKenzie, J. F.; Doyle, T. B.

    2003-03-01

    A fluid dynamic approach is used in a unified fully nonlinear treatment of the properties of the dust-acoustic, ion-acoustic and Langmuir-acoustic solitons. The analysis, which is carried out in the wave frame of the soliton, is based on total momentum conservation and Bernoulli-like energy equations for each of the particle species in each wave type, and yields the structure equation for the `heavy' species flow speed in each case. The heavy (cold or supersonic) species is always compressed in the soliton, requiring concomitant constraints on the potential and on the flow speed of the electrons and protons in the wave. The treatment clearly elucidates the crucial role played by the heavy species sonic point in limiting the collective species Mach number, which determines the upper limit for the existence of the soliton and its amplitude, and also shows the essentially similar nature of each soliton type. An exact solution, which highlights these characteristic properties, shows that the three acoustic solitons are in fact the same mathematical entity in different physical disguises.

  20. Unified synchronization criteria in an array of coupled neural networks with hybrid impulses.

    PubMed

    Wang, Nan; Li, Xuechen; Lu, Jianquan; Alsaadi, Fuad E

    2018-05-01

    This paper investigates the problem of globally exponential synchronization of coupled neural networks with hybrid impulses. Two new concepts, average impulsive interval and average impulsive gain, are proposed to deal with the difficulties coming from hybrid impulses. By employing the Lyapunov method combined with some mathematical analysis, some efficient unified criteria are obtained to guarantee the globally exponential synchronization of impulsive networks. Our method and criteria are proved to be effective for impulsively coupled neural networks simultaneously with synchronizing impulses and desynchronizing impulses, and we do not need to discuss these two kinds of impulses separately. Moreover, by using our average impulsive interval method, we can obtain an interesting and valuable result for the case of average impulsive interval T_a = ∞. For some sparse impulsive sequences with T_a = ∞, the impulses can happen an infinite number of times, but they do not have an essential influence on the synchronization property of the networks. Finally, numerical examples including scale-free networks are exploited to illustrate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
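
    The T_a = ∞ case can be made concrete with a toy estimate: for a sparse impulse sequence such as t_k = k², the elapsed time per impulse grows without bound. The function below is an illustration only, not the paper's formal definition (which bounds the impulse count N(t, s) via (t − s)/T_a ± N₀).

```python
import numpy as np

def empirical_average_interval(times, horizon):
    """Elapsed time per impulse over [0, horizon] (a crude estimate of T_a).

    times: sorted impulse instants; horizon: observation window length.
    If this ratio diverges as the horizon grows, the sequence behaves
    like the average-impulsive-interval T_a = infinity case.
    """
    times = np.asarray(times, float)
    count = int((times <= horizon).sum())
    return horizon / count if count else float("inf")

# Sparse sequence t_k = k**2: N(t) ~ sqrt(t), so t / N(t) ~ sqrt(t).
sparse = np.array([k ** 2 for k in range(1, 200)], float)
est_1e2 = empirical_average_interval(sparse, 1e2)   # 10 impulses by t=100
est_1e4 = empirical_average_interval(sparse, 1e4)   # 100 impulses by t=10^4
```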

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnowitt, R.; Nath, P.

    A survey is given of supersymmetry and supergravity and their phenomenology. Some of the topics discussed are the basic ideas of global supersymmetry, the minimal supersymmetric Standard Model (MSSM) and its phenomenology, the basic ideas of local supersymmetry (supergravity), grand unification, supersymmetry breaking in supergravity grand unified models, radiative breaking of SU(2) {times} U(1), proton decay, cosmological constraints, and predictions of supergravity grand unified models. While the number of detailed derivations is necessarily limited, a sufficient number of results are given so that a reader can get a working knowledge of this field.

  2. Finite element implementation of Robinson's unified viscoplastic model and its application to some uniaxial and multiaxial problems

    NASA Technical Reports Server (NTRS)

    Arya, V. K.; Kaufman, A.

    1989-01-01

    A description of the finite element implementation of Robinson's unified viscoplastic model into the General Purpose Finite Element Program (MARC) is presented. To demonstrate its application, the implementation is applied to some uniaxial and multiaxial problems. A comparison of the results for the multiaxial problem of a thick internally pressurized cylinder, obtained using the finite element implementation and an analytical solution, is also presented. The excellent agreement obtained confirms the correct finite element implementation of Robinson's model.

  3. Finite element implementation of Robinson's unified viscoplastic model and its application to some uniaxial and multiaxial problems

    NASA Technical Reports Server (NTRS)

    Arya, V. K.; Kaufman, A.

    1987-01-01

    A description of the finite element implementation of Robinson's unified viscoplastic model into the General Purpose Finite Element Program (MARC) is presented. To demonstrate its application, the implementation is applied to some uniaxial and multiaxial problems. A comparison of the results for the multiaxial problem of a thick internally pressurized cylinder, obtained using the finite element implementation and an analytical solution, is also presented. The excellent agreement obtained confirms the correct finite element implementation of Robinson's model.

  4. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  5. Effects of Positive Unified Behavior Support on Instruction

    ERIC Educational Resources Information Center

    Scott, John S.; White, Richard; Algozzine, Bob; Algozzine, Kate

    2009-01-01

    "Positive Unified Behavior Support" (PUBS) is a school-wide intervention designed to establish uniform attitudes, expectations, correction procedures, and roles among faculty, staff, and administration. PUBS is grounded in the general principles of positive behavior support and represents a straightforward, practical implementation model. When…

  6. A unified architecture for biomedical search engines based on semantic web technologies.

    PubMed

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    The volume of published biomedical research has grown enormously in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
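
    The mean-average-precision figure used in the evaluation can be sketched as follows (document ids and relevance judgments below are hypothetical):

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked result list.

    ranked: document ids in retrieval order; relevant: set of relevant
    ids. AP averages the precision at each rank where a relevant
    document appears, divided by the total number of relevant documents.
    """
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """Mean of AP over (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Hypothetical two-query evaluation:
runs = [(["d1", "d2", "d3", "d4"], {"d1", "d3"}),   # AP = (1 + 2/3)/2
        (["d9", "d2", "d5"],       {"d2"})]         # AP = 1/2
map_score = mean_average_precision(runs)
```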

  7. BRDF profile of Tyvek and its implementation in the Geant4 simulation toolkit.

    PubMed

    Nozka, Libor; Pech, Miroslav; Hiklova, Helena; Mandat, Dusan; Hrabovsky, Miroslav; Schovanek, Petr; Palatka, Miroslav

    2011-02-28

    Diffuse and specular characteristics of the Tyvek 1025-BL material are reported with respect to their implementation in the Geant4 Monte Carlo simulation toolkit. This toolkit incorporates the UNIFIED model. Coefficients defined by the UNIFIED model were calculated from the bidirectional reflectance distribution function (BRDF) profiles measured with a scatterometer for several angles of incidence. Results were amended with profile measurements made by a profilometer.

  8. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the users choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  9. Unified Mie and fractal scattering by cells and experimental study on application in optical characterization of cellular and subcellular structures.

    PubMed

    Xu, Min; Wu, Tao T; Qu, Jianan Y

    2008-01-01

    A unified Mie and fractal model for light scattering by biological cells is presented. This model is shown to provide excellent global agreement with the angularly dependent elastic light scattering spectroscopy of cells over the whole visible range (400 to 700 nm) and at all scattering angles (1.1 to 165 deg) investigated. Mie scattering from the bare cell and the nucleus is found to dominate light scattering in the forward directions, whereas the random fluctuation of the background refractive index within the cell, behaving as a fractal random continuous medium, is found to dominate light scattering at other angles. Angularly dependent elastic light scattering spectroscopy aided by the unified Mie and fractal model is demonstrated to be an effective noninvasive approach to characterize biological cells and their internal structures. The acetowhitening effect induced by applying acetic acid on epithelial cells is investigated as an example. The changes in morphology and refractive index of epithelial cells, nuclei, and subcellular structures after the application of acetic acid are successfully probed and quantified using the proposed approach. The unified Mie and fractal model may serve as the foundation for optical detection of precancerous and cancerous changes in biological cells and tissues based on light scattering techniques.

  10. Evolution and mass extinctions as lognormal stochastic processes

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2014-10-01

    In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History into a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes models). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b>0 rather than at the origin. These b-lognormals were then forced by us to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the 'degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller 'chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the 'trend of progress' in human history. 
(4) All these results also match with SETI in that the statistical Drake equation (a generalization of the ordinary Drake equation to encompass statistics) leads just to the lognormal distribution as the probability distribution for the number of extra-terrestrial civilizations existing in the Galaxy (as a consequence of the central limit theorem of statistics). (5) But the most striking new result is that the well-known 'Molecular Clock of Evolution', namely the 'constant rate of Evolution at the molecular level' as shown by Kimura's Neutral Theory of Molecular Evolution, identifies with the growth rate of the entropy of our Evo-SETI model, because both grow linearly in time since the origin of life. (6) Furthermore, we apply our Evo-SETI model to lognormal stochastic processes other than GBMs. For instance, we provide two models for the mass extinctions that occurred in the past: (a) one based on GBMs and (b) the other based on a parabolic mean value capable of covering both the extinction and the subsequent recovery of life forms. (7) Finally, we show that the Markov & Korotayev (2007, 2008) model for Darwinian Evolution identifies with an Evo-SETI model for which the mean value of the underlying lognormal stochastic process is a cubic function of time. In conclusion: we have provided a new mathematical model capable of embracing molecular evolution, SETI and entropy into a simple set of statistical equations based upon b-lognormals and lognormal stochastic processes with arbitrary mean, of which the GBMs are the particular case of exponential growth.
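    The GBM construction in item (1) can be checked numerically: exact log-space updates give a final state whose logarithm is Normal with mean ln S0 + (μ - σ²/2)T and standard deviation σ√T, i.e. S(T) is lognormal. A minimal sketch with illustrative parameters, not the paper's fitted values:

```python
import math
import random
import statistics

def gbm_final(s0, mu, sigma, T, steps, rng):
    """One GBM path via exact log-space updates: dS = mu*S dt + sigma*S dW."""
    dt = T / steps
    log_s = math.log(s0)
    for _ in range(steps):
        log_s += (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    return math.exp(log_s)

rng = random.Random(42)
s0, mu, sigma, T = 1.0, 0.2, 0.3, 2.0
finals = [gbm_final(s0, mu, sigma, T, 50, rng) for _ in range(20000)]
logs = [math.log(s) for s in finals]
# Theory: mean of ln S(T) = (mu - sigma^2/2)*T = 0.31,
#         stdev of ln S(T) = sigma*sqrt(T) ≈ 0.4243.
print(statistics.mean(logs))
print(statistics.stdev(logs))
```

    The sample statistics of ln S(T) match the lognormal prediction to Monte Carlo accuracy.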

  11. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model

    PubMed Central

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literature synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse and then impacting on personal resilience and workplace outcomes; its use has the potential to increase staff retention and quality of patient care. PMID:27242567

  12. Modelling Trial-by-Trial Changes in the Mismatch Negativity

    PubMed Central

    Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.

    2013-01-01

    The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
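    The free-energy-style accounts above share the idea that MMN amplitude tracks a prediction error under an internal probabilistic model. As a toy illustration (not any of the five fitted models in the paper), a Beta-Bernoulli learner's trial-wise surprise, -log p(stimulus), spikes at a deviant and decays over repeated standards:

```python
import math

def surprise_trace(stimuli, prior=(1.0, 1.0)):
    """Beta-Bernoulli learner over a binary stimulus stream; the per-trial
    surprise -log p(x) serves as a crude stand-in for a trial-wise
    MMN-amplitude predictor."""
    a, b = prior
    out = []
    for x in stimuli:
        p = a / (a + b) if x == 1 else b / (a + b)
        out.append(-math.log(p))
        if x == 1:
            a += 1
        else:
            b += 1
    return out

# Roving-oddball-like train: repeats of one tone, then a deviant, then repeats.
seq = [0] * 8 + [1] + [0] * 4
tr = surprise_trace(seq)
```

    Surprise falls across the standards (learning the regularity) and peaks at trial 9, the deviant, mimicking the qualitative trial-by-trial MMN pattern.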

  13. Grid cell hexagonal patterns formed by fast self-organized learning within entorhinal cortex.

    PubMed

    Mhatre, Himanshu; Gorchetchnikov, Anatoli; Grossberg, Stephen

    2012-02-01

    Grid cells in the dorsal segment of the medial entorhinal cortex (dMEC) show remarkable hexagonal activity patterns, at multiple spatial scales, during spatial navigation. It has previously been shown how a self-organizing map can convert firing patterns across entorhinal grid cells into hippocampal place cells that are capable of representing much larger spatial scales. Can grid cell firing fields also arise during navigation through learning within a self-organizing map? This article describes a simple and general mathematical property of the trigonometry of spatial navigation which favors hexagonal patterns. The article also develops a neural model that can learn to exploit this trigonometric relationship. This GRIDSmap self-organizing map model converts path integration signals into hexagonal grid cell patterns of multiple scales. GRIDSmap creates only grid cell firing patterns with the observed hexagonal structure, predicts how these hexagonal patterns can be learned from experience, and can process biologically plausible neural input and output signals during navigation. These results support an emerging unified computational framework based on a hierarchy of self-organizing maps for explaining how entorhinal-hippocampal interactions support spatial navigation. Copyright © 2010 Wiley Periodicals, Inc.

  14. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    PubMed Central

    Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo

    2012-01-01

    In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model’s statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of non-linearity. We here present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, non-invasive assessment in clinical practice. We also discuss the limitations and other alternative modeling strategies of our point process approach. PMID:22375120
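    The point-process literature this review draws on commonly models interbeat intervals with a history-dependent inverse-Gaussian density. The sketch below evaluates a static (history-free) version with illustrative parameters, merely to show the ingredients; the actual framework makes mu a time-varying function of past beats via the AR structure:

```python
import math

def inv_gauss_pdf(w, mu, lam):
    """Inverse-Gaussian density for an interbeat interval w (seconds)."""
    return math.sqrt(lam / (2 * math.pi * w**3)) * math.exp(
        -lam * (w - mu)**2 / (2 * mu**2 * w))

mu, lam = 0.8, 30.0        # hypothetical mean RR interval (s) and shape
hr = 60.0 / mu             # instantaneous heart rate estimate (beats/min)

# Sanity check: the density integrates to ~1 over a wide support.
dw = 0.001
area = sum(inv_gauss_pdf(0.05 + i * dw, mu, lam) * dw for i in range(5000))
```

    An RR mean of 0.8 s corresponds to 75 beats/min; the normalization check confirms the density is proper.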

  15. SO(10) × S4 grand unified theory of flavour and leptogenesis

    NASA Astrophysics Data System (ADS)

    de Anda, Francisco J.; King, Stephen F.; Perdomo, Elena

    2017-12-01

    We propose a Grand Unified Theory of Flavour, based on SO(10) together with a non-Abelian discrete group S4, under which the three quark and lepton 16-plets are unified into a single triplet 3'. The model involves a further discrete group ℤ4^R × ℤ4^3 which controls the Higgs and flavon symmetry breaking sectors. The CSD2 flavon vacuum alignment is discussed, along with the GUT breaking potential and the doublet-triplet splitting, and proton decay is shown to be under control. The Yukawa matrices are derived in detail from renormalisable diagrams, and neutrino masses emerge from the type I seesaw mechanism. A full numerical fit is performed with 15 input parameters generating 19 presently constrained observables, taking into account supersymmetry threshold corrections. The model predicts a normal neutrino mass ordering with a CP oscillation phase of 260°, an atmospheric angle in the first octant, and neutrinoless double beta decay with m_ββ = 11 meV. We discuss N2 leptogenesis, which fixes the second right-handed neutrino mass to be M2 ≃ 2 × 10^11 GeV, in the natural range predicted by the model.

  16. The unified model of vegetarian identity: A conceptual framework for understanding plant-based food choices.

    PubMed

    Rosenfeld, Daniel L; Burrow, Anthony L

    2017-05-01

    By departing from social norms regarding food behaviors, vegetarians acquire membership in a distinct social group and can develop a salient vegetarian identity. However, vegetarian identities are diverse, multidimensional, and unique to each individual. Much research has identified fundamental psychological aspects of vegetarianism, and an identity framework that unifies these findings into common constructs and conceptually defines variables is needed. Integrating psychological theories of identity with research on food choices and vegetarianism, this paper proposes a conceptual model for studying vegetarianism: The Unified Model of Vegetarian Identity (UMVI). The UMVI encompasses ten dimensions, organized into three levels (contextual, internalized, and externalized), that capture the role of vegetarianism in an individual's self-concept. Contextual dimensions situate vegetarianism within contexts; internalized dimensions outline self-evaluations; and externalized dimensions describe enactments of identity through behavior. Together, these dimensions form a coherent vegetarian identity, characterizing one's thoughts, feelings, and behaviors regarding being vegetarian. By unifying dimensions that capture psychological constructs universally, the UMVI can prevent discrepancies in operationalization, capture the inherent diversity of vegetarian identities, and enable future research to generate greater insight into how people understand themselves and their food choices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Grand unified brane world scenario

    NASA Astrophysics Data System (ADS)

    Arai, Masato; Blaschke, Filip; Eto, Minoru; Sakai, Norisuke

    2017-12-01

    We present a field-theoretical model unifying grand unified theory (GUT) and the brane world scenario. As a concrete example, we consider SU(5) GUT in 4+1 dimensions where our 3+1 dimensional spacetime spontaneously arises on five domain walls. A field-dependent gauge kinetic term is used to localize massless non-Abelian gauge fields on the domain walls and to assure the charge universality of matter fields. We find the domain walls with the symmetry breaking SU(5) → SU(3) × SU(2) × U(1) as a global minimum, and all the undesirable moduli are stabilized with the mass scale of M_GUT. Profiles of massless standard model particles are determined as a consequence of wall dynamics. The proton decay can be exponentially suppressed.

  18. A patient-specific computational model of hypoxia-modulated radiation resistance in glioblastoma using 18F-FMISO-PET

    PubMed Central

    Rockne, Russell C.; Trister, Andrew D.; Jacobs, Joshua; Hawkins-Daarud, Andrea J.; Neal, Maxwell L.; Hendrickson, Kristi; Mrugala, Maciej M.; Rockhill, Jason K.; Kinahan, Paul; Krohn, Kenneth A.; Swanson, Kristin R.

    2015-01-01

    Glioblastoma multiforme (GBM) is a highly invasive primary brain tumour that has poor prognosis despite aggressive treatment. A hallmark of these tumours is diffuse invasion into the surrounding brain, necessitating a multi-modal treatment approach, including surgery, radiation and chemotherapy. We have previously demonstrated the ability of our model to predict radiographic response immediately following radiation therapy in individual GBM patients using a simplified geometry of the brain and theoretical radiation dose. Using only two pre-treatment magnetic resonance imaging scans, we calculate net rates of proliferation and invasion as well as radiation sensitivity for a patient's disease. Here, we present the application of our clinically targeted modelling approach to a single glioblastoma patient as a demonstration of our method. We apply our model in the full three-dimensional architecture of the brain to quantify the effects of regional resistance to radiation owing to hypoxia in vivo determined by [18F]-fluoromisonidazole positron emission tomography (FMISO-PET) and the patient-specific three-dimensional radiation treatment plan. Incorporation of hypoxia into our model with FMISO-PET increases the model–data agreement by an order of magnitude. This improvement was robust to both our definition of hypoxia and the degree of radiation resistance, quantified with the FMISO-PET image and our computational model, respectively. This work demonstrates a useful application of patient-specific modelling in personalized medicine and how mathematical modelling has the potential to unify multi-modality imaging and radiation treatment planning. PMID:25540239
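    Patient-specific GBM models of this family typically combine a proliferation-invasion (Fisher-KPP) reaction-diffusion equation with linear-quadratic radiation cell kill. A minimal 1D sketch with made-up parameter values (the real model is 3D, calibrated per patient, and modulates the radiosensitivity alpha by hypoxia):

```python
import math

def step(u, D, rho, dx, dt):
    """One explicit step of u_t = D*u_xx + rho*u*(1-u) (proliferation-invasion)."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[0]          # no-flux boundaries
        right = u[i + 1] if i < n - 1 else u[-1]
        lap = (left - 2 * u[i] + right) / dx**2
        new[i] = u[i] + dt * (D * lap + rho * u[i] * (1 - u[i]))
    return new

def irradiate(u, alpha, beta, dose):
    """Linear-quadratic surviving fraction; hypoxic voxels would get lower alpha."""
    sf = math.exp(-alpha * dose - beta * dose**2)
    return [c * sf for c in u]

D, rho, dx, dt = 0.1, 0.5, 0.5, 0.1   # illustrative rates, not fitted values
u = [0.0] * 50
u[25] = 0.5                           # seed tumour cell density
for _ in range(100):                  # grow to t = 10
    u = step(u, D, rho, dx, dt)
total_before = sum(u)
u = irradiate(u, alpha=0.3, beta=0.03, dose=2.0)
total_after = sum(u)
```

    The tumour burden grows and spreads before treatment and is scaled down by the surviving fraction afterwards, the basic mechanism the calibrated 3D model quantifies voxel by voxel.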

  19. Analytical and numerical solutions of the potential and electric field generated by different electrode arrays in a tumor tissue under electrotherapy.

    PubMed

    Bergues Pupo, Ana E; Reyes, Juan Bory; Bergues Cabrales, Luis E; Bergues Cabrales, Jesús M

    2011-09-24

    Electrotherapy is a relatively well established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with shapes of circle and different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different shapes of conic sections by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode array with different conic sections.
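    The numerical side of such calculations can be sketched with a finite-difference (rather than finite-element) Jacobi solver for Laplace's equation. The geometry here is a simple parallel-plate stand-in, not the conic-section arrays of the paper:

```python
def solve_laplace(n, fixed, iters=800):
    """Jacobi iteration for Laplace's equation on an n x n grid;
    `fixed` maps (i, j) -> prescribed potential (electrodes/boundary)."""
    v = [[0.0] * n for _ in range(n)]
    for (i, j), val in fixed.items():
        v[i][j] = val
    for _ in range(iters):
        new = [row[:] for row in v]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i, j) not in fixed:
                    new[i][j] = 0.25 * (v[i-1][j] + v[i+1][j]
                                        + v[i][j-1] + v[i][j+1])
        v = new
    return v

n = 21
fixed = {}
for j in range(n):                     # top/bottom rows: linear ramp
    val = 1.0 - 2.0 * j / (n - 1)
    fixed[(0, j)] = val
    fixed[(n - 1, j)] = val
for i in range(n):                     # left electrode +1 V, right electrode -1 V
    fixed[(i, 0)] = 1.0
    fixed[(i, n - 1)] = -1.0
v = solve_laplace(n, fixed)
ex_mid = -(v[10][11] - v[10][9]) / 2.0  # Ex = -dV/dx by central difference
```

    For this plate geometry the exact solution is a linear potential with a uniform field of 0.1 V per grid cell, which the relaxation recovers; replacing `fixed` with points along an ellipse or hyperbola gives the conic-section arrays studied in the paper.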

  20. Short Range Tests of Gravity

    NASA Astrophysics Data System (ADS)

    Cardenas, Crystal; Harter, Andrew; Hoyle, C. D.; Leopardi, Holly; Smith, David

    2014-03-01

    Gravity was the first force to be described mathematically, yet it is the only fundamental force not well understood. The Standard Model of particle physics describes the fundamental strong, weak and electromagnetic interactions, while Einstein's theory of General Relativity (GR) describes the fundamental force of gravity. There is as yet no theory that resolves the inconsistencies between GR and quantum mechanics. Scenarios of String Theory predicting more than three spatial dimensions also predict physical effects of gravity at sub-millimeter scales that would alter the gravitational inverse-square law. The Weak Equivalence Principle (WEP), a central feature of GR, states that all objects are accelerated at the same rate in a gravitational field independent of their composition. A violation of the WEP at any length scale would be evidence that current models of gravity are incorrect. At the Humboldt State University Gravitational Research Laboratory, an experiment is being developed to observe gravitational interactions below the 50-micron distance scale. The experiment measures the twist of a parallel-plate torsion pendulum as an attractor mass is oscillated within 50 microns of the pendulum, providing a time-varying gravitational torque on the pendulum. The size and distance dependence of the torque amplitude provide a means to determine deviations from accepted models of gravity on untested distance scales.
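    Short-range deviations of the kind these experiments constrain are conventionally parameterised by a Yukawa correction to the Newtonian potential, V(r) = -(G m1 m2 / r)(1 + α e^(-r/λ)). The α and λ values below are illustrative only, not experimental limits:

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def yukawa_potential(r, m1, m2, alpha, lam):
    """Newtonian potential with a Yukawa correction, the standard
    parameterisation of inverse-square-law violations."""
    return -G * m1 * m2 / r * (1.0 + alpha * math.exp(-r / lam))

def deviation(r, alpha, lam):
    """Fractional deviation from pure Newtonian gravity at separation r."""
    return alpha * math.exp(-r / lam)

lam = 50e-6                                       # 50-micron range probed
d_close = deviation(25e-6, alpha=1.0, lam=lam)    # well inside the Yukawa range
d_far = deviation(1e-3, alpha=1.0, lam=lam)       # at 1 mm the deviation dies off
```

    The exponential cutoff is why sub-50-micron separations are needed: at millimetre distances the hypothetical new physics is already exponentially invisible.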

  1. From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''

    NASA Astrophysics Data System (ADS)

    Bergeron, H.

    2001-09-01

    Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L²-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper, which specifies the Hilbert space as L²(ℝⁿ), the Heisenberg rule [p_i, q_j] = -iℏδ_ij with p = -iℏ∇, the free Hamiltonian H = -ℏ²Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related to the quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle). 
Moreover spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].

  2. Simulation of Aerosols and Chemistry with a Unified Global Model

    NASA Technical Reports Server (NTRS)

    Chin, Mian

    2004-01-01

    This project is to continue the development of the global simulation capabilities of tropospheric and stratospheric chemistry and aerosols in a unified global model. This is a part of our overall investigation of aerosol-chemistry-climate interaction. In the past year, we have enabled the tropospheric chemistry simulations based on the GEOS-CHEM model, and added stratospheric chemical reactions into the GEOS-CHEM such that a globally unified troposphere-stratosphere chemistry and transport can be simulated consistently without any simplifications. The tropospheric chemical mechanism in the GEOS-CHEM includes 80 species and 150 reactions. 24 tracers are transported, including O3, NOx, total nitrogen (NOy), H2O2, CO, and several types of hydrocarbons. The chemical solver used in the GEOS-CHEM model is a highly accurate sparse-matrix vectorized Gear solver (SMVGEAR). The stratospheric chemical mechanism includes approximately 100 additional reactions and photolysis processes. Because of the large number of chemical reactions and photolysis processes and the very different photochemical regimes involved in the unified simulation, the model demands computer resources beyond what is currently practical. Therefore, several improvements will be pursued, such as massive parallelization, code optimization, or selecting a faster solver. We have also continued aerosol simulation (including sulfate, dust, black carbon, organic carbon, and sea salt) in the global model, covering most of the year 2002. These results have been made available to many groups worldwide and are accessible from the website http://code916.gsfc.nasa.gov/People/Chin/aot.html.
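    The need for a stiff chemistry solver such as SMVGEAR can be illustrated (not reproduced) with backward Euler on a toy two-species chain whose rate constants differ by three orders of magnitude; an explicit method would need dt < 2/k2 to remain stable, while the implicit update stays bounded at any step size:

```python
def backward_euler(a0, b0, k1, k2, dt, steps):
    """Implicit (backward) Euler for the stiff chain A -k1-> B -k2-> C.
    The linear implicit equations solve in closed form here."""
    a, b = a0, b0
    for _ in range(steps):
        a = a / (1 + k1 * dt)                  # A_{n+1} = A_n / (1 + k1 dt)
        b = (b + k1 * dt * a) / (1 + k2 * dt)  # uses the updated A
    return a, b

# dt = 0.1 is 50x above the explicit stability limit 2/k2 = 0.002,
# yet the implicit solution decays smoothly toward zero.
a, b = backward_euler(1.0, 0.0, k1=1.0, k2=1000.0, dt=0.1, steps=100)
```

    Gear-type solvers generalise this idea to high order with adaptive steps, which is what makes chemistries with hundreds of reactions across disparate photochemical regimes tractable.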

  3. Lowering the Barrier to Cross-Disciplinary Scientific Data Access via a Brokering Service Built Around a Unified Data Model

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Wilson, A.

    2012-12-01

    The steps many scientific data users go through to use data (after discovering it) can be rather tedious, even when dealing with datasets within their own discipline. Accessing data across domains often seems intractable. We present here, LaTiS, an Open Source brokering solution that bridges the gap between the source data and the user's code by defining a unified data model plus a plugin framework for "adapters" to read data from their native source, "filters" to perform server side data processing, and "writers" to output any number of desired formats or streaming protocols. A great deal of work is being done in the informatics community to promote multi-disciplinary science with a focus on search and discovery based on metadata - information about the data. The goal of LaTiS is to go that last step to provide a uniform interface to read the dataset into computer programs and other applications once it has been identified. The LaTiS solution for integrating a wide variety of data models is to return to mathematical fundamentals. The LaTiS data model emphasizes functional relationships between variables. For example, a time series of temperature measurements can be thought of as a function that maps a time to a temperature. With just three constructs: "Scalar" for a single variable, "Tuple" for a collection of variables, and "Function" to represent a set of independent and dependent variables, the LaTiS data model can represent most scientific datasets at a low level that enables uniform data access. Higher level abstractions can be built on top of the basic model to add more meaningful semantics for specific user communities. LaTiS defines its data model in terms of the Unified Modeling Language (UML). It also defines a very thin Java Interface that can be implemented by numerous existing data interfaces (e.g. NetCDF-Java) such that client code can access any dataset via the Java API, independent of the underlying data access mechanism. 
LaTiS also provides a reference implementation of the data model and server framework (with a RESTful service interface) in the Scala programming language. Scala can be thought of as the next generation of Java. It runs on the Java Virtual Machine and can directly use Java code. Scala improves upon Java's object-oriented capabilities and adds support for functional programming paradigms which are particularly well suited for scientific data analysis. The Scala implementation of LaTiS can be thought of as a Domain Specific Language (DSL) which presents an API that better matches the semantics of the problems scientific data users are trying to solve. Instead of working with bytes, ints, or arrays, the data user can directly work with data as "time series" or "spectra". LaTiS provides many layers of abstraction with which users can interact to support a wide variety of data access and analysis needs.
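The three constructs can be mocked up in a few lines. This is a toy Python rendering of the idea (the actual LaTiS model is specified in UML with a Java interface and a Scala implementation), using nearest-sample evaluation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scalar:
    """A single named variable."""
    name: str
    value: float

@dataclass
class TupleVar:
    """A collection of variables, e.g. paired wind components."""
    elements: List[Scalar]

@dataclass
class Function:
    """A mapping from independent samples (e.g. time) to dependent samples."""
    domain: List[float]
    codomain: List[Scalar]

    def __call__(self, t: float) -> Scalar:
        i = min(range(len(self.domain)), key=lambda k: abs(self.domain[k] - t))
        return self.codomain[i]   # nearest-sample evaluation

# A time series of temperatures: a Function mapping time -> temperature.
series = Function(domain=[0.0, 1.0, 2.0],
                  codomain=[Scalar("T", 280.0), Scalar("T", 281.5), Scalar("T", 283.0)])
point = TupleVar([Scalar("u", 3.0), Scalar("v", -1.2)])
print(series(1.2).value)  # → 281.5
```

The point of the functional view is exactly this uniformity: any dataset a client can phrase as domain -> codomain is readable through the same call, regardless of the adapter that produced it.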

  4. Parameterisation of Orographic Cloud Dynamics in a GCM

    DTIC Science & Technology

    2007-01-01

    makes use of both satellite observations of a case study and a simulation in which the Unified Model is nudged towards ERA-40 assimilated winds. ... et al. (1999) predicted the temperature perturbations in the lower stratosphere which can influence polar stratospheric clouds

  5. Unified concept of effective one component plasma for hot dense plasmas

    DOE PAGES

    Clerouin, Jean; Arnault, Philippe; Ticknor, Christopher; ...

    2016-03-17

    Orbital-free molecular dynamics simulations are used to benchmark two popular models for hot dense plasmas: the one component plasma (OCP) and the Yukawa model. A unified concept emerges where an effective OCP (EOCP) is constructed from the short-range structure of the plasma. An unambiguous ionization and the screening length can be defined and used for a Yukawa system, which reproduces the long-range structure with finite compressibility. Similarly, the dispersion relation of longitudinal waves is consistent with the screened model at vanishing wave number but merges with the OCP at high wave number. Additionally, the EOCP reproduces the overall relaxation time scales of the correlation functions associated with ionic motion. Lastly, in the hot dense regime, this unified concept of EOCP can be fruitfully applied to deduce properties such as the equation of state, ionic transport coefficients, and the ion feature in x-ray Thomson scattering experiments.

  6. A Unified Model for Predicting the Open Hole Tensile and Compressive Strengths of Composite Laminates for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Davidson, Paul; Pineda, Evan J.; Heinrich, Christian; Waas, Anthony M.

    2013-01-01

    The open hole tensile and compressive strengths are important design parameters in qualifying fiber reinforced laminates for a wide variety of structural applications in the aerospace industry. In this paper, we present a unified model that can be used for predicting both these strengths (tensile and compressive) using the same set of coupon level, material property data. As a prelude to the unified computational model that follows, simplified approaches, referred to as "zeroth order", "first order", etc. with increasing levels of fidelity are first presented. The results and methods presented are practical and validated against experimental data. They serve as an introductory step in establishing a virtual building block, bottom-up approach to designing future airframe structures with composite materials. The results are useful for aerospace design engineers, particularly those that deal with airframe design.
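    A classical zeroth-order-style estimate for open-hole tensile strength is the Whitney-Nuismer point-stress criterion, which for an isotropic infinite plate reduces to the closed form below. This textbook formula is offered as context, not as the unified model of the paper, and the characteristic distance d0 here is a made-up value:

```python
def point_stress_ratio(R, d0):
    """Whitney-Nuismer point-stress estimate of notched/unnotched strength,
    sigma_N / sigma_0, for a hole of radius R in an isotropic infinite plate."""
    xi = R / (R + d0)
    return 2.0 / (2.0 + xi**2 + 3.0 * xi**4)

d0 = 1.0e-3                               # characteristic distance (m), illustrative
small = point_stress_ratio(1.0e-3, d0)    # 1 mm hole radius
large = point_stress_ratio(10e-3, d0)     # 10 mm hole radius: stronger size effect
```

    The estimate captures the hole-size effect (larger holes knock down strength more), which higher-fidelity models such as the paper's refine with laminate-specific damage mechanics.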

  7. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; et al.

    2008-01-01

    Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art weather research and forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. 
In addition, a comprehensive unified Earth Satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.

  8. A Unified Fault-Tolerance Protocol

    NASA Technical Reports Server (NTRS)

    Miner, Paul; Geser, Alfons; Pike, Lee; Maddalon, Jeffrey

    2004-01-01

    Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.
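The core voting primitive named above, middle value select, can be sketched as follows (a minimal illustration using a trimmed-midpoint definition, which is an assumption of this sketch; the paper's exact formulation and the broadcast cascade are not reproduced here):

```python
def middle_value_select(values, f):
    """Fault-tolerant 'middle value select' voter: discard the f largest
    and f smallest received values, then take the midpoint of what
    remains.  With at most f faulty senders, every discarded extreme
    that a faulty sender could inject is trimmed away, so the result
    is bracketed by values from non-faulty senders."""
    assert len(values) > 2 * f, "need more than 2f values to tolerate f faults"
    trimmed = sorted(values)[f:len(values) - f]
    return (trimmed[0] + trimmed[-1]) / 2

# One faulty sender (1000) among four cannot drag the vote outside
# the range of the good values.
vote = middle_value_select([10, 11, 12, 1000], 1)
```

Here `vote` is 11.5, inside the interval spanned by the non-faulty values 10..12 regardless of what the faulty sender transmits.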

  9. Mathematization Competencies of Pre-Service Elementary Mathematics Teachers in the Mathematical Modelling Process

    ERIC Educational Resources Information Center

    Yilmaz, Suha; Tekin-Dede, Ayse

    2016-01-01

    Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…

  10. Toward a Unified Science Curriculum.

    ERIC Educational Resources Information Center

    Showalter, Victor M.

    The two major models of science curriculum change, textbook revision and national curriculum projects, are derived from, and reinforce, the present curriculum structure. This is undesirable in a time of increasing fluidity and change, because adaptation to new situations is difficult. Unified science, based on the premise that science is a unity,…

  11. In Search of a Unified Model of Language Contact

    ERIC Educational Resources Information Center

    Winford, Donald

    2013-01-01

    Much previous research has pointed to the need for a unified framework for language contact phenomena -- one that would include social factors and motivations, structural factors and linguistic constraints, and psycholinguistic factors involved in processes of language processing and production. While Contact Linguistics has devoted a great deal…

  12. Gapless edges of 2d topological orders and enriched monoidal categories

    NASA Astrophysics Data System (ADS)

    Kong, Liang; Zheng, Hao

    2018-02-01

    In this work, we give a mathematical description of a chiral gapless edge of a 2d topological order (without symmetry). We show that the observables on the 1+1D world sheet of such an edge consist of a family of topological edge excitations, boundary CFTs and walls between boundary CFTs. These observables can be described by a chiral algebra and an enriched monoidal category. This mathematical description automatically includes that of gapped edges as special cases. Therefore, it gives a unified framework to study both gapped and gapless edges. Moreover, the boundary-bulk duality also holds for gapless edges. More precisely, the unitary modular tensor category that describes the 2d bulk phase is exactly the Drinfeld center of the enriched monoidal category that describes the gapless/gapped edge. We propose a classification of all gapped and chiral gapless edges of a given bulk phase. In the end, we explain how modular-invariant bulk rational conformal field theories naturally emerge on certain gapless walls between two trivial phases.

  13. A unified account of tilt illusions, association fields, and contour detection based on elastica.

    PubMed

    Keemink, Sander W; van Rossum, Mark C W

    2016-09-01

    As expressed in the Gestalt law of good continuation, human perception tends to associate stimuli that form smooth continuations. Contextual modulation in primary visual cortex, in the form of association fields, is believed to play an important role in this process. Yet a unified and principled account of the good continuation law on the neural level is lacking. In this study we introduce a population model of primary visual cortex. Its contextual interactions depend on the elastica curvature energy of the smoothest contour connecting oriented bars. As expected, this model leads to association fields consistent with data. However, in addition the model displays tilt illusions for stimulus configurations with gratings and single bars that closely match psychophysics. Furthermore, the model explains not only pop-out of contours amid a variety of backgrounds, but also pop-out of single targets amid a uniform background. We thus propose that elastica is a unifying principle of the visual cortical network. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
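A discrete version of the elastica curvature energy that such models build on can be sketched like this (the turning-angle discretization is a common choice made here for illustration; it is not the authors' implementation):

```python
import math

def elastica_energy(points):
    """Discrete elastica energy of a polyline: sum of squared turning
    angles divided by local segment length, approximating the integral
    of squared curvature along the contour.  Smooth continuations have
    low energy; sharp bends are penalized."""
    energy = 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Signed turning angle wrapped to (-pi, pi].
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        ds = 0.5 * (math.hypot(x1 - x0, y1 - y0) + math.hypot(x2 - x1, y2 - y1))
        energy += turn ** 2 / ds
    return energy

# A straight contour is the "smoothest" continuation (zero energy);
# a right-angled one costs more.
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
bent = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Under this discretization, `elastica_energy(straight)` is exactly zero while the bent contour has positive energy, mirroring the preference for good continuation.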

  14. Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems

    NASA Astrophysics Data System (ADS)

    de Almeida, André LF; Favier, Gérard

    2013-12-01

    This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
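The trilinear (parallel factor, PARAFAC) structure underlying such tensor models can be illustrated as follows (dimensions and factor names are invented for the sketch; the actual SFSM model carries additional code/spreading structure):

```python
import numpy as np

# Hypothetical dimensions: transmit antennas (space), subcarriers
# (frequency), symbol periods (time), and model rank R.
M, F, T, R = 4, 8, 16, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((M, R))   # space factors (e.g. antenna/channel gains)
B = rng.standard_normal((F, R))   # frequency factors (e.g. subcarrier spreading)
C = rng.standard_normal((T, R))   # time factors (e.g. transmitted symbols)

# PARAFAC model: X[m, f, t] = sum_r A[m, r] * B[f, r] * C[t, r]
# Each rank-one term is an outer product of one column from each factor.
X = np.einsum('mr,fr,tr->mft', A, B, C)
```

The point of the decomposition is that, under mild conditions, observing `X` allows the three factor matrices to be recovered jointly (up to scaling/permutation), which is what enables joint symbol-channel-code estimation.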

  15. Topology of an intracellular transduction chain (phototropism of Phycomyces): 1. Joint review of functional, temporal, and spatial aspects.

    PubMed

    Wenzler, D; Reinhardt, M; Fukshansky, L

    2001-08-21

    Two light-induced growth reactions in the unicellular cylindrical sporangiophore of Phycomyces blakesleeanus, vertical growth acceleration under symmetrical irradiation (photomecism) and directional growth under unilateral irradiation (phototropism), share a common input (light perception) as well as a common output (the growth mechanism), but have strongly divergent dynamics and other distinctive features. This divergence culminates in the phototropic paradoxes, the chief of which states that photomecism shows total adaptation while phototropism does not adapt. The basis for this contradiction is that the phototropic transduction chain, unlike that of photomecism, faces a spatially non-uniform stimulus and processes a series of spatial patterns (light and absorption profiles, adaptation profile, etc.). The only way to resolve the paradoxes and correlate features of both responses within a single transduction chain is to assume non-local signal transduction, e.g. cross-talk between different azimuthal locations within the cylindrical cell. On the other hand, establishing the presence of an appropriate cross-talk is equivalent to gaining insight into the topology of the transduction chain. This series of two papers contains a review reconsidering the entire field from this viewpoint (Paper 1) and a mathematical model of pattern transduction which unifies features of phototropism and resolves the paradoxes (Paper 2). At the same time, this is the first "proof of concept" for "activity/pooling (a/p) networks", a specific mathematical apparatus designed to analyse systemic properties and control in metabolic pathways. Copyright 2001 Academic Press.

  16. Measuring and predicting sooting tendencies of oxygenates, alkanes, alkenes, cycloalkanes, and aromatics on a unified scale

    DOE PAGES

    Das, Dhrubajyoti D.; St. John, Peter C.; McEnally, Charles S.; ...

    2017-12-27

    Databases of sooting indices, based on measuring some aspect of sooting behavior in a standardized combustion environment, are useful in providing information on the comparative sooting tendencies of different fuels or pure compounds. However, newer biofuels have varied chemical structures including both aromatic and oxygenated functional groups, which expands the chemical space of relevant compounds. In this work, we propose a unified sooting tendency database for pure compounds, including both regular and oxygenated hydrocarbons, which is based on combining two disparate databases of yield-based sooting tendency measurements in the literature. Unification of the different databases was made possible by leveraging the greater dynamic range of the color ratio pyrometry soot diagnostic. This unified database contains a substantial number of pure compounds (≥ 400 total) from multiple categories of hydrocarbons important in modern fuels and establishes the sooting tendencies of aromatic and oxygenated hydrocarbons on the same numeric scale for the first time. Then, using this unified sooting tendency database, we have developed a predictive model for sooting behavior applicable to a broad range of hydrocarbons and oxygenated hydrocarbons. The model decomposes each compound into single-carbon fragments and assigns a sooting tendency contribution to each fragment based on regression against the unified database. The model’s predictive accuracy (as demonstrated by leave-one-out cross-validation) is comparable to a previously developed, more detailed predictive model. The fitted model provides insight into the effects of chemical structure on soot formation, and cases where its predictions fail reveal the presence of more complicated kinetic sooting mechanisms. Our work will therefore enable the rational design of low-sooting fuel blends from a wide range of feedstocks and chemical functionalities.
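The group-contribution idea, decomposing each compound into fragment counts and regressing per-fragment contributions against measured sooting indices, can be sketched as follows (all counts and index values below are invented placeholders, not data from the unified database):

```python
import numpy as np

# Illustrative fragment-count matrix: rows are compounds, columns are
# single-carbon fragment types (made-up categories for the sketch).
fragments = np.array([
    [6, 0, 0],   # e.g. a saturated alkane: six CH-type fragments
    [4, 2, 0],   # partly unsaturated
    [0, 6, 0],   # aromatic-like ring carbons
    [4, 0, 2],   # oxygenated carbons
], dtype=float)
sooting_index = np.array([10.0, 25.0, 60.0, 5.0])  # invented measurements

# Fit one additive contribution per fragment type by least squares.
contrib, *_ = np.linalg.lstsq(fragments, sooting_index, rcond=None)

def predict(counts):
    """Predicted sooting tendency = dot product of fragment counts
    with the fitted per-fragment contributions."""
    return float(np.asarray(counts, dtype=float) @ contrib)
```

A new compound is then scored simply by counting its fragments and calling `predict`, which is what makes the approach cheap to apply across a broad chemical space.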

  17. Measuring and predicting sooting tendencies of oxygenates, alkanes, alkenes, cycloalkanes, and aromatics on a unified scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Dhrubajyoti D.; St. John, Peter C.; McEnally, Charles S.

    Databases of sooting indices, based on measuring some aspect of sooting behavior in a standardized combustion environment, are useful in providing information on the comparative sooting tendencies of different fuels or pure compounds. However, newer biofuels have varied chemical structures including both aromatic and oxygenated functional groups, which expands the chemical space of relevant compounds. In this work, we propose a unified sooting tendency database for pure compounds, including both regular and oxygenated hydrocarbons, which is based on combining two disparate databases of yield-based sooting tendency measurements in the literature. Unification of the different databases was made possible by leveraging the greater dynamic range of the color ratio pyrometry soot diagnostic. This unified database contains a substantial number of pure compounds (≥ 400 total) from multiple categories of hydrocarbons important in modern fuels and establishes the sooting tendencies of aromatic and oxygenated hydrocarbons on the same numeric scale for the first time. Then, using this unified sooting tendency database, we have developed a predictive model for sooting behavior applicable to a broad range of hydrocarbons and oxygenated hydrocarbons. The model decomposes each compound into single-carbon fragments and assigns a sooting tendency contribution to each fragment based on regression against the unified database. The model’s predictive accuracy (as demonstrated by leave-one-out cross-validation) is comparable to a previously developed, more detailed predictive model. The fitted model provides insight into the effects of chemical structure on soot formation, and cases where its predictions fail reveal the presence of more complicated kinetic sooting mechanisms. Our work will therefore enable the rational design of low-sooting fuel blends from a wide range of feedstocks and chemical functionalities.

  18. A Prototype Symbolic Model of Canonical Functional Neuroanatomy of the Motor System

    PubMed Central

    Rubin, Daniel L.; Halle, Michael; Musen, Mark; Kikinis, Ron

    2008-01-01

    Recent advances in bioinformatics have opened entire new avenues for organizing, integrating and retrieving neuroscientific data, in a digital, machine-processable format, which can be at the same time understood by humans, using ontological, symbolic data representations. Declarative information stored in ontological format can be perused and maintained by domain experts, interpreted by machines, and serve as basis for a multitude of decision-support, computerized simulation, data mining, and teaching applications. We have developed a prototype symbolic model of canonical neuroanatomy of the motor system. Our symbolic model is intended to support symbolic lookup, logical inference and mathematical modeling by integrating descriptive, qualitative and quantitative functional neuroanatomical knowledge. Furthermore, we show how our approach can be extended to modeling impaired brain connectivity in disease states, such as common movement disorders. In developing our ontology, we adopted a disciplined modeling approach, relying on a set of declared principles, a high-level schema, Aristotelian definitions, and a frame-based authoring system. These features, along with the use of the Unified Medical Language System (UMLS) vocabulary, enable the alignment of our functional ontology with an existing comprehensive ontology of human anatomy, and thus allow for combining the structural and functional views of neuroanatomy for clinical decision support and neuroanatomy teaching applications. Although the scope of our current prototype ontology is limited to a particular functional system in the brain, it may be possible to adapt this approach for modeling other brain functional systems as well. PMID:18164666

  19. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    ERIC Educational Resources Information Center

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  20. Elementary Preservice Teachers' and Elementary Inservice Teachers' Knowledge of Mathematical Modeling

    ERIC Educational Resources Information Center

    Schwerdtfeger, Sara

    2017-01-01

    This study examined the differences in knowledge of mathematical modeling between a group of elementary preservice teachers and a group of elementary inservice teachers. Mathematical modeling has recently come to the forefront of elementary mathematics classrooms because of the call to add mathematical modeling tasks in mathematics classes through…

  1. A Case Study of Teachers' Development of Well-Structured Mathematical Modelling Activities

    ERIC Educational Resources Information Center

    Stohlmann, Micah; Maiorca, Cathrine; Allen, Charlie

    2017-01-01

    This case study investigated how three teachers developed mathematical modelling activities integrated with content standards through participation in a course on mathematical modelling. The class activities involved experiencing a mathematical modelling activity, reading and rating example mathematical modelling activities, reading articles about…

  2. Improving the accuracy in detection of clustered microcalcifications with a context-sensitive classification model.

    PubMed

    Wang, Juan; Nishikawa, Robert M; Yang, Yongyi

    2016-01-01

    In computer-aided detection of microcalcifications (MCs), the detection accuracy is often compromised by frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors for accurate MC detection. To accommodate FPs caused by different factors in a mammogram image, the authors developed a classification model to which the input features were adapted according to the image context at a detection location. For this purpose, the input features were defined in two groups, of which one group was derived from the image intensity pattern in a local neighborhood of a detection location, and the other group was used to characterize how a MC is different from its structural background. Owing to the distinctive effect of linear structures in the detector response, the authors introduced a dummy variable into the unified classifier model, which allowed the input features to be adapted according to the image context at a detection location (i.e., presence or absence of linear structures). To suppress the effect of inhomogeneity in tissue background, the input features were extracted from different domains aimed for enhancing MCs in a mammogram image. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model by two widely used machine learning algorithms, namely, a support vector machine (SVM) classifier and an Adaboost classifier. In the experiment, the proposed approach was tested for two representative MC detectors in the literature [difference-of-Gaussians (DoG) detector and SVM detector]. 
The detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases). The FROC analysis results show that the proposed unified classification approach can significantly improve the detection accuracy of two MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers can reduce their FP rate to a similar level in the output of the two detectors. In particular, with true-positive rate at 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost), respectively; similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost), respectively. Similar FP reduction results were also achieved on FFDM images for the two MC detectors. The proposed unified classification approach can be effective for discriminating MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and can be applicable for further improving the detection accuracy of existing MC detectors.
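The dummy-variable mechanism for adapting features to image context can be sketched as follows (a hypothetical feature layout; the authors' actual feature sets and classifiers are more elaborate):

```python
def context_features(x, has_linear_structure):
    """Build a context-sensitive feature vector for one detection site.

    A dummy variable z (1 if a linear structure is present at the site,
    else 0) is appended alongside interaction terms z * x, so a single
    unified classifier can effectively learn different weights for the
    two contexts: in the z = 0 context the interaction terms vanish,
    and in the z = 1 context they add a context-specific correction.
    The feature values themselves are placeholders here."""
    z = 1.0 if has_linear_structure else 0.0
    return [z] + list(x) + [z * v for v in x]

# Same raw features, two different contexts -> two different inputs.
with_line = context_features([0.5, 2.0], True)
without_line = context_features([0.5, 2.0], False)
```

This is the standard dummy-variable/interaction construction from regression modeling; it lets one model subsume two context-specific models without training them separately.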

  3. Food-web based unified model of macro- and microevolution.

    PubMed

    Chowdhury, Debashish; Stauffer, Dietrich

    2003-10-01

    We incorporate the generic hierarchical architecture of food webs into a "unified" model that describes both micro- and macroevolution within a single theoretical framework. This model describes microevolution in detail by accounting for the birth, ageing, and natural death of individual organisms as well as prey-predator interactions on a hierarchical dynamic food web. It also provides a natural description of random mutations and speciation (origination) of species as well as their extinctions. The distribution of lifetimes of species follows an approximate power law only over a limited regime.

  4. Vector Autoregression, Structural Equation Modeling, and Their Synthesis in Neuroimaging Data Analysis

    PubMed Central

    Chen, Gang; Glen, Daniel R.; Saad, Ziad S.; Hamilton, J. Paul; Thomason, Moriah E.; Gotlib, Ian H.; Cox, Robert W.

    2011-01-01

    Vector autoregression (VAR) and structural equation modeling (SEM) are two popular brain-network modeling tools. VAR, which is a data-driven approach, assumes that connected regions exert time-lagged influences on one another. In contrast, the hypothesis-driven SEM is used to validate an existing connectivity model where connected regions have contemporaneous interactions among them. We present the two models in detail and discuss their applicability to FMRI data, and interpretational limits. We also propose a unified approach that models both lagged and contemporaneous effects. The unifying model, structural vector autoregression (SVAR), may improve statistical and explanatory power, and avoids some prevalent pitfalls that can occur when VAR and SEM are utilized separately. PMID:21975109
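The structural VAR idea, combining SEM-style contemporaneous paths with VAR-style lagged ones, can be sketched as follows (the matrices and noise scale are illustrative values, not estimates from any dataset):

```python
import numpy as np

# Illustrative 2-region SVAR(1):
#   x_t = A0 @ x_t + A1 @ x_{t-1} + e_t
# which in reduced form is
#   x_t = inv(I - A0) @ (A1 @ x_{t-1} + e_t).
# A0 carries the contemporaneous (SEM-like) paths, A1 the lagged
# (VAR-like) influences.
A0 = np.array([[0.0, 0.3],    # region 1 <- region 2, same time point
               [0.0, 0.0]])
A1 = np.array([[0.4, 0.0],    # lagged self- and cross-influences
               [0.2, 0.3]])
rng = np.random.default_rng(1)
reduced = np.linalg.inv(np.eye(2) - A0)

# Simulate the process forward from the origin.
x = np.zeros(2)
series = []
for _ in range(200):
    x = reduced @ (A1 @ x + 0.1 * rng.standard_normal(2))
    series.append(x)
series = np.asarray(series)
```

Fitting an SVAR reverses this direction: given `series`, one estimates both `A0` and `A1`, which is how the model captures lagged and contemporaneous effects in one framework.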

  5. Mathematical Modelling Approach in Mathematics Education

    ERIC Educational Resources Information Center

    Arseven, Ayla

    2015-01-01

    The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…

  6. Mathematical Modelling in the Junior Secondary Years: An Approach Incorporating Mathematical Technology

    ERIC Educational Resources Information Center

    Lowe, James; Carter, Merilyn; Cooper, Tom

    2018-01-01

    Mathematical models are conceptual processes that use mathematics to describe, explain, and/or predict the behaviour of complex systems. This article is written for teachers of mathematics in the junior secondary years (including out-of-field teachers of mathematics) who may be unfamiliar with mathematical modelling, to explain the steps involved…

  7. Mathematics teachers' conceptions about modelling activities and its reflection on their beliefs about mathematics

    NASA Astrophysics Data System (ADS)

    Shahbari, Juhaina Awawdeh

    2018-07-01

    The current study examines whether the engagement of mathematics teachers in modelling activities, and subsequent changes in their conceptions about these activities, affect their beliefs about mathematics. The sample comprised 52 mathematics teachers working in small groups on four modelling activities. The data were collected from teachers' reports about features of each activity, interviews, and questionnaires on teachers' beliefs about mathematics. The findings indicated changes in teachers' conceptions about the modelling activities. Most teachers referred to the first activity as a mathematical problem but emphasized only the mathematical notions or the mathematical operations in the modelling process; changes in their conceptions were gradual. Most of the teachers referred to the fourth activity as a mathematical problem and emphasized features of the whole modelling process. The results of the interviews indicated that changes in the teachers' conceptions can be attributed to the structure of the activities, group discussions, solution paths, and elicited models. These changes regarding modelling activities were reflected in teachers' beliefs about mathematics. The quantitative findings indicated that the teachers developed more constructive beliefs about mathematics after engagement in the modelling activities and that the difference was significant; however, there was no significant difference regarding changes in their traditional beliefs.

  8. Economics and econophysics in the era of Big Data

    NASA Astrophysics Data System (ADS)

    Cheong, Siew Ann

    2016-12-01

    There is an undeniable disconnect between theory-heavy economics and the real world, and some cross-pollination of ideas with econophysics, which is more balanced between data and models, might help economics along the way to becoming a truly scientific enterprise. With the coming of the era of Big Data, this transformation of economics into a data-driven science is becoming more urgent. In this article, I use the story of Kepler's discovery of his three laws of planetary motion to enlarge the framework of the scientific approach, from one that focuses on experimental sciences, to one that accommodates observational sciences, and further to one that embraces data mining and machine learning. I distinguish between the ontological values of Kepler's laws vis-à-vis Newton's laws, and argue that the latter are more fundamental because they are able to explain the former. I then argue that the fundamental laws of economics lie not in mathematical equations, but in models of adaptive economic agents. With this shift in mindset, it becomes possible to think about how interactions between agents can lead to the emergence of multiple stable states and critical transitions, and to complex adaptive policies and regulations that might actually work in the real world. Finally, I discuss how Big Data, exploratory agent-based modeling, and predictive agent-based modeling can come together in a unified framework to make economics a true science.

  9. Have We Achieved a Unified Model of Photoreceptor Cell Fate Specification in Vertebrates?

    PubMed Central

    Raymond, Pamela A.

    2008-01-01

    How does a retinal progenitor choose to differentiate as a rod or a cone and, if it becomes a cone, which one of their different subtypes? The mechanisms of photoreceptor cell fate specification and differentiation have been extensively investigated in a variety of animal model systems, including human and non-human primates, rodents (mice and rats), chickens, frogs (Xenopus) and fish. It appears timely to discuss whether it is possible to synthesize the resulting information into a unified model applicable to all vertebrates. In this review we focus on several widely used experimental animal model systems to highlight differences in photoreceptor properties among species, the diversity of developmental strategies and solutions that vertebrates use to create retinas with photoreceptors that are adapted to the visual needs of their species, and the limitations of the methods currently available for the investigation of photoreceptor cell fate specification. Based on these considerations, we conclude that we are not yet ready to construct a unified model of photoreceptor cell fate specification in the developing vertebrate retina. PMID:17466954

  10. Unification of the general non-linear sigma model and the Virasoro master equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boer, J. de; Halpern, M.B.

    1997-06-01

    The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L_{ij} ∂x^{i} ∂x^{j} in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L_{ij} couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.

  11. An LMI approach to design H(infinity) controllers for discrete-time nonlinear systems based on unified models.

    PubMed

    Liu, Meiqin; Zhang, Senlin

    2008-10-01

    A unified neural network model termed the standard neural network model (SNNM) is advanced. Based on the robust L(2)-gain (i.e. robust H(infinity) performance) analysis of the SNNM with external disturbances, a state-feedback control law is designed for the SNNM to stabilize the closed-loop system and eliminate the effect of external disturbances. The control design constraints are shown to be a set of linear matrix inequalities (LMIs) which can be easily solved by various convex optimization algorithms (e.g. interior-point algorithms) to determine the control law. Most discrete-time recurrent neural networks (RNNs) and discrete-time nonlinear systems modelled by neural networks or Takagi-Sugeno (T-S) fuzzy models can be transformed into SNNMs, so that robust H(infinity) performance analysis or robust H(infinity) controller synthesis can be carried out in a unified SNNM framework. Finally, some examples are presented to illustrate the wide applicability of SNNMs to nonlinear systems, and the proposed approach is compared with related methods reported in the literature.
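The closed-loop idea behind such state-feedback designs can be sketched as follows (the system matrices and gain below are hand-picked for illustration; the paper obtains the gain by solving LMIs rather than by inspection):

```python
import numpy as np

# Discrete-time linear(ized) system x[k+1] = A x[k] + B u[k] with
# state feedback u = K x; the closed loop is x[k+1] = (A + B K) x[k].
# Stability of the closed loop requires the spectral radius of
# A + B K to be strictly below 1.
A = np.array([[1.2, 0.5],
              [0.0, 0.9]])   # open loop is unstable (eigenvalue 1.2)
B = np.array([[1.0],
              [0.5]])
K = np.array([[-0.6, -0.5]])  # illustrative stabilizing gain

closed_loop = A + B @ K

def spectral_radius(M):
    """Largest eigenvalue magnitude of a square matrix."""
    return max(abs(np.linalg.eigvals(M)))
```

With these values the open loop has spectral radius 1.2 while the closed loop drops below 1, which is the basic stabilization property the LMI machinery certifies (together with the disturbance-attenuation bound) for the full SNNM.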

  12. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict features of pixels, e.g., gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, it generalizes the representative region-based level set method from scalar to tensor. The proposed model has four main advantages compared with the traditional representative method. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on a weighted distance definition, the model possesses the capacity to cope with data varying from scalar to vector, and then to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set method.

  13. Conceptualising paediatric health disparities: a metanarrative systematic review and unified conceptual framework.

    PubMed

    Ridgeway, Jennifer L; Wang, Zhen; Finney Rutten, Lila J; van Ryn, Michelle; Griffin, Joan M; Murad, M Hassan; Asiedu, Gladys B; Egginton, Jason S; Beebe, Timothy J

    2017-08-04

    There exists a paucity of work in the development and testing of theoretical models specific to childhood health disparities, even though such disparities have been linked to the prevalence of adult health disparities, including high rates of chronic disease. We conducted a systematic review and thematic analysis of existing models of health disparities specific to children to inform development of a unified conceptual framework. We systematically reviewed articles reporting theoretical or explanatory models of disparities on a range of outcomes related to child health. We searched Ovid Medline In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid Embase, Ovid Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, and Scopus (database inception to 9 July 2015). A metanarrative approach guided the analysis process. A total of 48 studies presenting 48 models were included. This systematic review found multiple models but no consensus on one approach. However, we did discover a fair amount of overlap, such that the 48 models reviewed converged into a unified conceptual framework. The majority of models included factors in three domains: individual characteristics and behaviours (88%), healthcare providers and systems (63%), and environment/community (56%). Only 38% of models included factors in the health and public policies domain. A disease-agnostic unified conceptual framework may inform integration of existing knowledge of child health disparities and guide future research. This multilevel framework can focus attention among clinical, basic and social science research on the relationships between policy, social factors, health systems and the physical environment that impact children's health outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. The coherence problem with the Unified Neutral Theory of Biodiversity

    Treesearch

    James S. Clark

    2012-01-01

    The Unified Neutral Theory of Biodiversity (UNTB), proposed as an alternative to niche theory, has been viewed as a theory that species coexist without niche differences, without fitness differences, or with equal probability of success. Support is claimed when models lacking species differences predict highly aggregated metrics, such as species abundance distributions...

  15. The Unified Core: A "Major" Learning Community Model in Action

    ERIC Educational Resources Information Center

    Powell, Gwynn M.; Johnson, Corey W.; James, J. Joy; Dunlap, Rudy

    2011-01-01

    The Unified Core is an innovative approach to higher education that blends content through linked courses within a major to create a community of learners. This article offers the theoretical background for the approach, describes the implementation, and offers suggestions to educators who would like to design their own version of this innovative…

  16. The Impact of Investments in Additional Preparation on Unified State Exam Results

    ERIC Educational Resources Information Center

    Prakhov, Ilya Arkadyevich

    2015-01-01

    The paper proposes a model of educational strategies for college entrants that makes it possible to assess the investment efficiency in additional preparation as evidenced by the Unified State Exam [USE] scores. It was found that college entrants still use traditional forms of preparation despite the new institutional admission conditions at…

  17. Construction of Critically Transformative Education in the Tucson Unified School District

    ERIC Educational Resources Information Center

    Romero, Augustine F.; Sánchez, H. T.

    2014-01-01

    A critically transformative education continues to be at the center of Tucson Unified School District's (TUSD) equity and academic excellence mission. Through the use of the Social Transformation paradigm and the lessons learned from the implementation of the Critically Compassionate Intellectualism Model, TUSD once again created a cutting edge…

  18. In Search of the Unifying Principles of Psychotherapy: Conceptual, Empirical, and Clinical Convergence

    ERIC Educational Resources Information Center

    Magnavita, Jeffrey J.

    2006-01-01

    The search for the principles of unified psychotherapy is an important stage in the advancement of the field. Converging evidence from various streams of clinical science allows the identification of some of the major domains of human functioning, adaptation, and dysfunction. These principles, supported by animal modeling, neuroscience, and…

  19. The 24-Hour Mathematical Modeling Challenge

    ERIC Educational Resources Information Center

    Galluzzo, Benjamin J.; Wendt, Theodore J.

    2015-01-01

    Across the mathematics curriculum there is a renewed emphasis on applications of mathematics and on mathematical modeling. Providing students with modeling experiences beyond the ordinary classroom setting remains a challenge, however. In this article, we describe the 24-hour Mathematical Modeling Challenge, an extracurricular event that exposes…

  20. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    NASA Astrophysics Data System (ADS)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-12-01

    In modern steelmaking, hot metal is converted into steel primarily in converter processes, such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for a top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag it is the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.
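    The core idea of computing reaction equilibria by minimizing Gibbs energy can be sketched in a few lines. The toy example below is not the paper's PGE/MQM implementation: the species set, the dimensionless standard-state potentials, and the ideal-mixture assumption are all illustrative. It finds the equilibrium composition of a water-gas shift mixture by constrained minimization:

```python
# Minimal Gibbs-energy-minimization sketch (ideal mixture; the mu0/RT
# values are hypothetical placeholders, NOT real thermodynamic data).
import numpy as np
from scipy.optimize import minimize

species = ["CO", "H2O", "CO2", "H2"]
mu0 = np.array([-10.0, -20.0, -25.0, -5.0])  # mu0/RT, illustrative only
# element matrix A[e, i]: rows are C, O, H
A = np.array([[1, 0, 1, 0],   # C
              [1, 1, 2, 0],   # O
              [0, 2, 0, 2]])  # H
n0 = np.array([1.0, 1.0, 0.0, 0.0])  # initial moles: CO + H2O
b = A @ n0                           # conserved element totals

def gibbs(n):
    # total dimensionless Gibbs energy of an ideal mixture
    return float(n @ (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(4, 0.5),
               bounds=[(1e-9, None)] * 4,
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               method="SLSQP")
n_eq = res.x
print(dict(zip(species, n_eq.round(4))))
```

    The element-balance constraint plays the role of the conservation equations in the full converter model; a production implementation would add real thermodynamic data and non-ideal activity models such as the MQM.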

  1. Solution of Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Turner, L. R.

    1960-01-01

    The problem of solving systems of nonlinear equations has been relatively neglected in the mathematical literature, especially in the textbooks, in comparison to the corresponding linear problem. Moreover, treatments that have an appearance of generality fail to discuss the nature of the solutions and the possible pitfalls of the methods suggested. Probably it is unrealistic to expect that a unified and comprehensive treatment of the subject will evolve, owing to the great variety of situations possible, especially in the applied field where some requirement of human or mechanical efficiency is always present. Therefore we attempt here simply to pose the problem and to describe and partially appraise the methods of solution currently in favor.

  2. Integrative Approach for a Transformative Freshman-Level STEM Curriculum

    PubMed Central

    Curran, Kathleen L.; Olsen, Paul E.; Nwogbaga, Agashi P.; Stotts, Stephanie

    2016-01-01

    In 2014 Wesley College adopted a unified undergraduate program of evidence-based high-impact teaching practices. Through foundation, federal, and state grant support, the college completely revised its academic core curriculum and strengthened its academic support structures by including a comprehensive early alert system for at-risk students. In this core, science, technology, engineering, and mathematics (STEM) faculty developed fresh manifestations of integrated concept-based introductory courses and revised upper-division STEM courses around student-centered learning. STEM majors can participate in specifically designed paid undergraduate research experiences in directed research elective courses. Such a college-wide multi-tiered approach results in institutional cultural change. PMID:27064213

  3. A description of a system of programs for mathematically processing on unified series (YeS) computers photographic images of the Earth taken from spacecraft

    NASA Technical Reports Server (NTRS)

    Zolotukhin, V. G.; Kolosov, B. I.; Usikov, D. A.; Borisenko, V. I.; Mosin, S. T.; Gorokhov, V. N.

    1980-01-01

    A description is presented of a batch of programs for the YeS-1040 computer combined into an automated system for processing photographic (and video) images of the Earth's surface taken from spacecraft. Individual programs are presented, with a detailed discussion of the algorithmic and programming facilities needed by the user. The basic principles for assembling the system, and the control programs, are included. The exchange format is also described; within its framework, any programs recommended for the processing system will be cataloged in the future.

  4. Groundwater modelling in decision support: reflections on a unified conceptual framework

    NASA Astrophysics Data System (ADS)

    Doherty, John; Simmons, Craig T.

    2013-11-01

    Groundwater models are commonly used as a basis for environmental decision-making. There has been discussion and debate in recent times regarding the issue of model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described which is intended to underpin groundwater modelling in decision support, with a direct focus on matters regarding model simplicity and complexity.

  5. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  6. Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-11-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  7. User's manual for UCAP: Unified Counter-Rotation Aero-Acoustics Program

    NASA Technical Reports Server (NTRS)

    Culver, E. M.; Mccolgan, C. J.

    1993-01-01

    This is the user's manual for the Unified Counter-rotation Aeroacoustics Program (UCAP), the counter-rotation derivative of the UAAP (Unified Aero-Acoustic Program). The purpose of this program is to predict steady and unsteady air loading on the blades and the noise produced by a counter-rotation Prop-Fan. The aerodynamic method is based on linear potential theory with corrections for nonlinearity associated with axial flux induction, vortex lift on the blades, and rotor-to-rotor interference. The theory for acoustics and the theory for individual blade loading and wakes are derived in Unified Aeroacoustics Analysis for High Speed Turboprop Aerodynamics and Noise, Volume 1 (NASA CR-4329). This user's manual also includes a brief explanation of the theory used for the modelling of counter-rotation.

  8. User's manual for UCAP: Unified Counter-Rotation Aero-Acoustics Program

    NASA Astrophysics Data System (ADS)

    Culver, E. M.; McColgan, C. J.

    1993-04-01

    This is the user's manual for the Unified Counter-rotation Aeroacoustics Program (UCAP), the counter-rotation derivative of the UAAP (Unified Aero-Acoustic Program). The purpose of this program is to predict steady and unsteady air loading on the blades and the noise produced by a counter-rotation Prop-Fan. The aerodynamic method is based on linear potential theory with corrections for nonlinearity associated with axial flux induction, vortex lift on the blades, and rotor-to-rotor interference. The theory for acoustics and the theory for individual blade loading and wakes are derived in Unified Aeroacoustics Analysis for High Speed Turboprop Aerodynamics and Noise, Volume 1 (NASA CR-4329). This user's manual also includes a brief explanation of the theory used for the modelling of counter-rotation.

  9. New Constraints on the Unified Model of Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Maiolino, R.; Ruiz, M.; Rieke, G. H.; Keller, L. D.

    1995-06-01

    We present new 10 microns (N-band) photometry for 70 Seyfert galaxies, 43 of them previously unobserved. These observations, together with those collected from the literature, complete the 10 microns photometry for the CfA Sy galaxies and cover 80% of the Sy found in the RSA and 70% of the Sy in the IRAS 12 microns sample. From this data set, we find that Sy not showing any evidence for broad lines are systematically weaker in 10 microns nuclear emission than Sy nuclei having broad lines. This result may indicate the existence of a group of very low-luminosity Sy2 galaxies that do not have Sy1 counterparts in equal numbers, contrary to the strict unified theory. Alternatively, the result can be reconciled with unified theories if a specific type of geometry is assumed for the circumnuclear obscuring material. By comparing the 10 microns ground-based observations with the IRAS 12 microns fluxes, we also study the properties of the extended mid-IR emission, i.e., the star-forming activity of the host galaxy of the Sy nucleus. We find Sy2 to lie preferentially in galaxies experiencing enhanced star-forming activity, while Sy1 lie in normal or quiescent galaxies. This result appears to be inconsistent with the strict unified model, since the host galaxy properties should be independent of the orientation of a circumnuclear torus and therefore should be independent of nuclear type. Our finding could be explained by adding to the unified model a link between star-forming activity and the amount of obscuring material collected in the circumnuclear region.

  10. Imaging of 2-D multichannel land seismic data using an iterative inversion-migration scheme, Naga Thrust and Fold Belt, Assam, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, Priyank; Dasgupta, Rahul

    2010-05-01

    We demonstrate that imaging of 2-D multichannel land seismic data can be effectively accomplished by a combination of reflection traveltime tomography and pre-stack depth migration (PSDM); we refer to the combined process as "the unified imaging". The unified imaging comprises cyclic runs of joint reflection and direct arrival inversion and pre-stack depth migration. From one cycle to another, both the inversion and the migration provide mutual feedbacks that are guided by the geological interpretation. The unified imaging is implemented in two broad stages. The first stage is similar to the conventional imaging except that it involves a significant use of the velocity model from the inversion of the direct arrivals for both datuming and stacking velocity analysis. The first stage ends with an initial interval velocity model (from the stacking velocity analysis) and a corresponding depth migrated image. The second stage updates the velocity model and the depth image from the first stage in a cyclic manner; a single cycle comprises a single run of reflection traveltime inversion followed by PSDM. Interfaces used in the inversion are interpretations of the PSDM image in the previous cycle, and the velocity model used in PSDM is from the joint inversion in the current cycle. Additionally, in every cycle interpreted horizons in the stacked data are inverted as zero-offset reflections for constraining the interfaces; the velocity model is maintained stationary for the zero-offset inversion. A congruency factor, j, which measures the discrepancy between interfaces from the interpretation of the PSDM image and their corresponding counterparts from the inversion of the zero-offset reflections within assigned uncertainties, is computed in every cycle. A value of unity for j indicates that images from both the inversion and the migration are equivalent; at this point the unified imaging is said to have converged and is halted.
    We apply the unified imaging to 2-D multichannel seismic data from the Naga Thrust and Fold Belt (NTFB), India, where several exploratory wells targeting sub-thrust leads in the footwall have failed in the last decade. This failure is speculatively due to incorrect depth images, which are in turn attributed to incorrect velocity models developed using conventional methods. The 2-D seismic data in this study were acquired perpendicular to the trend of the NTFB, where the outcropping hanging wall has a topographic culmination. The acquisition style is split-spread with 30 m shot and receiver spacing and a nominal fold of 90. The data are recorded with a sample interval of 2 ms. Overall, the data have a moderate signal-to-noise ratio and a broad frequency bandwidth of 8-80 Hz. The seismic line contains the failed exploratory well in its central part. The final results from the unified imaging (both the depth image and the corresponding velocity model) suggest the presence of a triangle zone, which was previously undiscovered. Conventional imaging had falsely portrayed the triangle zone as a structural high, which was interpreted as an anticline. As a result, the exploratory well, meant to target the anticline, met with pressure changes which were neither expected nor explained. The unified imaging results not only explain the observations in the well but also reveal new leads in the region. The velocity model from the unified imaging was also found to be adequate for frequency-domain full-waveform imaging of the hanging wall. Results from waveform inversion are further corroborated by the geological interpretation of the exploratory well.

  11. REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory

    NASA Astrophysics Data System (ADS)

    Tin, Chung; Poon, Chi-Sang

    2005-09-01

    Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally for robotic manipulator applications. Modular internal models' architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. Possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems—such as sensorimotor prediction or the resolution of vestibular sensory ambiguity—is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in future.
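    As a concrete illustration of the state-estimation machinery the review connects to sensorimotor prediction, here is a minimal scalar Kalman filter. The constant-state toy model and the noise variances are assumptions for illustration, not values from the paper:

```python
# Minimal scalar Kalman filter: estimate a constant hidden state from
# noisy measurements (an elementary instance of observer-based state
# estimation, assumed parameters throughout).
import random

random.seed(0)
true_state = 2.0
q, r = 1e-5, 0.25          # process and measurement noise variances (assumed)
x_hat, p = 0.0, 1.0        # initial estimate and its variance

for _ in range(200):
    z = true_state + random.gauss(0.0, r ** 0.5)   # noisy measurement
    p = p + q                                       # predict step
    k = p / (p + r)                                 # Kalman gain
    x_hat = x_hat + k * (z - x_hat)                 # measurement update
    p = (1.0 - k) * p

print(round(x_hat, 2))
```

    The same predict/update structure, extended to nonlinear dynamics, underlies the extended Kalman filter the review discusses for vestibular sensory ambiguity.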

  12. Scaling Laws of Discrete-Fracture-Network Models

    NASA Astrophysics Data System (ADS)

    Philippe, D.; Olivier, B.; Caroline, D.; Jean-Raynald, D.

    2006-12-01

    The statistical description of fracture networks through scale still remains a concern for geologists, considering the complexity of fracture networks. A challenging task of the last 20 years of studies has been to find a solid and verifiable rationale for the trivial observation that fractures exist everywhere and at all sizes. The emergence of fractal models and power-law distributions quantifies this fact, and postulates in some ways that small-scale fractures are genetically linked to their larger-scale relatives. But the validation of these scaling concepts still remains an issue, considering the unreachable amount of information that would be necessary with regard to the complexity of natural fracture networks. Beyond the theoretical interest, a scaling law is a basic and necessary ingredient of Discrete-Fracture-Network (DFN) models that are used for many environmental and industrial applications (groundwater resources, mining industry, assessment of the safety of deep waste disposal sites, ...). Indeed, such a function is necessary to assemble scattered data, taken at different scales, into a unified scaling model, and to interpolate fracture densities between observations. In this study, we discuss some important issues related to the scaling laws of DFN models:
    - We first describe a complete theoretical and mathematical framework that takes account of both the fracture-size distribution and the fracture clustering through scales (fractal dimension).
    - We review the scaling laws that have been obtained, and we discuss the ability of fracture datasets to really constrain the parameters of the DFN model.
    - Finally, we discuss the limits of scaling models.
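    The fracture-size distribution in such a DFN model is typically a truncated power law. A minimal sketch of drawing fracture lengths from one by inverse-transform sampling follows; the exponent and the cutoffs are illustrative assumptions, not values from this study:

```python
# Draw fracture lengths from a truncated power-law density n(l) ~ l**(-a)
# on [l_min, l_max] via the inverse CDF (all parameter values assumed).
import random

random.seed(1)
a = 2.5                     # power-law exponent, illustrative
l_min, l_max = 1.0, 100.0   # lower/upper cutoffs, illustrative

def sample_length(u):
    # inverse CDF of p(l) proportional to l**(-a) on [l_min, l_max]
    e = 1.0 - a
    c = l_min ** e + u * (l_max ** e - l_min ** e)
    return c ** (1.0 / e)

lengths = [sample_length(random.random()) for _ in range(10000)]
print(min(lengths), max(lengths))
```

    Pairing such a size distribution with a fractal spatial density of fracture centers yields the two ingredients the framework above combines.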

  13. The Relationship between Students' Performance on Conventional Standardized Mathematics Assessments and Complex Mathematical Modeling Problems

    ERIC Educational Resources Information Center

    Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.

    2016-01-01

    Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…

  14. Annual Perspectives in Mathematics Education 2016: Mathematical Modeling and Modeling Mathematics

    ERIC Educational Resources Information Center

    Hirsch, Christian R., Ed.; McDuffie, Amy Roth, Ed.

    2016-01-01

    Mathematical modeling plays an increasingly important role both in real-life applications--in engineering, business, the social sciences, climate study, advanced design, and more--and within mathematics education itself. This 2016 volume of "Annual Perspectives in Mathematics Education" ("APME") focuses on this key topic from a…

  15. Rare events in finite and infinite dimensions

    NASA Astrophysics Data System (ADS)

    Reznikoff, Maria G.

    Thermal noise introduces stochasticity into deterministic equations and makes possible events which are never seen in the zero temperature setting. The driving force behind the thesis work is a desire to bring analysis and probability to bear on a class of relevant and intriguing physical problems, and in so doing, to allow applications to drive the development of new mathematical theory. The unifying theme is the study of rare events under the influence of small, random perturbations, and the manifold mathematical problems which ensue. In the first part, we apply large deviation theory and prefactor estimates to a coherent rotation micromagnetic model in order to analyze thermally activated magnetic switching. We consider recent physical experiments and the mathematical questions "asked" by them. A stochastic resonance type phenomenon is discovered, leading to the definition of finite temperature astroids. Non-Arrhenius behavior is discussed. The analysis is extended to ramped astroids. In addition, we discover that for low damping and ultrashort pulses, deterministic effects can override thermal effects, in accord with very recent ultrashort pulse experiments. Even more interesting, perhaps, is the study of large deviations in the infinite dimensional context, i.e. in spatially extended systems. Inspired by recent numerical investigations, we study the stochastically perturbed Allen Cahn and Cahn Hilliard equations. For the Allen Cahn equation, we study the action minimization problem (a deterministic variational problem) and prove the action scaling in four parameter regimes, via upper and lower bounds. The sharp interface limit is studied. We formally derive a reduced action functional which lends insight into the connection between action minimization and curvature flow. For the Cahn Hilliard equation, we prove upper and lower bounds for the scaling of the energy barrier in the nucleation and growth regime. 
Finally, we consider rare events in large or infinite domains, in one spatial dimension. We introduce a natural reference measure through which to analyze the invariant measure of stochastically perturbed, nonlinear partial differential equations. Also, for noisy reaction diffusion equations with an asymmetric potential, we discover how to rescale space and time in order to map the dynamics in the zero temperature limit to the Poisson Model, a simple version of the Johnson-Mehl-Avrami-Kolmogorov model for nucleation and growth.

  16. Mathematical Modeling: A Bridge to STEM Education

    ERIC Educational Resources Information Center

    Kertil, Mahmut; Gurel, Cem

    2016-01-01

    The purpose of this study is to present a theoretical discussion of the relationship between mathematical modeling and integrated STEM education. First of all, the STEM education perspective and the construct of mathematical modeling in mathematics education are introduced. A review of the literature is provided on how the mathematical modeling literature may…

  17. Global Consensus Theorem and Self-Organized Criticality: Unifying Principles for Understanding Self-Organization, Swarm Intelligence and Mechanisms of Carcinogenesis

    PubMed Central

    Rosenfeld, Simon

    2013-01-01

    Complex biological systems manifest a large variety of emergent phenomena, among which prominent roles belong to self-organization and swarm intelligence. Generally, each level in a biological hierarchy possesses its own systemic properties and requires its own way of observation, conceptualization, and modeling. In this work, an attempt is made to outline general guiding principles in the exploration of a wide range of seemingly dissimilar phenomena observed in large communities of individuals devoid of any personal intelligence and interacting with each other through simple stimulus-response rules. Mathematically, these guiding principles are well captured by the Global Consensus Theorem (GCT), equally applicable to neural networks and to Lotka-Volterra population dynamics. Universality of the mechanistic principles outlined by GCT allows for a unified approach to such diverse systems as biological networks, communities of social insects, robotic communities, microbial communities, communities of somatic cells, social networks and many other systems. Another cluster of universal laws governing the self-organization in large communities of locally interacting individuals is built around the principle of self-organized criticality (SOC). The GCT and SOC, separately or in combination, provide a conceptual basis for understanding the phenomena of self-organization occurring in large communities without involvement of a supervisory authority, without system-wide informational infrastructure, and without mapping of a general plan of action onto the cognitive/behavioral faculties of individual members. Cancer onset and proliferation serve as an important example of the application of these conceptual approaches. 
    In this paper, the point of view is put forward that apparently irreconcilable contradictions between two opposing theories of carcinogenesis, that is, the Somatic Mutation Theory and the Tissue Organization Field Theory, may be resolved using the systemic approaches provided by GCT and SOC. PMID:23471309
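    A minimal flavor of the Lotka-Volterra dynamics to which the GCT applies: a symmetric two-species competition system relaxing to its stable coexistence equilibrium. The parameter values and the Euler integration scheme are illustrative, not taken from the paper:

```python
# Two-species Lotka-Volterra competition, integrated by forward Euler.
# With these (assumed) symmetric parameters the stable coexistence
# equilibrium is x = y = 2/3, reached from asymmetric initial conditions.
def lv_step(x, y, dt=0.01):
    dx = x * (1.0 - x - 0.5 * y)   # growth limited by self and competitor
    dy = y * (1.0 - y - 0.5 * x)
    return x + dt * dx, y + dt * dy

x, y = 0.1, 0.9
for _ in range(20000):             # integrate to t = 200
    x, y = lv_step(x, y)

print(round(x, 3), round(y, 3))    # → 0.667 0.667
```

    The convergence of all trajectories to a single globally stable state, regardless of initial conditions, is the kind of "consensus" behavior the theorem formalizes for large interacting communities.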

  18. A unified analytical drain current model for Double-Gate Junctionless Field-Effect Transistors including short channel effects

    NASA Astrophysics Data System (ADS)

    Raksharam; Dutta, Aloke K.

    2017-04-01

    In this paper, a unified analytical model for the drain current of a symmetric Double-Gate Junctionless Field-Effect Transistor (DG-JLFET) is presented. The operation of the device has been classified into four modes: subthreshold, semi-depleted, accumulation, and hybrid; with the main focus of this work being on the accumulation mode, which has not been dealt with in detail so far in the literature. A physics-based model, using a simplified one-dimensional approach, has been developed for this mode, and it has been successfully integrated with the model for the hybrid mode. It also includes the effect of carrier mobility degradation due to the transverse electric field, which was hitherto missing in the earlier models reported in the literature. The piece-wise models have been unified using suitable interpolation functions. In addition, the model includes two most important short-channel effects pertaining to DG-JLFETs, namely the Drain Induced Barrier Lowering (DIBL) and the Subthreshold Swing (SS) degradation. The model is completely analytical, and is thus computationally highly efficient. The results of our model have shown an excellent match with those obtained from TCAD simulations for both long- and short-channel devices, as well as with the experimental data reported in the literature.
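    The idea of unifying piece-wise regime models with interpolation functions can be sketched generically. Below, an exponential subthreshold branch and a quadratic accumulation branch are blended with a logistic weight; the functional forms, threshold voltage, and smoothing parameter are illustrative assumptions, not the authors' model:

```python
# Joining piece-wise device models with a smooth interpolation function
# (all branch models and parameter values are illustrative placeholders).
import math

VT = 0.5                  # threshold voltage (V), assumed

def i_sub(vg):            # subthreshold branch: exponential in Vg
    return 1e-9 * math.exp((vg - VT) / 0.08)

def i_acc(vg):            # accumulation branch: quadratic above threshold
    return 1e-4 * max(vg - VT, 0.0) ** 2

def i_unified(vg, v_smooth=0.02):
    # logistic weight: ~0 below VT (subthreshold dominates), ~1 above
    w = 1.0 / (1.0 + math.exp(-(vg - VT) / v_smooth))
    return (1.0 - w) * i_sub(vg) + w * i_acc(vg)

print(i_unified(0.2), i_unified(0.9))
```

    The blended curve tracks each branch far from the transition while remaining smooth and differentiable at the threshold, which is what makes such unified expressions convenient for circuit simulators.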

  19. An Empirical Analysis of Citizens' Acceptance Decisions of Electronic-Government Services: A Modification of the Unified Theory of Acceptance and Use of Technology (UTAUT) Model to Include Trust as a Basis for Investigation

    ERIC Educational Resources Information Center

    Awuah, Lawrence J.

    2012-01-01

    Understanding citizens' adoption of electronic-government (e-government) is an important topic, as the use of e-government has become an integral part of governance. Success of such initiatives depends largely on the efficient use of e-government services. The unified theory of acceptance and use of technology (UTAUT) model has provided a…

  20. The influence of mathematics learning using SAVI approach on junior high school students’ mathematical modelling ability

    NASA Astrophysics Data System (ADS)

    Khusna, H.; Heryaningsih, N. Y.

    2018-01-01

    The aim of this research was to examine the mathematical modeling ability of students who learned mathematics using the SAVI approach. This research was a quasi-experimental study with a non-equivalent control group design, using a purposive sampling technique. The population was the state junior high school students in Lembang, while the sample consisted of two classes at the 8th grade. The instrument used was a test of mathematical modeling ability. Data analysis was conducted using SPSS 20 for Windows. The results showed that the mathematical modeling ability of students who learned mathematics using the SAVI approach was better than that of students who learned through conventional instruction.

  1. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have significant discriminating power for disease outcomes has become increasingly important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods also tend to bring in false-positive significant features. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large, which leads to numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have two critical problems, model selection and model parameter tuning, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potential to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences.
Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods work poorly. The KIGP was also applied to four published microarray datasets, and the results showed that it performed at least as well as the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811
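The full KIGP pipeline (probit link, cascading kernel search, Gibbs sampling) is beyond a few lines, but the kernel-induced feature space idea it builds on can be sketched with a minimal kernel ridge classifier. The RBF kernel choice, the ridge term that guards against the singular-matrix problem the abstract mentions, and all parameter values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-3):
    """Solve (K + lam*I) alpha = y. The ridge term keeps the
    system well conditioned even when K is near-singular, the
    numerical-instability problem the abstract raises for plain
    linear models."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, gamma=1.0):
    """Class label is the sign of the kernel expansion."""
    return np.sign(rbf_kernel(X_test, X_train, gamma) @ alpha)
```

On XOR-style data, which no linear model separates, the RBF-kernel expansion classifies the training points correctly; swapping the kernel function is the crude analogue of the kernel-type selection KIGP performs within its Bayesian framework.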

  2. Unified formalism for higher order non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2012-03-01

    This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.

  3. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results are presented to demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model, and control design are all optimized simultaneously. Parallel research done by other researchers is reviewed. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology should be incorporated into computer-aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization.

  4. SSBRP Communication & Data System Development using the Unified Modeling Language (UML)

    NASA Technical Reports Server (NTRS)

    Windrem, May; Picinich, Lou; Givens, John J. (Technical Monitor)

    1998-01-01

    The Unified Modeling Language (UML) is the standard method for specifying, visualizing, and documenting the artifacts of an object-oriented system under development. UML is the unification of the object-oriented methods developed by Grady Booch and James Rumbaugh, and of the Use Case Model developed by Ivar Jacobson. This paper discusses the application of UML by the Communications and Data Systems (CDS) team to model the ground control and command of the Space Station Biological Research Project (SSBRP) User Operations Facility (UOF). UML is used to define the context of the system, the logical static structure, the life history of objects, and the interactions among objects.

  5. Beyond Motivation: Exploring Mathematical Modeling as a Context for Deepening Students' Understandings of Curricular Mathematics

    ERIC Educational Resources Information Center

    Zbiek, Rose Mary; Conner, Annamarie

    2006-01-01

    Views of mathematical modeling in empirical, expository, and curricular references typically capture a relationship between real-world phenomena and mathematical ideas from the perspective that competence in mathematical modeling is a clear goal of the mathematics curriculum. However, we work within a curricular context in which mathematical…

  6. An Investigation of Mathematical Modeling with Pre-Service Secondary Mathematics Teachers

    ERIC Educational Resources Information Center

    Thrasher, Emily Plunkett

    2016-01-01

    The goal of this thesis was to investigate and enhance our understanding of what occurs while pre-service mathematics teachers engage in a mathematical modeling unit that is broadly based upon mathematical modeling as defined by the Common Core State Standards for Mathematics (National Governors Association Center for Best Practices & Council…

  7. Reflective Modeling in Teacher Education.

    ERIC Educational Resources Information Center

    Shealy, Barry E.

    This paper describes mathematical modeling activities from a secondary mathematics teacher education course taken by fourth-year university students. Experiences with mathematical modeling are viewed as important in helping teachers develop a more intuitive understanding of mathematics, generate and evaluate mathematical interpretations, and…

  8. Primary School Pre-Service Mathematics Teachers' Views on Mathematical Modeling

    ERIC Educational Resources Information Center

    Karali, Diren; Durmus, Soner

    2015-01-01

    The current study aimed to identify the views of pre-service teachers, who attended a primary school mathematics teaching department but did not take mathematical modeling courses. The mathematical modeling activity used by the pre-service teachers was developed with regards to the modeling activities utilized by Lesh and Doerr (2003) in their…

  9. A unified physical model of Seebeck coefficient in amorphous oxide semiconductor thin-film transistors

    NASA Astrophysics Data System (ADS)

    Lu, Nianduan; Li, Ling; Sun, Pengxiao; Banerjee, Writam; Liu, Ming

    2014-09-01

    A unified physical model for the Seebeck coefficient is presented based on the multiple-trapping-and-release theory for amorphous oxide semiconductor thin-film transistors. According to the proposed model, the Seebeck coefficient is attributed to Fermi-Dirac statistics combined with the energy-dependent trap density of states and the gate-voltage dependence of the quasi-Fermi level. The simulation results show that the dependence of the Seebeck coefficient on gate voltage, energy disorder, and temperature can be well described. The calculation also shows good agreement with the experimental data in amorphous In-Ga-Zn-O thin-film transistors.

  10. Unified Viscoplastic Behavior of Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Robinson, D. N.; Bartolotta, P. A.

    1992-01-01

    The need for unified constitutive models was recognized more than a decade ago in the results of phenomenological tests on monolithic metals that exhibited strong creep-plasticity interaction. Recently, metallic alloys have been combined to form high-temperature ductile/ductile composite materials, raising the natural question of whether these metallic composites exhibit the same phenomenological features as their monolithic constituents. This question is addressed in the context of a limited, yet definite (to illustrate creep/plasticity interaction) set of experimental data on the model metal matrix composite (MMC) system W/Kanthal. Furthermore, it is demonstrated that a unified viscoplastic representation, extended for unidirectional composites and correlated to W/Kanthal, can accurately predict the observed longitudinal composite creep/plasticity interaction response and strain rate dependency. Finally, the predicted influence of fiber orientation on the creep response of W/Kanthal is illustrated.

  11. Four Courses within a Discipline: UGA Unified Core

    ERIC Educational Resources Information Center

    Powell, Gwynn M.; Johnson, Corey W.; James, Joy; Dunlap, Rudy

    2013-01-01

    This article introduces the reader to the Unified Core Curriculum model developed and implemented at the University of Georgia (UGA). Four courses are taught as one course to the juniors coming into the Recreation and Leisure Studies major. An overview of the blended course and sample assignments are provided, as well as a discussion of challenges…

  12. Asymptotic theory of neutral stability of the Couette flow of a vibrationally excited gas

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yu. N.; Ershov, I. V.

    2017-01-01

    An asymptotic theory of the neutral stability curve for a supersonic plane Couette flow of a vibrationally excited gas is developed. The initial mathematical model consists of equations of two-temperature viscous gas dynamics, which are used to derive a spectral problem for a linear system of eighth-order ordinary differential equations within the framework of the classical linear stability theory. Unified transformations of the system for all shear flows are performed in accordance with the classical Lin scheme. The problem is reduced to an algebraic secular equation with separation into the "inviscid" and "viscous" parts, which is solved numerically. It is shown that the thus-calculated neutral stability curves agree well with the previously obtained results of the direct numerical solution of the original spectral problem. In particular, the critical Reynolds number increases with excitation enhancement, and the neutral stability curve is shifted toward the domain of higher wave numbers. This is also confirmed by means of solving an asymptotic equation for the critical Reynolds number at the Mach number M ≤ 4.

  13. An analysis of a large dataset on immigrant integration in Spain. The Statistical Mechanics perspective on Social Action

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia

    2014-02-01

    How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and shown to explain both growth behaviors observed. The linear theory, by ignoring the possibility of interaction effects, would underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.
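The abstract does not give the model's functional forms, but the linear-versus-non-linear contrast it describes can be illustrated with two toy growth laws: an interaction-free baseline in which mixed pairings scale with the density product, and an interacting mean-field form with square-root growth standing in for the social-action non-linearity. Both functional forms and the constant are assumptions for illustration only.

```python
import math

def linear_model(gamma, c=1.0):
    """Interaction-free baseline: a cross quantifier (e.g. mixed
    marriages) proportional to the density product gamma*(1-gamma)
    expected from chance pairings. Form assumed for illustration."""
    return c * gamma * (1.0 - gamma)

def interacting_model(gamma, c=1.0):
    """Mean-field sketch with imitative interaction: square-root
    growth standing in for the social-action non-linearity the
    abstract describes. Form assumed for illustration."""
    return c * math.sqrt(gamma * (1.0 - gamma))
```

At low density the square-root law exceeds the linear one, matching the abstract's point that an interaction-free linear theory underestimates the quantifiers in the low-density regime.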

  14. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  15. Fully coupled methods for multiphase morphodynamics

    NASA Astrophysics Data System (ADS)

    Michoski, C.; Dawson, C.; Mirabito, C.; Kubatko, E. J.; Wirasaet, D.; Westerink, J. J.

    2013-09-01

    We present numerical methods for a system of equations consisting of the two dimensional Saint-Venant shallow water equations (SWEs) fully coupled to a completely generalized Exner formulation of hydrodynamically driven sediment discharge. This formulation is implemented by way of a discontinuous Galerkin (DG) finite element method, using a Roe Flux for the advective components and the unified form for the dissipative components. We implement a number of Runge-Kutta time integrators, including a family of strong stability preserving (SSP) schemes, and Runge-Kutta Chebyshev (RKC) methods. A brief discussion is provided regarding implementational details for generalizable computer algebra tokenization using arbitrary algebraic fluxes. We then run numerical experiments to show standard convergence rates, and discuss important mathematical and numerical nuances that arise due to prominent features in the coupled system, such as the emergence of nondifferentiable and sharp zero crossing functions, radii of convergence in manufactured solutions, and nonconservative product (NCP) formalisms. Finally we present a challenging application model concerning hydrothermal venting across metalliferous muds in the presence of chemical reactions occurring in low pH environments.
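Among the time integrators the abstract lists are strong-stability-preserving (SSP) Runge-Kutta schemes. A minimal sketch of the classic two-stage SSP scheme (Shu-Osher form) applied to a scalar ODE is below; the solver interface and test problem are illustrative, not taken from the paper's DG code.

```python
def ssprk2(f, u0, t0, t1, n):
    """Two-stage strong-stability-preserving Runge-Kutta scheme
    (Shu-Osher form). Each stage is a forward Euler step and the
    update is a convex combination of Euler steps, which is what
    preserves the stability bounds SSP schemes are valued for in
    DG solvers."""
    u, t = u0, t0
    h = (t1 - t0) / n
    for _ in range(n):
        u1 = u + h * f(t, u)                         # first Euler stage
        u = 0.5 * u + 0.5 * (u1 + h * f(t + h, u1))  # convex combination
        t += h
    return u
```

For u' = -u with u(0) = 1 the scheme converges at second order to e^(-t), so with a moderate step count the error at t = 1 is far below visual tolerance.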

  16. Representing Thoughts, Words, and Things in the UMLS

    PubMed Central

    Campbell, Keith E.; Oliver, Diane E.; Spackman, Kent A.; Shortliffe, Edward H.

    1998-01-01

    The authors describe a framework, based on the Ogden-Richards semiotic triangle, for understanding the relationship between the Unified Medical Language System (UMLS) and the source terminologies from which the UMLS derives its content. They pay particular attention to UMLS's Concept Unique Identifier (CUI) and the sense of “meaning” it represents as contrasted with the sense of “meaning” represented by the source terminologies. The CUI takes on emergent meaning through linkage to terms in different terminology systems. In some cases, a CUI's emergent meaning can differ significantly from the original sources' intended meanings of terms linked by that CUI. Identification of these different senses of meaning within the UMLS is consistent with historical themes of semantic interpretation of language. Examination of the UMLS within such a historical framework makes it possible to better understand the strengths and limitations of the UMLS approach for integrating disparate terminologic systems and to provide a model, or theoretic foundation, for evaluating the UMLS as a Possible World—that is, as a mathematical formalism that represents propositions about some perspective or interpretation of the physical world. PMID:9760390

  17. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
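The event-horizon idea at the core of Breathing Time Buckets can be sketched in a few lines: optimistically process pending events in timestamp order, but only commit those earlier than the earliest newly generated event, deferring the rest to the next cycle. This single-process sketch omits rollback, message passing, and everything else that makes the real algorithm work in parallel; the interface is an assumption for illustration.

```python
import heapq

def breathing_time_buckets(pending, handler):
    """Sketch of the event-horizon idea behind Breathing Time
    Buckets. `pending` is a list of (timestamp, event) pairs;
    `handler(t, ev)` returns newly generated (timestamp, event)
    pairs, all later than t. Events before the horizon (the
    minimum new-event timestamp) are committed; the rest wait
    for the next cycle. Single process, no rollback."""
    heapq.heapify(pending)
    committed = []
    while pending:
        horizon = float("inf")
        bucket = []
        while pending and pending[0][0] < horizon:
            t, ev = heapq.heappop(pending)
            bucket.append((t, ev))
            for nt, nev in handler(t, ev):
                horizon = min(horizon, nt)      # horizon shrinks to earliest new event
                heapq.heappush(pending, (nt, nev))
        committed.extend(bucket)                # everything before the horizon is safe
    return committed
```

Because the inner loop drains only events strictly before the horizon, each committed bucket is causally closed, which is the property that lets the real algorithm commit buckets without the anti-messages of full Time Warp.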

  18. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

    The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but account for neither path-dependent variations in error nor the structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path, and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be both readily useful for the standard RSTT user and an improvement in travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and the validation of the error model for Sn, Pg, and Lg phases.
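A 1D distance-dependent error model of the kind the abstract contrasts with the new path-dependent approach can be as simple as a piecewise-linear lookup of uncertainty against epicentral distance. The knot values below are invented placeholders, not the released RSTT curves.

```python
import numpy as np

def travel_time_sigma(distance_deg, knots, sigmas):
    """1D distance-dependent travel-time uncertainty: a
    piecewise-linear interpolation of sigma (seconds) against
    epicentral distance (degrees). This is the fast lookup style
    of model the abstract describes; the knot values used with
    it here are illustrative placeholders."""
    return float(np.interp(distance_deg, knots, sigmas))
```

The speed of such a lookup is why 1D models suit standard location algorithms, and its indifference to azimuth and path is exactly the limitation the path-dependent multivariate model is meant to fix.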

  19. Psychometric evaluation of a unified Portuguese-language version of the Body Shape Questionnaire in female university students.

    PubMed

    Silva, Wanderson Roberto; Costa, David; Pimenta, Filipa; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2016-07-21

    The objectives of this study were to develop a unified Portuguese-language version, for use in Brazil and Portugal, of the Body Shape Questionnaire (BSQ) and to estimate its validity, reliability, and internal consistency in Brazilian and Portuguese female university students. Confirmatory factor analysis was performed using both the original (34-item) and shortened (8-item) versions. The model's fit was assessed with χ²/df, CFI, NFI, and RMSEA. Concurrent and convergent validity were assessed. Reliability was estimated through internal consistency and composite reliability (α). Transnational invariance of the BSQ was tested using multi-group analysis. The original 34-item model was refined to present a better fit and adequate validity and reliability. The shortened model was stable both in independent samples and in transnational samples (Brazil and Portugal). The use of this unified version is recommended for the assessment of body shape concerns in both Brazilian and Portuguese college students.
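The fit indices named in the abstract are standard CFA quantities. As one concrete example, RMSEA is computed from the model chi-square, its degrees of freedom, and the sample size; the formula below is the standard one, while the numbers in the usage are made up, not the study's values.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation for a CFA model:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))). chi2: model
    chi-square; df: its degrees of freedom; n: sample size.
    Values below roughly 0.08 are conventionally read as
    acceptable fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

The max(…, 0) clamp means a model whose chi-square does not exceed its degrees of freedom reports an RMSEA of exactly zero, i.e. no detectable misfit.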

  20. The implementation of multiple intelligences based teaching model to improve mathematical problem solving ability for student of junior high school

    NASA Astrophysics Data System (ADS)

    Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli

    2017-05-01

    This research has several aims: to determine whether the mathematical problem-solving ability of students taught with the Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning; to measure the improvement in mathematical problem-solving ability under each of the two treatments; and to assess students' attitudes toward the Multiple Intelligences based teaching model. The method employed is a quasi-experiment with a pretest and posttest control design. The population of this research was all 7th-grade students of SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as the sample: one class was taught using the Multiple Intelligences based teaching model and the other using cooperative learning. The data were obtained from a mathematical problem-solving test, an attitude questionnaire, and observation. The results show that the mathematical problem-solving ability of students taught with the Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning; the problem-solving ability of both groups is at an intermediate level; and students showed a positive attitude toward learning mathematics with the Multiple Intelligences based teaching model. As a recommendation for future work, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.

  1. Using Mathematics, Mathematical Applications, Mathematical Modelling, and Mathematical Literacy: A Theoretical Study

    ERIC Educational Resources Information Center

    Mumcu, Hayal Yavuz

    2016-01-01

    The purpose of this theoretical study is to explore the relationships between the concepts of using mathematics in the daily life, mathematical applications, mathematical modelling, and mathematical literacy. As these concepts are generally taken as independent concepts in the related literature, they are confused with each other and it becomes…

  2. Pre-Service Teachers' Developing Conceptions about the Nature and Pedagogy of Mathematical Modeling in the Context of a Mathematical Modeling Course

    ERIC Educational Resources Information Center

    Cetinkaya, Bulent; Kertil, Mahmut; Erbas, Ayhan Kursat; Korkmaz, Himmet; Alacaci, Cengiz; Cakiroglu, Erdinc

    2016-01-01

    Adopting a multitiered design-based research perspective, this study examines pre-service secondary mathematics teachers' developing conceptions about (a) the nature of mathematical modeling in simulations of "real life" problem solving, and (b) pedagogical principles and strategies needed to teach mathematics through modeling. Unlike…

  3. Evolution of Mathematics Teachers' Pedagogical Knowledge When They Are Teaching through Modeling

    ERIC Educational Resources Information Center

    Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat; Alacaci, Cengiz; Cakiroglu, Erdinc; Cetinkaya, Bulent

    2017-01-01

    Use of mathematical modeling in mathematics education has been receiving significant attention as a way to develop students' mathematical knowledge and skills. As effective use of modeling in classes depends on the competencies of teachers we need to know more about the nature of teachers' knowledge to use modeling in mathematics education and how…

  4. Mathematical Modeling in Science: Using Spreadsheets to Create Mathematical Models and Address Scientific Inquiry

    ERIC Educational Resources Information Center

    Horton, Robert M.; Leonard, William H.

    2005-01-01

    In science, inquiry is used as students explore important and interesting questions concerning the world around them. In mathematics, one contemporary inquiry approach is to create models that describe real phenomena. Creating mathematical models using spreadsheets can help students learn at deep levels in both science and mathematics, and give…

  5. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan, E-mail: kannan.krishnan@umontreal.ca

    The algorithms in the literature for predicting tissue:blood partition coefficients (P_tb) for environmental chemicals, and tissue:plasma partition coefficients based on total (K_p) or unbound concentration (K_pu) for drugs, differ in their consideration of binding to hemoglobin, plasma proteins, and charged phospholipids. The objective of the present study was to develop a unified algorithm such that P_tb, K_p, and K_pu for both drugs and environmental chemicals could be predicted. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e., whole tissue) and micro (i.e., cells and fluids) levels. The resulting unified algorithm was applied to compute the rat P_tb, K_p, or K_pu of muscle (n = 174), liver (n = 139), and adipose tissue (n = 141) for acidic, neutral, zwitterionic, and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons, and ethers. The unified algorithm adequately reproduced the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. The sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to the prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro- and micro-level PCs for developing organ- and cellular-level PBPK models for both chemicals and drugs.

  6. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    NASA Astrophysics Data System (ADS)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

    Ladar system simulation simulates ladar models using computer simulation technology in order to predict the performance of a ladar system. This paper reviews developments in laser imaging radar simulation in domestic and overseas studies, and studies of computer simulation of ladar systems with different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. The paper notes the small scale, non-unified design, and limited applications of domestic research in imaging ladar system simulation, which mostly achieves simple functional simulation based on ranging equations for ladar systems. A laser imaging radar simulation with an open and modularized structure is proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built, with regulated function inputs and outputs and communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle is made based on the toolbox. The simulation result shows that the models and parameter settings of the Matlab toolbox are able to simulate the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection. Its open structure enables the toolbox to be modified for specialized requirements, and the modularization gives simulations flexibility.

  7. An atom is known by the company it keeps: Content, representation and pedagogy within the epistemic revolution of the complexity sciences

    NASA Astrophysics Data System (ADS)

    Blikstein, Paulo

    The goal of this dissertation is to explore relations between content, representation, and pedagogy, so as to understand the impact of the nascent field of complexity sciences on science, technology, engineering and mathematics (STEM) learning. Wilensky & Papert coined the term "structurations" to express the relationship between knowledge and its representational infrastructure. A change from one representational infrastructure to another they call a "restructuration." The complexity sciences have introduced a novel and powerful structuration: agent-based modeling. In contradistinction to traditional mathematical modeling, which relies on equational descriptions of macroscopic properties of systems, agent-based modeling focuses on a few archetypical micro-behaviors of "agents" to explain emergent macro-behaviors of the agent collective. Specifically, this dissertation is about a series of studies of undergraduate students' learning of materials science, in which two structurations are compared (equational and agent-based), consisting of both design research and empirical evaluation. I have designed MaterialSim, a constructionist suite of computer models, supporting materials and learning activities designed within the approach of agent-based modeling, and over four years conducted an empirical investigation of an undergraduate materials science course. The dissertation is comprised of three studies: Study 1 - diagnosis. I investigate current representational and pedagogical practices in engineering classrooms. Study 2 - laboratory studies. I investigate the cognition of students engaging in scientific inquiry through programming their own scientific models. Study 3 - classroom implementation.
I investigate the characteristics, advantages, and trajectories of scientific content knowledge that is articulated in epistemic forms and representational infrastructures unique to complexity sciences, as well as the feasibility of the integration of constructionist, agent-based learning environments in engineering classrooms. Data sources include classroom observations, interviews, videotaped sessions of model-building, questionnaires, analysis of computer-generated logfiles, and quantitative and qualitative analysis of artifacts. Results shows that (1) current representational and pedagogical practices in engineering classrooms were not up to the challenge of the complex content being taught, (2) by building their own scientific models, students developed a deeper understanding of core scientific concepts, and learned how to better identify unifying principles and behaviors in materials science, and (3) programming computer models was feasible within a regular engineering classroom.

  8. Mathematical Modeling and Pure Mathematics

    ERIC Educational Resources Information Center

    Usiskin, Zalman

    2015-01-01

    Common situations, like planning air travel, can become grist for mathematical modeling and can promote the mathematical ideas of variables, formulas, algebraic expressions, functions, and statistics. The purpose of this article is to illustrate how the mathematical modeling that is present in everyday situations can be naturally embedded in…

  9. Understanding Prospective Teachers' Mathematical Modeling Processes in the Context of a Mathematical Modeling Course

    ERIC Educational Resources Information Center

    Zeytun, Aysel Sen; Cetinkaya, Bulent; Erbas, Ayhan Kursat

    2017-01-01

    This paper investigates how prospective teachers develop mathematical models while they engage in modeling tasks. The study was conducted in an undergraduate elective course aiming to improve prospective teachers' mathematical modeling abilities, while enhancing their pedagogical knowledge for the integrating of modeling tasks into their future…

  10. OmniPHR: A distributed architecture model to integrate personal health records.

    PubMed

    Roehrs, Alex; da Costa, Cristiano André; da Rosa Righi, Rodrigo

    2017-07-01

    The advances in Information and Communications Technology (ICT) brought many benefits to the healthcare area, especially to the digital storage of patients' health records. However, it is still a challenge to obtain a unified view of a patient's health history, because health data are typically scattered among different health organizations. Furthermore, there are several standards for these records, some open and others proprietary. Health records are usually stored in databases within health organizations and rarely have external access. This applies mainly to cases where patients' data are maintained by healthcare providers, known as EHRs (Electronic Health Records). In the case of PHRs (Personal Health Records), in which patients by definition can manage their health records, patients usually have no control over their data stored in healthcare providers' databases. We therefore envision two main challenges in the PHR context: first, how patients can obtain a unified view of their scattered health records, and second, how healthcare providers can access up-to-date data on their patients even when changes occur elsewhere. To address these issues, this work proposes OmniPHR, a distributed model to integrate PHRs for use by patients and healthcare providers. The scientific contribution is an architecture model to support a distributed PHR, in which patients can maintain a unified view of their health history from any device, anywhere, and healthcare providers can have their patients' data interconnected among health organizations. The evaluation demonstrates the feasibility of maintaining distributed health records in an architecture model that promotes a unified view of the PHR with elasticity and scalability. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Introducing Modeling Transition Diagrams as a Tool to Connect Mathematical Modeling to Mathematical Thinking

    ERIC Educational Resources Information Center

    Czocher, Jennifer A.

    2016-01-01

    This study contributes a methodological tool to reconstruct the cognitive processes and mathematical activities carried out by mathematical modelers. Represented as Modeling Transition Diagrams (MTDs), individual modeling routes were constructed for four engineering undergraduate students. Findings stress the importance and limitations of using…

  12. An Experimental Approach to Mathematical Modeling in Biology

    ERIC Educational Resources Information Center

    Ledder, Glenn

    2008-01-01

    The simplest age-structured population models update a population vector via multiplication by a matrix. These linear models offer an opportunity to introduce mathematical modeling to students of limited mathematical sophistication and background. We begin with a detailed discussion of mathematical modeling, particularly in a biological context.…
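    The "population vector times matrix" update described above is a one-liner in practice. The Leslie matrix below is a hypothetical example (the fecundity and survival values are illustrative, not from the article):

```python
import numpy as np

# Hypothetical 3-age-class Leslie matrix: top row holds fecundities,
# the sub-diagonal holds survival probabilities between age classes.
L = np.array([
    [0.0, 1.5, 1.0],   # offspring per individual in each age class
    [0.6, 0.0, 0.0],   # survival from class 0 to class 1
    [0.0, 0.8, 0.0],   # survival from class 1 to class 2
])

n = np.array([100.0, 50.0, 20.0])  # initial population vector

# One projection step: next census is a matrix-vector product
n_next = L @ n
print(n_next)

# Long-run growth factor = dominant eigenvalue of L
lam = max(np.linalg.eigvals(L).real)
print(round(lam, 3))  # > 1 means the population grows
```

    Iterating `n = L @ n` also reveals the stable age distribution, which is the dominant eigenvector of L.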

  13. Mathematical Modeling with Middle School Students: The Robot Art Model-Eliciting Activity

    ERIC Educational Resources Information Center

    Stohlmann, Micah S.

    2017-01-01

    Internationally mathematical modeling is garnering more attention for the benefits associated with it. Mathematical modeling can develop students' communication skills and the ability to demonstrate understanding through different representations. With the increased attention on mathematical modeling, there is a need for more curricula to be…

  14. Changing Pre-Service Mathematics Teachers' Beliefs about Using Computers for Teaching and Learning Mathematics: The Effect of Three Different Models

    ERIC Educational Resources Information Center

    Karatas, Ilhan

    2014-01-01

    This study examines the effect of three different computer integration models on pre-service mathematics teachers' beliefs about using computers in mathematics education. Participants included 104 pre-service mathematics teachers (36 second-year students in the Computer Oriented Model group, 35 fourth-year students in the Integrated Model (IM)…

  15. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    NASA Astrophysics Data System (ADS)

    Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo

    2017-04-01

    In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small-misalignment-angle assumption cannot be applied to large misalignment situations, while the nonlinear model based on large misalignment angles suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS) by decomposing the attitude matrix of the latter. Based on the Rodrigues parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we employ Taylor series expansions of the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost than the traditional unscented Kalman filter (UKF) at large misalignment angles.
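    As a minimal sketch of the parameterization this record relies on (illustrative only, not the authors' filter): the Rodrigues (Gibbs) vector g = a·tan(θ/2), for a rotation by angle θ about unit axis a, maps to a direction cosine matrix without any small-angle assumption, which is what makes it suitable for arbitrary misalignment angles:

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues_to_dcm(g):
    """Direction cosine matrix from a Rodrigues (Gibbs) vector
    g = axis * tan(theta/2):  R = I + 2 ([g]x + [g]x^2) / (1 + g.g).
    Valid for any |theta| < 180 deg."""
    g = np.asarray(g, dtype=float)
    S = skew(g)
    return np.eye(3) + 2.0 / (1.0 + g @ g) * (S + S @ S)

# Example: a 90-degree misalignment about z -> g = (0, 0, tan(45 deg)),
# far outside the small-angle regime of linear error models.
g = np.array([0.0, 0.0, np.tan(np.pi / 4)])
R = rodrigues_to_dcm(g)
print(np.round(R, 6))
```

    Unlike Euler-angle error states, g grows smoothly with the misalignment, so one model covers both small and large angles.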

  16. Mathematical Modeling: A Structured Process

    ERIC Educational Resources Information Center

    Anhalt, Cynthia Oropesa; Cortez, Ricardo

    2015-01-01

    Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…

  17. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  18. To Assess Students' Attitudes, Skills and Competencies in Mathematical Modeling

    ERIC Educational Resources Information Center

    Lingefjard, Thomas; Holmquist, Mikael

    2005-01-01

    Peer-to-peer assessment, take-home exams and a mathematical modeling survey were used to monitor and assess students' attitudes, skills and competencies in mathematical modeling. The students were all in a secondary mathematics, teacher education program with a comprehensive amount of mathematics studies behind them. Findings indicate that…

  19. Mathematical Modeling in the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Toews, Carl

    2012-01-01

    Mathematical modeling occupies an unusual space in the undergraduate mathematics curriculum: typically an "advanced" course, it nonetheless has little to do with formal proof, the usual hallmark of advanced mathematics. Mathematics departments are thus forced to decide what role they want the modeling course to play, both as a component of the…

  20. Teachers' Conceptions of Mathematical Modeling

    ERIC Educational Resources Information Center

    Gould, Heather

    2013-01-01

    The release of the "Common Core State Standards for Mathematics" in 2010 resulted in a new focus on mathematical modeling in United States curricula. Mathematical modeling represents a way of doing and understanding mathematics new to most teachers. The purpose of this study was to determine the conceptions and misconceptions held by…

  1. Experimentation of cooperative learning model Numbered Heads Together (NHT) type by concept maps and Teams Games Tournament (TGT) by concept maps in terms of students' logical mathematics intelligences

    NASA Astrophysics Data System (ADS)

    Irawan, Adi; Mardiyana; Retno Sari Saputro, Dewi

    2017-06-01

    This research aimed to determine the effect of learning model on learning achievement in terms of students' logical mathematics intelligence. The learning models compared were NHT by Concept Maps, TGT by Concept Maps, and the Direct Learning model. The research was quasi-experimental with a 3×3 factorial design. The population was all students of class XI Natural Sciences of Senior High Schools in Karanganyar regency in the academic year 2016/2017. The conclusions of this research were: 1) students taught with the NHT learning model by Concept Maps achieved better than students taught with the TGT model by Concept Maps or the Direct Learning model, and students taught with the TGT model by Concept Maps achieved better than students taught with the Direct Learning model; 2) students with high logical mathematics intelligence achieved better than students with medium or low logical mathematics intelligence, and students with medium logical mathematics intelligence achieved better than students with low logical mathematics intelligence; 3) at each level of logical mathematics intelligence, students taught with the NHT learning model by Concept Maps achieved better than students taught with the TGT learning model by Concept Maps or the Direct Learning model, and students taught with the TGT by Concept Maps learning model achieved better than students taught with the Direct Learning model; 4) within each learning model, students with high logical mathematics intelligence achieved better than students with medium logical mathematics intelligence, and students with medium logical mathematics intelligence achieved better than students with low logical mathematics intelligence.

  2. Factors Affecting Acceptance & Use of ReWIND: Validating the Extended Unified Theory of Acceptance and Use of Technology

    ERIC Educational Resources Information Center

    Nair, Pradeep Kumar; Ali, Faizan; Leong, Lim Chee

    2015-01-01

    Purpose: This study aims to explain the factors affecting students' acceptance and usage of a lecture capture system (LCS)--ReWIND--in a Malaysian university based on the extended unified theory of acceptance and use of technology (UTAUT2) model. Technological advances have become an important feature of universities' plans to improve the…

  3. Laminar Cortical Dynamics of Cognitive and Motor Working Memory, Sequence Learning and Performance: Toward a Unified Theory of How the Cerebral Cortex Works

    ERIC Educational Resources Information Center

    Grossberg, Stephen; Pearson, Lance R.

    2008-01-01

    How does the brain carry out working memory storage, categorization, and voluntary performance of event sequences? The LIST PARSE neural model proposes an answer that unifies the explanation of cognitive, neurophysiological, and anatomical data. It quantitatively simulates human cognitive data about immediate serial recall and free recall, and…

  4. Pre-Service Teachers' Modelling Processes through Engagement with Model Eliciting Activities with a Technological Tool

    ERIC Educational Resources Information Center

    Daher, Wajeeh M.; Shahbari, Juhaina Awawdeh

    2015-01-01

    Engaging mathematics students with modelling activities helps them learn mathematics meaningfully. This engagement, in the case of model-eliciting activities, helps students elicit mathematical models by interpreting real-world situations in mathematical ways. This is especially true when the students utilize technology to build the models.…

  5. Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).

    PubMed

    Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco

    2013-10-01

    In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes rigid standardisation inappropriate, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded in international guidelines and refined following the clinical pathway adopted locally by a specialized rehabilitation centre. The model describes the organisation of rehabilitation delivery and facilitates the monitoring of recovery during the process. Indeed, supporting software was developed and tested to assist clinicians in the digital administration of clinical scales. The model's flexibility ensures easy updating as the process evolves. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Revisiting chemoaffinity theory: Chemotactic implementation of topographic axonal projection

    PubMed Central

    2017-01-01

    Neural circuits are wired by chemotactic migration of growth cones guided by extracellular guidance cue gradients. How growth cone chemotaxis builds the macroscopic structure of the neural circuit is a fundamental question in neuroscience. I addressed this issue in the case of the ordered axonal projections called topographic maps in the retinotectal system. In the retina and tectum, the erythropoietin-producing hepatocellular (Eph) receptors and their ligands, the ephrins, are expressed in gradients. According to Sperry's chemoaffinity theory, gradients in both the source and target areas enable projecting axons to recognize their proper terminals, but how axons chemotactically decode their destinations is largely unknown. To identify the chemotactic mechanism of topographic mapping, I developed a mathematical model of intracellular signaling in the growth cone that focuses on the growth cone's unique chemotactic property of being attracted or repelled by the same guidance cues in different biological situations. The model presents a mechanism by which the retinal growth cone reaches the correct terminal zone in the tectum through an alternating chemotactic response between attraction and repulsion around a preferred concentration. The model also provides a unified understanding of the contrasting relationships between receptor expression levels and preferred ligand concentrations in EphA/ephrinA- and EphB/ephrinB-encoded topographic mappings. Thus, this study redefines the chemoaffinity theory in chemotactic terms. PMID:28792499

  7. Analytical and numerical solutions of the potential and electric field generated by different electrode arrays in a tumor tissue under electrotherapy

    PubMed Central

    2011-01-01

    Background Electrotherapy is a relatively well established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model), generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Methods Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Results Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with circular shapes and with different conic-section shapes (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. Conclusion The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different conic-section shapes by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode arrays with different conic sections. PMID:21943385
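    The machinery above can be illustrated, in a much-reduced form, by finite-difference relaxation of Laplace's equation with two fixed-potential electrode nodes. This is a sketch only: the paper uses conic-section electrode arrays and the finite element method, while the grid, electrode positions and potentials below are arbitrary choices.

```python
import numpy as np

# Jacobi relaxation of Laplace's equation on a square grid with two
# point "electrodes" held at fixed (Dirichlet) potentials.
N = 41
phi = np.zeros((N, N))
fixed = np.zeros((N, N), dtype=bool)

# Hypothetical electrode pair: +1 V and -1 V nodes; grounded border
phi[20, 10], phi[20, 30] = 1.0, -1.0
fixed[20, 10] = fixed[20, 30] = True
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True

for _ in range(2000):
    # Each free node becomes the average of its four neighbours
    new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(fixed, phi, new)

# Electric field E = -grad(phi), via central differences
Ey, Ex = np.gradient(-phi)
print(phi[20, 20])  # midpoint potential (~0 by antisymmetry)
```

    Replacing the two point electrodes with nodes sampled along an ellipse, parabola or hyperbola reproduces the kind of conic-section arrays compared in the paper.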

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newton, Marshall D.

    Extension of the Förster analogue for the ET rate constant (based on virtual intermediate electron detachment or attachment states), with inclusion of site-site correlation due to coulomb terms associated with the solvent reorganization energy and the driving force, has been developed and illustrated for a simple three-state, two-mode model. The model is applicable to charge separation (CS), recombination (CR), and shift (CSh) ET processes, with or without an intervening bridge. It provides a unified perspective on the role of virtual intermediate states in accounting for the thermal Franck-Condon weighted density of states (FCWD), the gaps controlling superexchange coupling, and mean absolute redox potentials, with full accommodation of site-site coulomb interactions. We analyzed two types of correlation: aside from the site-site correlation due to coulomb interactions, we have emphasized the intrinsic "nonorthogonality" which generally pertains to reaction coordinates (RCs) for different ET processes involving multiple electronic states, as may be expressed by suitably defined direction cosines (cos(θ)). A pair of RCs may be nonorthogonal even when the site-site coulomb correlations are absent. While different RCs are linearly independent in the mathematical sense for all θ ≠ 0°, they are independent in the sense of being "uncorrelated" only in the limit of orthogonality (θ = 90°). Application to more than two coordinates is straightforward and may include both discrete and continuum contributions.

  9. Lagrangian-Hamiltonian unified formalism for autonomous higher order dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2011-09-01

    The Lagrangian-Hamiltonian unified formalism of Skinner and Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, as well as for first-order and higher order field theories. However, a complete generalization to higher order mechanical systems is yet to be described. In this work, after reviewing the natural geometrical setting and the Lagrangian and Hamiltonian formalisms for higher order autonomous mechanical systems, we develop a complete generalization of the Lagrangian-Hamiltonian unified formalism for these kinds of systems, and we use it to analyze some physical models from this new point of view.

  10. A UML model for the description of different brain-computer interface systems.

    PubMed

    Quitadamo, Lucia Rita; Abbafati, Manuel; Saggio, Giovanni; Marciani, Maria Grazia; Cardarilli, Gian Carlo; Bianchi, Luigi

    2008-01-01

    BCI research lacks a universal descriptive language among labs and a unique standard model for the description of BCI systems. This results in serious problems in comparing the performance of different BCI processes and in unifying tools and resources. In view of this, we implemented a Unified Modeling Language (UML) model for the description of virtually any BCI protocol and demonstrated that it can be successfully applied to the most common ones, such as P300, mu-rhythms, SCP, SSVEP and fMRI. Finally, we illustrate the advantages of utilizing a standard terminology for BCIs and show how the same basic structure can be successfully adopted for the implementation of new systems.

  11. Mathematical modeling in realistic mathematics education

    NASA Astrophysics Data System (ADS)

    Riyanto, B.; Zulkardi; Putri, R. I. I.; Darmawijoyo

    2017-12-01

    The purpose of this paper is to produce mathematical modelling problems in Realistic Mathematics Education for Junior High School. This study used development research consisting of three stages: analysis, design and evaluation. The success criterion of this study was a local instruction theory for school mathematical modelling learning that was valid and practical for students. The data were analyzed using descriptive methods as follows: (1) walkthrough analysis based on expert comments in the expert review, to obtain a Hypothetical Learning Trajectory for valid mathematical modelling learning; (2) analysis of the results of the one-to-one and small-group reviews, to establish practicality. Based on the expert validation and the students' opinions and answers, the obtained mathematical modelling problem in Realistic Mathematics Education was valid and practical.

  12. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T±^ab(A) to the full T±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  13. Be a good loser: A theoretical model for subordinate decision-making on bi-directional sex change in haremic fishes.

    PubMed

    Sawada, Kota; Yamaguchi, Sachi; Iwasa, Yoh

    2017-05-21

    Among animals living in groups with reproductive skew associated with a dominance hierarchy, subordinates may do best by using various alternative tactics. Sequential hermaphrodites or sex changers adopt a unique solution, that is, being the sex with weaker skew when they are small and subordinate, and changing sex when they become larger. In bi-directionally sex-changing fishes, although most are haremic and basically protogynous, subordinate males can change sex to being females. We study a mathematical model to examine when and why such reversed sex change is more adaptive than dispersal to take over another harem. We attempt to examine previously proposed hypotheses that the risk of dispersal and low density favor reversed sex change, and to specify an optimal decision-making strategy for subordinates. As a result, while the size-dependent conditional strategy in which smaller males tend to change sex is predicted, even large males are predicted to change sex under low density and/or high risk of dispersal, supporting both previous hypotheses. The importance of spatiotemporal variation of social and ecological conditions is also suggested. We discuss a unified framework to understand hermaphroditic and gonochoristic societies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A Laser-Based Measuring System for Online Quality Control of Car Engine Block.

    PubMed

    Li, Xing-Qiang; Wang, Zhong; Fu, Lu-Hua

    2016-11-08

    For online quality control of car engine production, the pneumatic measurement instrument plays an unshakeable role in measuring diameters inside the engine block because of its portability and high accuracy. Owing to its measuring principle, however, the working space between the pneumatic device and the measured surface is so small that manual operation is required. This lowers measuring efficiency and is an obstacle to automatic measurement. In this article, a high-speed, automatic measuring system is proposed to take the place of pneumatic devices by using a laser-based measuring unit. The measuring unit is considered as a set of several measuring modules, each of which acts like a single bore gauge and is made of four laser triangulation sensors (LTSs) installed at different positions and in opposite directions. The spatial relationship among these LTSs was calibrated before measurement. Sampling points on the measured shaft holes can be collected by the measuring unit. A unified mathematical model was established for both calibration and measurement. Based on the established model, the relative pose between the measuring unit and the measured workpiece does not affect the measuring accuracy. This frees the measuring unit from accurate positioning or adjustment, and makes fast, automatic measurement possible. The proposed system and method were validated by experiments.

  15. The emergence and policy implications of converging new technologies integrated from the nanoscale

    NASA Astrophysics Data System (ADS)

    Roco, M. C.

    2005-06-01

    Science based on unified concepts of matter at the nanoscale provides a new foundation for knowledge creation, innovation, and technology integration. Converging new technologies refers to the synergistic combination of nanotechnology, biotechnology, information technology and cognitive sciences (NBIC), each of which is currently progressing at a rapid rate, experiencing qualitative advancements, and interacting with more established fields such as mathematics and environmental technologies (Roco & Bainbridge, 2002). It is expected that converging technologies will bring about tremendous improvements in transforming tools, new products and services, enhance personal abilities and social achievements, and reshape societal relationships. After a brief overview of the general implications of converging new technologies, this paper focuses on their effects on R&D policies and business models as part of changing societal relationships. These R&D policies will have implications for investments in research and industry, with the main goal of taking advantage of the transformative development of NBIC. The introduction of converging technologies must be done with respect for immediate concerns (privacy, toxicity of new materials, etc.) and longer-term concerns including human integrity, dignity and welfare. The efficient introduction and development of converging new technologies will require new organizations and business models, as well as solutions for preparing the economy, such as multifunctional research facilities, integrative technology platforms, and global risk governance.

  16. Mathematical Problem Solving Ability of Junior High School Students through Ang’s Framework for Mathematical Modelling Instruction

    NASA Astrophysics Data System (ADS)

    Fasni, N.; Turmudi, T.; Kusnandi, K.

    2017-09-01

    The background of this research is the importance of students' problem-solving abilities. The purpose of this study is to find out whether there are differences in the ability to solve mathematical problems between students who learned mathematics using Ang's Framework for Mathematical Modelling Instruction (AFFMMI) and students who learned using a scientific approach (SA). The method used in this research is a quasi-experimental method with a pretest-posttest control group design. Data on mathematical problem-solving ability were analyzed using an independent-samples test. The results showed that there was a difference in the ability to solve mathematical problems between students who received learning with Ang's Framework for Mathematical Modelling Instruction and students who received learning with a scientific approach. AFFMMI focuses on mathematical modeling, and this modeling allows students to solve problems. The use of AFFMMI is able to improve problem-solving ability.

  17. The Effect of Teacher Beliefs on Student Competence in Mathematical Modeling--An Intervention Study

    ERIC Educational Resources Information Center

    Mischo, Christoph; Maaß, Katja

    2013-01-01

    This paper presents an intervention study whose aim was to promote teacher beliefs about mathematics and learning mathematics and student competences in mathematical modeling. In the intervention, teachers received written curriculum materials about mathematical modeling. The concept underlying the materials was based on constructivist ideas and…

  18. Leaning on Mathematical Habits of Mind

    ERIC Educational Resources Information Center

    Sword, Sarah; Matsuura, Ryota; Cuoco, Al; Kang, Jane; Gates, Miriam

    2018-01-01

    Mathematical modeling has taken on increasing curricular importance in the past decade due in no small measure to the Common Core State Standards in Mathematics (CCSSM) identifying modeling as one of the Standards for Mathematical Practice (SMP 4, CCSSI 2010, p. 7). Although researchers have worked on mathematical modeling (Lesh and Doerr 2003;…

  19. V/STOL tilt rotor study. Volume 5: A mathematical model for real time flight simulation of the Bell model 301 tilt rotor research aircraft

    NASA Technical Reports Server (NTRS)

    Harendra, P. B.; Joglekar, M. J.; Gaffey, T. M.; Marr, R. L.

    1973-01-01

    A mathematical model for real-time flight simulation of a tilt rotor research aircraft was developed. The mathematical model was used to support the aircraft design, pilot training, and proof-of-concept aspects of the development program. The structure of the mathematical model is indicated by a block diagram. The mathematical model differs from that for a conventional fixed wing aircraft principally in the added requirement to represent the dynamics and aerodynamics of the rotors, the interaction of the rotor wake with the airframe, and the rotor control and drive systems. The constraints imposed on the mathematical model are defined.

  20. A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder diagnosis.

    PubMed

    Han, Ying; Lu, Zhenyu; Du, Zhenguang; Luo, Qi; Chen, Sheng

    2018-05-01

    Bipolar disorder is often misdiagnosed as unipolar depression in clinical practice, largely because, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been used successfully to construct a unified mathematical inference modeling method for the clinical diagnosis of bipolar disorder. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. This paper therefore develops a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis. Compared with the existing method, the new one is more comprehensive, with three merits: first, multi-criteria group decision making is introduced into bipolar disorder diagnosis to account for different symptoms and multiple doctors' opinions; second, the revised TOPSIS method adopts a discreet diagnosis principle; and third, a YinYang bipolar fuzzy cognitive map supports the understanding of interrelations among symptoms. An illustrative case demonstrates the feasibility, validity, and necessity of the theoretical results, and a comparison analysis shows that the diagnosis is more accurate when interrelations among symptoms are considered. In conclusion, the main contribution of this paper is a comprehensive mathematical approach that improves the accuracy of bipolar disorder clinical diagnosis by considering both bipolarity and complexity.
