Sample records for stochastic process algebra

  1. Deriving Differential Equations from Process Algebra Models in Reagent-Centric Style

    NASA Astrophysics Data System (ADS)

    Hillston, Jane; Duguid, Adam

    The reagent-centric style of modeling allows stochastic process algebra models of biochemical signaling pathways to be developed in an intuitive way. Furthermore, once constructed, the models are amenable to analysis by a number of different mathematical approaches including both stochastic simulation and coupled ordinary differential equations. In this chapter, we give a tutorial introduction to the reagent-centric style, in PEPA and Bio-PEPA, and the way in which such models can be used to generate systems of ordinary differential equations.
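    As a rough illustration of the last point (not PEPA or Bio-PEPA themselves; species, reactions and rate constants below are made up), the following Python sketch shows how a reagent-centric description, i.e. species plus reactions with stoichiometry and rate constants, can be turned mechanically into a system of mass-action ODEs:

```python
# Minimal sketch (not PEPA/Bio-PEPA themselves): turn a reagent-centric
# reaction list into mass-action ODE right-hand sides. Species, reactions
# and rate constants below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

# Each reaction: (rate constant, {reactant: stoichiometry}, {product: stoichiometry})
reactions = [
    (1.0, {"S": 1, "E": 1}, {"C": 1}),   # S + E -> C
    (0.5, {"C": 1}, {"S": 1, "E": 1}),   # C -> S + E
    (0.2, {"C": 1}, {"P": 1, "E": 1}),   # C -> P + E
]
species = ["S", "E", "C", "P"]

def rhs(t, x):
    """ODE right-hand side dx/dt derived mechanically from the reaction list."""
    conc = dict(zip(species, x))
    dx = np.zeros(len(species))
    for k, reactants, products in reactions:
        rate = k * np.prod([conc[s] ** n for s, n in reactants.items()])
        for s, n in reactants.items():
            dx[species.index(s)] -= n * rate
        for s, n in products.items():
            dx[species.index(s)] += n * rate
    return dx

sol = solve_ivp(rhs, (0.0, 20.0), [10.0, 5.0, 0.0, 0.0], rtol=1e-8)
print(dict(zip(species, sol.y[:, -1])))   # approximate concentrations at t = 20
```

    The tools discussed in the chapter derive comparable ODE systems directly from the process-algebra description rather than from an explicit reaction list.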

  2. Quantum stochastic calculus associated with quadratic quantum noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in

    2016-02-15

    We first study a class of fundamental quantum stochastic processes induced by the generators of a six-dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, the annihilation operator, creation operator, conservation, and time. We then study the quantum stochastic integrals associated with this class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of the solution of a quantum stochastic differential equation is proved. The unitarity conditions for solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.

  3. SATA II - Stochastic Algebraic Topology and Applications

    DTIC Science & Technology

    2017-01-30

    Final report AFRL-AFOSR-UK-TR-2017-0018 (grant 150032), Robert Adler, Technion Israel Institute of Technology; period of performance 15 Dec 2014 to 14 Dec 2016. Subject terms: Network Theory, Sensor Technology, Mathematical Modeling, EOARD. [Abstract not recoverable from the extracted record.]

  4. Directed Abelian algebras and their application to stochastic models.

    PubMed

    Alcaraz, F C; Rittenberg, V

    2008-10-01

    With each directed acyclic graph (this includes some D-dimensional lattices) one can associate some Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z=D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we did extensive Monte Carlo simulations and found σ_τ = 1.780 ± 0.005.
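    The random-walker universality class mentioned above can be illustrated numerically: first-return times of a symmetric one-dimensional random walk have a power-law tail with exponent 3/2. The following Python sketch (an illustration of that universality class only, not a simulation of the DAA sandpile itself) estimates the exponent from a crude Monte Carlo:

```python
# Illustration of the "random walker" universality class (not the DAA sandpile
# itself): first-return times of a symmetric 1D random walk have a power-law
# tail P(t) ~ t^(-3/2).
import numpy as np

rng = np.random.default_rng(0)

def first_return_time(max_steps=10_000):
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return t
    return None   # did not return within the cutoff

times = np.array([t for t in (first_return_time() for _ in range(20_000)) if t])

# Crude exponent estimate from a log-log histogram of the return times.
bins = np.logspace(0, 4, 25)
hist, edges = np.histogram(times, bins=bins, density=True)
centers = np.sqrt(edges[1:] * edges[:-1])
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
print(f"estimated tail exponent {slope:.2f} (expected about -1.5)")
```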

  5. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    PubMed

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra is a technique from computer science that allows us to describe a system in terms of the stochastic behaviour of individuals. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
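    For orientation only (this is a standard construction, not the authors' process-algebra derivation), the following Python sketch contrasts an individual-level stochastic SIS epidemic, simulated with Gillespie's algorithm, with the population-level mean-field ODE that such individual rules give rise to; all parameter values are illustrative:

```python
# Standard illustration (not the authors' process-algebra derivation): an
# individual-level stochastic SIS epidemic via Gillespie's algorithm, compared
# with the population-level mean-field ODE dI/dt = beta*(N-I)*I/N - gamma*I.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, beta, gamma = 1000, 0.3, 0.1

def gillespie_sis(I0=10, t_end=200.0):
    t, I = 0.0, I0
    while t < t_end and 0 < I < N:
        a_inf = beta * (N - I) * I / N   # infection propensity
        a_rec = gamma * I                # recovery propensity
        t += rng.exponential(1.0 / (a_inf + a_rec))
        I += 1 if rng.random() < a_inf / (a_inf + a_rec) else -1
    return I

def mean_field(I0=10, t_end=200.0, dt=0.01):
    I = float(I0)
    for _ in range(int(t_end / dt)):
        I += dt * (beta * (N - I) * I / N - gamma * I)
    return I

print("stochastic endpoint:", gillespie_sis(), " mean-field endpoint:", round(mean_field(), 1))
```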

  6. A representation of solution of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Tae; Jeon, Jong Woo

    2006-03-01

    We prove that the logarithm of the formal power series obtained from a stochastic differential equation is an element in the closure of the Lie algebra generated by the vector fields that are the coefficients of the equation. By using this result, we obtain a representation of the solution of stochastic differential equations in terms of Lie brackets and iterated Stratonovich integrals in the algebra of formal power series.
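    The flavour of such a representation can be conveyed by the low-order terms of the exponential Lie series for a Stratonovich SDE (a standard Chen-Strichartz-type expansion; the precise representation proved in the paper may differ in form):

```latex
% Low-order terms of the exponential Lie series for the Stratonovich SDE
%   dX_t = V_0(X_t)\,dt + \sum_i V_i(X_t)\circ dW_t^i
% (illustrative; the precise representation in the paper may differ in form):
X_t \;=\; \exp\!\Big( V_0\,t + \sum_i V_i\,W_t^i
        + \tfrac12 \sum_{i<j} [V_i, V_j]\, A_t^{ij} + \cdots \Big)\, X_0,
\qquad
A_t^{ij} \;=\; \int_0^t \big( W_s^i \circ dW_s^j - W_s^j \circ dW_s^i \big).
```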

  7. Symmetries and stochastic symmetry breaking in multifractal geophysics: analysis and simulation with the help of the Lévy-Clifford algebra of cascade generators.

    NASA Astrophysics Data System (ADS)

    Schertzer, D. J. M.; Tchiguirinskaia, I.

    2016-12-01

    Multifractal fields, whose definition is rather independent of their domain dimension, have opened a new approach to geophysics, enabling exploration of its spatial extension, which is of prime importance as underlined by the expression "spatial chaos". However, multifractals have until recently been restricted to scalar values, i.e. to one-dimensional codomains. This has prevented dealing with the key question of complex component interactions and their nontrivial symmetries. We first emphasize that the Lie algebra of stochastic generators of cascade processes enables us to generalize multifractals to arbitrarily large codomains, e.g. flows of vector fields on large-dimensional manifolds. In particular, we have recently investigated the neat example of stable Lévy generators on Clifford algebra, which have a number of seductive properties, e.g. universal statistical and robust algebraic properties, both defining the basic symmetries of the corresponding fields (Schertzer and Tchiguirinskaia, 2015). These properties provide a convenient multifractal framework to study both the symmetries of the fields and how they stochastically break the symmetries of the underlying equations due to boundary conditions, large-scale rotations and forcings. These developments should help us to answer challenging questions such as the climatology of (exo-)planets based on first principles (Pierrehumbert, 2013), to fully address the question of the limitations of quasi-geostrophic turbulence (Schertzer et al., 2012), and to explore the peculiar phenomenology of turbulent dynamics of the atmosphere or oceans that is neither two- nor three-dimensional. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp.81-83. Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp.327-336. Schertzer, D. & Tchiguirinskaia, I., 2015. Multifractal vector fields and stochastic Clifford algebra. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p.123127.

  8. Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Ross, Brian; Imada, Janine

    Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.

  9. Algebraic, geometric, and stochastic aspects of genetic operators

    NASA Technical Reports Server (NTRS)

    Foo, N. Y.; Bosworth, J. L.

    1972-01-01

    Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.

  10. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    PubMed

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  11. Scaling in tournaments

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Redner, S.; Vazquez, F.

    2007-02-01

    We study a stochastic process that mimics single-game elimination tournaments. In our model, the outcome of each match is stochastic: the weaker player wins with upset probability q ≤ 1/2, and the stronger player wins with probability 1-q. The loser is eliminated. Extremal statistics of the initial distribution of player strengths governs the tournament outcome. For a uniform initial distribution of strengths, the rank of the winner, x*, decays algebraically with the number of players, N, as x* ~ N^{-β}. Different decay exponents are found analytically for sequential dynamics, β_seq = 1 - 2q, and parallel dynamics, β_par = 1 + ln(1-q)/ln 2. The distribution of player strengths becomes self-similar in the long-time limit with an algebraic tail. Our theory successfully describes statistics of the US college basketball national championship tournament.
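    A small Monte Carlo sketch of the parallel (bracket) dynamics roughly reproduces the predicted scaling; the strength distribution, upset probability and sample sizes below are illustrative choices, not taken from the paper:

```python
# Monte Carlo sketch of the parallel (bracket) dynamics: strengths uniform on
# [0,1], smaller value = stronger player, the weaker player wins with upset
# probability q. The mean winner strength should shrink roughly like
# N**(-beta_par) with beta_par = 1 + ln(1-q)/ln 2. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
q = 0.25
beta_par = 1 + np.log(1 - q) / np.log(2)

def tournament(strengths):
    players = strengths.copy()
    while len(players) > 1:
        rng.shuffle(players)
        winners = []
        for a, b in zip(players[0::2], players[1::2]):
            strong, weak = (a, b) if a < b else (b, a)
            winners.append(weak if rng.random() < q else strong)
        players = np.array(winners)
    return players[0]

for k in (6, 8, 10, 12):                      # N = 64 ... 4096 players
    N = 2 ** k
    mean_x = np.mean([tournament(rng.random(N)) for _ in range(200)])
    print(f"N={N:5d}  mean winner strength {mean_x:.4f}  N^-beta {N ** -beta_par:.4f}")
```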

  12. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library

    NASA Astrophysics Data System (ADS)

    Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid

    2018-02-01

    SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details.
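    A minimal SymPy sketch of the underlying idea (not the authors' library; the kinetic scheme and symbol names are illustrative) builds the drift vector and diffusion matrix of the Fokker-Planck/Langevin approximation directly from the state-change vectors and propensities of a kinetic scheme:

```python
# Minimal SymPy sketch of the stochastization idea (not the authors' library):
# from the state-change vectors and propensities of a kinetic scheme, build the
# drift vector A and diffusion matrix B of the Fokker-Planck/Langevin
# approximation. The Lotka-Volterra style scheme below is illustrative.
import sympy as sp

x, y = sp.symbols("x y", positive=True)
k1, k2, k3 = sp.symbols("k1 k2 k3", positive=True)

# Scheme:  X -> 2X,   X + Y -> 2Y,   Y -> 0
changes = [sp.Matrix([1, 0]),     # X -> 2X      : (+1,  0)
           sp.Matrix([-1, 1]),    # X + Y -> 2Y  : (-1, +1)
           sp.Matrix([0, -1])]    # Y -> 0       : ( 0, -1)
propensities = [k1 * x, k2 * x * y, k3 * y]

A = sum((r * w for r, w in zip(changes, propensities)), sp.zeros(2, 1))
B = sum((r * r.T * w for r, w in zip(changes, propensities)), sp.zeros(2, 2))

print("drift (ODE right-hand side):", A.T)
print("diffusion matrix:", B)
```

    The drift alone gives the deterministic kinetic ODEs, so one pass over the scheme yields both the ordinary and the stochastic description, which is the sense of "stochastization" here.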

  13. Multifractal vector fields and stochastic Clifford algebra.

    PubMed

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2015-12-01

    In the mid 1980s, the development of multifractal concepts and techniques was an important breakthrough for complex system analysis and simulation, in particular, in turbulence and hydrology. Multifractals indeed aimed to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations or on simplified conceptual models. However, this development has been rather limited to dealing with scalar fields, whereas most of the fields of interest are vector-valued or even manifold-valued. We show in this paper that the combination of stable Lévy processes with Clifford algebra is a good candidate to bridge the present gap between theory and applications. We show that it indeed defines a convenient framework to generate multifractal vector fields, possibly multifractal manifold-valued fields, based on a few fundamental and complementary properties of Lévy processes and Clifford algebra. In particular, the vector structure of these algebras is much more tractable than the manifold structure of symmetry groups, while the Lévy stability grants a given statistical universality.

  14. Langevin dynamics for vector variables driven by multiplicative white noise: A functional formalism

    NASA Astrophysics Data System (ADS)

    Moreno, Miguel Vera; Arenas, Zochil González; Barci, Daniel G.

    2015-04-01

    We discuss general multidimensional stochastic processes driven by a system of Langevin equations with multiplicative white noise. In particular, we address the problem of how time-reversed diffusion processes are affected by the variety of conventions available for dealing with stochastic integrals. We present a functional formalism to build up the generating functional of correlation functions without any type of discretization of the Langevin equations at any intermediate step. The generating functional is characterized by a functional integration over two sets of commuting variables, as well as Grassmann variables. In this representation, the time-reversal transformation becomes a linear transformation in the extended variables, simplifying the complexity introduced by the mixture of prescriptions and the associated calculus rules. The stochastic calculus is codified in our formalism in the structure of the Grassmann algebra. We study some examples such as higher-order derivative Langevin equations and the functional representation of the micromagnetic stochastic Landau-Lifshitz-Gilbert equation.

  15. Algebraic methods in system theory

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.; Willems, J. C.; Willsky, A. S.

    1975-01-01

    Investigations on problems of the type which arise in the control of switched electrical networks are reported. The main results concern the algebraic structure and stochastic aspects of these systems. Future reports will contain more detailed applications of these results to engineering studies.

  16. Climate and weather across scales: singularities and stochastic Lévy-Clifford algebra

    NASA Astrophysics Data System (ADS)

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2016-04-01

    There have been several attempts to understand and simulate the fluctuations of weather and climate across scales. Beyond mono/uni-scaling approaches (e.g. using spectral analysis), this was done with the help of multifractal techniques that aim to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations of these equations (Royer et al., 2008, Lovejoy and Schertzer, 2013). However, these techniques were limited to dealing with scalar fields, instead of dealing directly with a system of complex interactions and nontrivial symmetries. The latter is unfortunately indispensable to answer the challenging question of being able to assess the climatology of (exo-)planets based on first principles (Pierrehumbert, 2013), or to fully address the question of the relevance of quasi-geostrophic turbulence and to define an effective, fractal dimension of the atmospheric motions (Schertzer et al., 2012). In this talk, we present a plausible candidate based on the combination of Lévy stable processes and Clifford algebra. Together they combine stochastic and structural properties that are strongly universal. They therefore define, with the help of a few physically meaningful parameters, a wide class of stochastic symmetries, as well as high-dimensional vector- or manifold-valued fields respecting these symmetries (Schertzer and Tchiguirinskaia, 2015). Lovejoy, S. & Schertzer, D., 2013. The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge, U.K.: Cambridge University Press. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp.81-83. Royer, J.F. et al., 2008. Multifractal analysis of the evolution of simulated precipitation over France in a climate scenario. C. R. Geoscience, 340, pp.431-440. Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp.327-336. Schertzer, D. & Tchiguirinskaia, I., 2015. Multifractal vector fields and stochastic Clifford algebra. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p.123127.

  17. Testing Transitivity of Preferences on Two-Alternative Forced Choice Data

    PubMed Central

    Regenwetter, Michel; Dana, Jason; Davis-Stober, Clintin P.

    2010-01-01

    As Duncan Luce and other prominent scholars have pointed out on several occasions, testing algebraic models against empirical data raises difficult conceptual, mathematical, and statistical challenges. Empirical data often result from statistical sampling processes, whereas algebraic theories are nonprobabilistic. Many probabilistic specifications lead to statistical boundary problems and are subject to nontrivial order-constrained statistical inference. The present paper discusses Luce's challenge for a particularly prominent axiom: Transitivity. The axiom of transitivity is a central component in many algebraic theories of preference and choice. We offer the currently most complete solution to the challenge in the case of transitivity of binary preference on the theory side and two-alternative forced choice on the empirical side, explicitly for up to five, and implicitly for up to seven, choice alternatives. We also discuss the relationship between our proposed solution and weak stochastic transitivity. We recommend abandoning the latter as a model of transitive individual preferences. PMID:21833217

  18. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation theory; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  19. Distributed Secure Coordinated Control for Multiagent Systems Under Strategic Attacks.

    PubMed

    Feng, Zhi; Wen, Guanghui; Hu, Guoqiang

    2017-05-01

    This paper studies a distributed secure consensus tracking control problem for multiagent systems subject to strategic cyber attacks modeled by a random Markov process. A hybrid stochastic secure control framework is established for designing a distributed secure control law such that mean-square exponential consensus tracking is achieved. A connectivity restoration mechanism is considered, and the properties of attack frequency and attack length rate are investigated. Based on the solutions of an algebraic Riccati equation and an algebraic Riccati inequality, a procedure for selecting the control gains is provided and stability analysis is carried out using Lyapunov's method. The effect of strategic attacks on discrete-time systems is also investigated. Finally, numerical examples are provided to illustrate the effectiveness of the theoretical analysis.
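    As a generic illustration of Riccati-based gain selection (not the paper's specific design conditions; the system matrices are arbitrary examples), SciPy solves the continuous-time algebraic Riccati equation directly:

```python
# Generic illustration of Riccati-based gain selection (not the paper's design
# conditions; matrices are arbitrary examples). SciPy solves the continuous
# algebraic Riccati equation A'P + PA - PBR^{-1}B'P + Q = 0; then K = R^{-1}B'P.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                   # feedback gain, u = -K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```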

  20. Modelling and performance analysis of clinical pathways using the stochastic process algebra PEPA.

    PubMed

    Yang, Xian; Han, Rui; Guo, Yike; Bradley, Jeremy; Cox, Benita; Dickinson, Robert; Kitney, Richard

    2012-01-01

    Hospitals nowadays have to serve numerous patients with limited medical staff and equipment while maintaining healthcare quality. Clinical pathway informatics is regarded as an efficient way to solve a series of hospital challenges. To date, conventional research lacks a mathematical model to describe clinical pathways. Existing vague descriptions cannot accurately capture the complexities of clinical pathways, which hinders their effective management and further optimization. Given this motivation, this paper presents a clinical pathway management platform, the Imperial Clinical Pathway Analyzer (ICPA). By extending the stochastic model performance evaluation process algebra (PEPA), ICPA introduces a clinical-pathway-specific model: clinical pathway PEPA (CPP). ICPA can simulate stochastic behaviours of a clinical pathway by extracting information from public clinical databases and other related documents using CPP. Thus, the performance of this clinical pathway, including its throughput, resource utilisation and passage time, can be quantitatively analysed. A typical clinical pathway on stroke extracted from a UK hospital is used to illustrate the effectiveness of ICPA. Three application scenarios are tested using ICPA: 1) redundant resources are identified and removed, so the number of patients being served is maintained at lower cost; 2) the patient passage time is estimated, providing the likelihood that patients can leave hospital within a specific period; 3) the maximum number of input patients is found, helping hospitals to decide whether they can serve more patients with the existing resource allocation. ICPA is an effective platform for clinical pathway management: 1) ICPA can describe a variety of components (state, activity, resource and constraints) in a clinical pathway, thus facilitating a proper understanding of the complexities involved in it; 2) ICPA supports the performance analysis of clinical pathways, thereby assisting hospitals to effectively manage time and resources in the clinical pathway.

  1. Using process algebra to develop predator-prey models of within-host parasite dynamics.

    PubMed

    McCaig, Chris; Fenton, Andy; Graham, Andrea; Shankland, Carron; Norman, Rachel

    2013-07-21

    As a first approximation of immune-mediated within-host parasite dynamics we can consider the immune response as a predator, with the parasite as its prey. In the ecological literature of predator-prey interactions there are a number of different functional responses used to describe how a predator reproduces in response to consuming prey. Until recently most of the models of the immune system that have taken a predator-prey approach have used simple mass action dynamics to capture the interaction between the immune response and the parasite. More recently Fenton and Perkins (2010) employed three of the most commonly used prey-dependent functional response terms from the ecological literature. In this paper we make use of a technique from computing science, process algebra, to develop mathematical models. The novelty of the process algebra approach is to allow stochastic models of the population (parasite and immune cells) to be developed from rules of individual cell behaviour. By using this approach in which individual cellular behaviour is captured we have derived a ratio-dependent response similar to that seen in the previous models of immune-mediated parasite dynamics, confirming that, whilst this type of term is controversial in ecological predator-prey models, it is appropriate for models of the immune system.
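    For orientation, the functional-response families referred to above can be written as follows, with P the parasite (prey) density and I the immune-effector (predator) density; the exact parameterizations used in the cited papers may differ:

```latex
% Common functional-response forms, with P the parasite (prey) density and I
% the immune-effector (predator) density; parameterizations in the cited
% papers may differ:
\text{mass action:}\quad f(P) = \beta P, \qquad
\text{Holling type II:}\quad f(P) = \frac{\beta P}{1 + \beta h P}, \qquad
\text{ratio-dependent:}\quad f(P,I) = \frac{\beta\,(P/I)}{1 + \beta h\,(P/I)}.
```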

  2. On the statistical mechanics of the 2D stochastic Euler equation

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    The dynamics of vortices and large scale structures is qualitatively very different in two dimensional flows compared to its three dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow such as the formation of large scale coherent structures, the existence of meta-stable states and random abrupt changes in the topology of the flow. In this paper we study stochastic dynamics of the finite dimensional approximation of the 2D Euler flow based on Lie algebra su(N) which preserves all integrals of motion. In particular, we exploit rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures and explore their properties and also study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large scale structures, meta-stability and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.

  3. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
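    The core stochastic-computing operation described above, values in [0,1] encoded as random pulse streams, multiplied by an AND gate, and read back as an average pulse occurrence rate, can be sketched in a few lines of Python (stream length and operand values are illustrative; the paper's hypergeometric error model is not reproduced here):

```python
# Sketch of the core stochastic-computing operation used in the DMNN: encode
# values in [0,1] as Bernoulli pulse streams, multiply with a bitwise AND, and
# read the result back as the average pulse occurrence rate. Stream length and
# operand values are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def to_pulses(p, n_bits=4096):
    """Encode probability p as a random pulse sequence of length n_bits."""
    return rng.random(n_bits) < p

a, b = 0.7, 0.4
product_stream = to_pulses(a) & to_pulses(b)   # AND gate multiplies the rates
print(f"stochastic estimate {product_stream.mean():.3f} vs exact {a * b:.3f}")
```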

  4. Fock space, symbolic algebra, and analytical solutions for small stochastic systems.

    PubMed

    Santos, Fernando A N; Gadêlha, Hermes; Gaffney, Eamonn A

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed, biochemical, or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics.
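    A plain numerical analogue of solving a chemical master equation for a small system (not the paper's Fock-space operator algebra, and a simpler birth-death scheme rather than Michaelis-Menten) is to build the truncated generator matrix and exponentiate it:

```python
# Plain numerical analogue (not the paper's Fock-space algebra, and a simpler
# birth-death scheme instead of Michaelis-Menten): build the truncated
# generator of the master equation for 0 -> X (rate k), X -> 0 (rate g per
# molecule) and exponentiate it. The stationary law is Poisson with mean k/g.
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

k, g, nmax = 5.0, 1.0, 40
Q = np.zeros((nmax + 1, nmax + 1))          # generator; column j = "from" state j
for n in range(nmax + 1):
    if n < nmax:
        Q[n + 1, n] += k                    # birth  n -> n+1
        Q[n, n] -= k
    if n > 0:
        Q[n - 1, n] += g * n                # death  n -> n-1
        Q[n, n] -= g * n

p0 = np.zeros(nmax + 1); p0[0] = 1.0        # start with zero molecules
p_t = expm(10.0 * Q) @ p0                   # distribution at t = 10
print("mean copy number:", p_t @ np.arange(nmax + 1), " (stationary mean k/g =", k / g, ")")
print("max deviation from Poisson:",
      np.max(np.abs(p_t - poisson.pmf(np.arange(nmax + 1), k / g))))
```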

  5. Algebraic methods for the solution of some linear matrix equations

    NASA Technical Reports Server (NTRS)

    Djaferis, T. E.; Mitter, S. K.

    1979-01-01

    The characterization of polynomials whose zeros lie in certain algebraic domains (and the unification of the ideas of Hermite and Lyapunov) is the basis for developing finite algorithms for the solution of linear matrix equations. Particular attention is given to the equations PA + A'P = Q (the Lyapunov equation) and P - A'PA = Q (the discrete Lyapunov equation). The Lyapunov equation appears in several areas of control theory such as stability theory, optimal control (evaluation of quadratic integrals), stochastic control (evaluation of covariance matrices) and in the solution of the algebraic Riccati equation using Newton's method.
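    A generic numerical counterpart of the Lyapunov equation above (illustrative matrices; not tied to the finite algorithms developed in the report) uses scipy.linalg.solve_continuous_lyapunov, which solves a X + X aᴴ = Q, so passing Aᵀ in place of a solves A'P + PA = Q:

```python
# Generic numerical counterpart of PA + A'P = Q (illustrative matrices, not the
# report's finite algorithms). scipy.linalg.solve_continuous_lyapunov(a, q)
# solves a X + X a^H = q, so passing A.T solves A'P + P A = Q.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
Q = -np.eye(2)                                   # stable A, negative definite Q
P = solve_continuous_lyapunov(A.T, Q)
print(P)
print("residual:", np.abs(A.T @ P + P @ A - Q).max())
print("P positive definite:", bool(np.all(np.linalg.eigvals(P) > 0)))
```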

  6. Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory

    DTIC Science & Technology

    1988-03-01

    [Abstract fragments, cleaned:] The report concerns robust algorithms for detecting a change in a stochastic process, analysed through the breakdown point and the influence function under a mixing assumption on the nominal measures, building on Huber's work. The noise terms are i.i.d. sequences of Gaussian random variables with identical variance σ², and a contamination term is chosen so as to lead to the worst-case, or earliest, breakdown.

  7. The exact fundamental solution for the Benes tracking problem

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam

    2009-05-01

    The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact fundamental solution for the FPKfe is derived for the state model of arbitrary dimension with Benes drift that requires only the computation of elementary transcendental functions and standard linear algebra techniques; no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary, discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are included, demonstrating its utility in practical implementation.

  8. General Multidecision Theory: Hypothesis Testing and Changepoint Detection with Applications to Homeland Security

    DTIC Science & Technology

    2014-10-06

    [Abstract fragments, cleaned:] The parameter θ ranges over a subset Θ̃ of ℓ-dimensional Euclidean space. The sub-σ-algebra F_n = F_n^X = σ(X_1^n) of F is generated by the stochastic process X_1^n = (X_1, ..., X_n). The asymptotic hypothesis testing theory developed in the report is based on the SLLN and on rates of convergence in the strong law for the log-likelihood ratio (LLR) processes λ_n(θ, θ̃) = log(dP_θ^n / dP_θ̃^n) = Σ_{k=1}^n log[ p_θ(X_k | X_1^{k-1}) / p_θ̃(X_k | X_1^{k-1}) ].

  9. Momentum Maps and Stochastic Clebsch Action Principles

    NASA Astrophysics Data System (ADS)

    Cruzeiro, Ana Bela; Holm, Darryl D.; Ratiu, Tudor S.

    2018-01-01

    We derive stochastic differential equations whose solutions follow the flow of a stochastic nonlinear Lie algebra operation on a configuration manifold. For this purpose, we develop a stochastic Clebsch action principle, in which the noise couples to the phase space variables through a momentum map. This special coupling simplifies the structure of the resulting stochastic Hamilton equations for the momentum map. In particular, these stochastic Hamilton equations collectivize for Hamiltonians that depend only on the momentum map variable. The Stratonovich equations are derived from the Clebsch variational principle and then converted into Itô form. In comparing the Stratonovich and Itô forms of the stochastic dynamical equations governing the components of the momentum map, we find that the Itô contraction term turns out to be a double Poisson bracket. Finally, we present the stochastic Hamiltonian formulation of the collectivized momentum map dynamics and derive the corresponding Kolmogorov forward and backward equations.

  10. On orthogonality preserving quadratic stochastic operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhamedov, Farrukh; Taha, Muhammad Hafizuddin Mohd

    2015-05-15

    A quadratic stochastic operator (in short QSO) is usually used to present the time evolution of differing species in biology. Some quadratic stochastic operators have been studied by Lotka and Volterra. In the present paper, we first give a simple characterization of Volterra QSOs in terms of absolute continuity of discrete measures. Further, we introduce a notion of orthogonality-preserving QSO, and describe such operators defined on the two-dimensional simplex. It turns out that orthogonality-preserving QSOs are permutations of Volterra QSOs. The associativity of genetic algebras generated by orthogonality-preserving QSOs is studied too.
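    In the standard finite-dimensional setting (shown here only for orientation; the paper works with orthogonality preservation, and a later entry below with a continual state space), a Volterra QSO on the two-dimensional simplex can be iterated in a few lines, with skew-symmetry of the coefficients keeping the trajectory on the simplex:

```python
# Finite-dimensional Volterra QSO on the 2-simplex, shown only for orientation
# (the paper's setting differs):
#   V(x)_k = x_k * (1 + sum_i a[k, i] * x[i]),  a skew-symmetric, |a[k, i]| <= 1.
# Skew-symmetry keeps every trajectory on the simplex.
import numpy as np

a = np.array([[0.0, 0.5, -0.3],
              [-0.5, 0.0, 0.8],
              [0.3, -0.8, 0.0]])          # skew-symmetric, entries in [-1, 1]

def volterra_qso(x):
    return x * (1.0 + a @ x)

x = np.array([0.2, 0.3, 0.5])
for _ in range(50):
    x = volterra_qso(x)
print(x, " sum =", x.sum())               # components stay nonnegative, sum stays 1
```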

  11. Derivation of rigorous conditions for high cell-type diversity by algebraic approach.

    PubMed

    Yoshida, Hiroshi; Anai, Hirokazu; Horimoto, Katsuhisa

    2007-01-01

    The development of a multicellular organism is a dynamic process. Starting with one or a few cells, the organism develops into different types of cells with distinct functions. We have constructed a simple model by considering the cell number increase and the cell-type order conservation, and have assessed conditions for cell-type diversity. This model is based on a stochastic Lindenmayer system with cell-to-cell interactions for three types of cells. In the present model, we have successfully derived complex but rigorous algebraic relations between the proliferation and transition rates for cell-type diversity by using a symbolic method: quantifier elimination (QE). Surprisingly, three modes for the proliferation and transition rates have emerged for large ratios of the initial cells to the developed cells. The three modes have revealed that the equality between the development rates for the highest cell-type diversity is reduced during the development process of multicellular organisms. Furthermore, we have found that the highest cell-type diversity originates from order conservation.

  12. Some Applications Of Semigroups And Computer Algebra In Discrete Structures

    NASA Astrophysics Data System (ADS)

    Bijev, G.

    2009-11-01

    An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and the matrices corresponding to them are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined above and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal to, or close to, the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples of computations with binary relations using Maple are given.

  13. Patterns of Stochastic Behavior in Dynamically Unstable High-Dimensional Biochemical Networks

    PubMed Central

    Rosenfeld, Simon

    2009-01-01

    The question of dynamical stability and stochastic behavior of large biochemical networks is discussed. It is argued that stringent conditions of asymptotic stability have very little chance to materialize in a multidimensional system described by the differential equations of chemical kinetics. The reason is that the criteria of asymptotic stability (Routh-Hurwitz, Lyapunov criteria, Feinberg’s Deficiency Zero theorem) would impose the limitations of very high algebraic order on the kinetic rates and stoichiometric coefficients, and there are no natural laws that would guarantee their unconditional validity. Highly nonlinear, dynamically unstable systems, however, are not necessarily doomed to collapse, as a simple Jacobian analysis would suggest. It is possible that their dynamics may assume the form of pseudo-random fluctuations quite similar to a shot noise, and, therefore, their behavior may be described in terms of Langevin and Fokker-Planck equations. We have shown by simulation that the resulting pseudo-stochastic processes obey the heavy-tailed Generalized Pareto Distribution with temporal sequence of pulses forming the set of constituent-specific Poisson processes. Being applied to intracellular dynamics, these properties are naturally associated with burstiness, a well documented phenomenon in the biology of gene expression. PMID:19838330

  14. Stochastic Games. I. Foundations,

    DTIC Science & Technology

    1982-04-01

    [Abstract fragments, cleaned:] The report lays foundations for the theory of stochastic games. Section 2 reworks the Bewley-Kohlberg result and integrates it with Shapley's: the values of the r-discount game, and the stationary optimal strategies, have Puiseux expansions. The remainder of the record consists of truncated bibliography entries.

  15. An advanced environment for hybrid modeling of biological systems based on modelica.

    PubMed

    Pross, Sabrina; Bachmann, Bernhard

    2011-01-20

    Biological systems are often very complex so that an appropriate formalism is needed for modeling their behavior. Hybrid Petri Nets, consisting of time-discrete Petri Net elements as well as continuous ones, have proven to be ideal for this task. Therefore, a new Petri Net library was implemented based on the object-oriented modeling language Modelica which allows the modeling of discrete, stochastic and continuous Petri Net elements by differential, algebraic and discrete equations. An appropriate Modelica-tool performs the hybrid simulation with discrete events and the solution of continuous differential equations. A special sub-library contains so-called wrappers for specific reactions to simplify the modeling process. The Modelica-models can be connected to Simulink-models for parameter optimization, sensitivity analysis and stochastic simulation in Matlab. The present paper illustrates the implementation of the Petri Net component models, their usage within the modeling process and the coupling between the Modelica-tool Dymola and Matlab/Simulink. The application is demonstrated by modeling the metabolism of Chinese Hamster Ovary Cells.

  16. An Algebraic Construction of Duality Functions for the Stochastic U_q(A_n^(1)) Vertex Model and Its Degenerations

    NASA Astrophysics Data System (ADS)

    Kuan, Jeffrey

    2018-03-01

    A recent paper (Kuniba in Nucl Phys B 913:248-277, 2016) introduced the stochastic U_q(A_n^(1)) vertex model. The stochastic S-matrix is related to the R-matrix of the quantum group U_q(A_n^(1)) by a gauge transformation. We will show that a certain function D_m^+ intertwines with the transfer matrix and its space reversal. When interpreting the transfer matrix as the transition matrix of a discrete-time totally asymmetric particle system on the one-dimensional lattice Z, the function D_m^+ becomes a Markov duality function D_m which only depends on q and the vertical spin parameters μ_x. By considering degenerations in the spectral parameter, the duality results also hold on a finite lattice with closed boundary conditions, and for a continuous-time degeneration. This duality function had previously appeared in a multi-species ASEP(q, j) process (Kuan in A multi-species ASEP(q, j) and q-TAZRP with stochastic duality, 2017). The proof here uses that the R-matrix intertwines with the co-product, but does not explicitly use the Yang-Baxter equation. It will also be shown that the stochastic U_q(A_n^(1)) vertex model is a multi-species version of a stochastic vertex model studied in Borodin and Petrov (Higher spin six vertex model and symmetric rational functions, 2016) and Corwin and Petrov (Commun Math Phys 343:651-700, 2016). This will be done by generalizing the fusion process of Corwin and Petrov (2016) and showing that it matches the fusion of Kulish et al. (Lett Math Phys 5:393-403, 1981) up to the gauge transformation. We also show, by direct computation, that the multi-species q-Hahn Boson process (which arises at a special value of the spectral parameter) also satisfies duality with respect to D_∞, generalizing the single-species result of Corwin (Int Math Res Not 2015:5577-5603, 2015).

  17. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.

  18. Horsetail matching: a flexible approach to optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.

  19. Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2017-02-01

    This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of stochastic CDN are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example on a Chua's circuit network is given to validate the effectiveness of the theoretical results.

  20. Noise and Dissipation on Coadjoint Orbits

    NASA Astrophysics Data System (ADS)

    Arnaudon, Alexis; De Castro, Alex L.; Holm, Darryl D.

    2018-02-01

    We derive and study stochastic dissipative dynamics on coadjoint orbits by incorporating noise and dissipation into mechanical systems arising from the theory of reduction by symmetry, including a semidirect product extension. Random attractors are found for this general class of systems when the Lie algebra is semi-simple, provided the top Lyapunov exponent is positive. We study in detail two canonical examples, the free rigid body and the heavy top, whose stochastic integrable reductions are found and numerical simulations of their random attractors are shown.

  1. Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Marcus, S. I.

    1975-01-01

    The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.

  2. The complexity of divisibility.

    PubMed

    Bausch, Johannes; Cubitt, Toby

    2016-09-01

    We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
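    A toy numerical illustration of the divisibility question (the paper addresses its computational complexity, not this heuristic) is to check whether the principal matrix square root of a given stochastic matrix happens to be stochastic, which would exhibit the matrix as divisible; the matrix values are made up:

```python
# Toy illustration of stochastic-matrix divisibility (the paper addresses its
# complexity, not this heuristic): if the principal square root of P is itself
# stochastic, then P is exhibited as the square of a stochastic matrix. A
# negative outcome would not by itself prove indivisibility. P is made up.
import numpy as np
from scipy.linalg import sqrtm

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.real_if_close(sqrtm(P))
is_stochastic = bool(np.all(B >= -1e-12) and np.allclose(B.sum(axis=1), 1.0))
print(B)
print("principal square root is stochastic:", is_stochastic)
```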

  3. A stochastic process approach of the drake equation parameters

    NASA Astrophysics Data System (ADS)

    Glade, Nicolas; Ballet, Pascal; Bastien, Olivier

    2012-04-01

    The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step towards quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is rather a simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate of the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that will be able to provide both a temporal structure to the Drake equation (i.e. introduce time in the Drake formula in order to obtain something like N(t)) and a first standard error measure.
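    A minimal Monte Carlo sketch of attaching an error estimate to the Drake equation (not the authors' stochastic process; every distribution below is an illustrative assumption) simply propagates parameter uncertainty through the product:

```python
# Hedged sketch (not the authors' stochastic process): Monte Carlo propagation
# of parameter uncertainty through the Drake product, giving a distribution
# for N with a mean and spread. Every distribution below is an illustrative
# assumption, not a value taken from the paper.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
R_star = rng.uniform(1.0, 3.0, n)        # star formation rate [1/yr]
f_p    = rng.uniform(0.2, 1.0, n)        # fraction of stars with planets
n_e    = rng.uniform(0.5, 2.0, n)        # habitable planets per such star
f_l    = rng.uniform(0.0, 1.0, n)        # fraction developing life
f_i    = rng.uniform(0.0, 1.0, n)        # fraction developing intelligence
f_c    = rng.uniform(0.0, 0.2, n)        # fraction that communicate
L      = rng.lognormal(np.log(1e3), 1.0, n)   # communicating lifetime [yr]

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"mean N = {N.mean():.1f}, std = {N.std():.1f}, median = {np.median(N):.1f}")
```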

  4. Open Quantum Systems and Classical Trajectories

    NASA Astrophysics Data System (ADS)

    Rebolledo, Rolando

    2004-09-01

    A Quantum Markov Semigroup consists of a family 𝒯 = (𝒯_t)_{t ∈ ℝ+} of normal, w*-continuous, completely positive maps on a von Neumann algebra 𝔐 which preserve the unit and satisfy the semigroup property. This class of semigroups has been extensively used to represent open quantum systems. This article is aimed at studying the existence of a 𝒯-invariant abelian subalgebra 𝔄 of 𝔐. When this happens, the restriction of 𝒯_t to 𝔄 defines a classical Markov semigroup T = (T_t)_{t ∈ ℝ+}, associated to a classical Markov process X = (X_t)_{t ∈ ℝ+}. The structure (𝔄, T, X) unravels the quantum Markov semigroup 𝒯, providing a bridge between open quantum systems and classical stochastic processes.

  5. Hierarchy of forward-backward stochastic Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Ke, Yaling; Zhao, Yi

    2016-07-01

    Driven by the impetus to simulate quantum dynamics in photosynthetic complexes or even larger molecular aggregates, we have established a hierarchy of forward-backward stochastic Schrödinger equations in the light of a stochastic unravelling of the symmetric part of the influence functional in the path-integral formalism of the reduced density operator. The method is numerically exact and is suited for the Debye-Drude spectral density, the Ohmic spectral density with an algebraic or exponential cutoff, as well as discrete vibrational modes. The power of this method is verified by performing calculations of time-dependent population differences in the valuable spin-boson model from zero to high temperatures. By simulating excitation energy transfer dynamics of the realistic full FMO trimer, some important features are revealed.

  6. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    2014-08-01

    In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered and it has been shown that it is O(1/n²). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on some examples. The obtained results reveal that the proposed method is more accurate and efficient in comparison with the block pulse functions method.

  7. On Volterra quadratic stochastic operators with continual state space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganikhodjaev, Nasir; Hamzah, Nur Zatul Akmar

    2015-05-15

    Let (X,F) be a measurable space, and S(X,F) be the set of all probability measures on (X,F), where X is a state space and F is a σ-algebra on X. We consider a nonlinear transformation (quadratic stochastic operator) defined by (Vλ)(A) = ∫_X ∫_X P(x,y,A) dλ(x) dλ(y), where P(x,y,A) is regarded as a function of two variables x and y with fixed A ∈ F. A quadratic stochastic operator V is called regular if, for any initial measure, the strong limit lim_{n→∞} V^n(λ) exists. In this paper, we construct a family of quadratic stochastic operators defined on the segment X = [0,1] with Borel σ-algebra F on X, prove their regularity and show that the limit measure is a Dirac measure.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  9. Differential form representation of stochastic electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Haider, Michael; Russer, Johannes A.

    2017-09-01

    In this work, we revisit the theory of stochastic electromagnetic fields using exterior differential forms. We present a short overview as well as a brief introduction to the application of differential forms in electromagnetic theory. Within the framework of exterior calculus we derive equations for the second-order moments describing stochastic electromagnetic fields. Since the resulting objects are continuous quantities in space, a discretization scheme based on the Method of Moments (MoM) is introduced for numerical treatment. The MoM is applied in such a way that the notation of exterior calculus is maintained while we still arrive at the same set of algebraic equations as obtained when formulating the theory using the traditional notation of vector calculus. We conclude with an analytic calculation of the radiated electric field of two Hertzian dipoles excited by uncorrelated random currents.

  10. Two Different Template Replicators Coexisting in the Same Protocell: Stochastic Simulation of an Extended Chemoton Model

    PubMed Central

    Zachar, István; Fedor, Anna; Szathmáry, Eörs

    2011-01-01

    The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. Additionally to the well-known chemoton, we also implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example for hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix. PMID:21818258

  11. Two different template replicators coexisting in the same protocell: stochastic simulation of an extended chemoton model.

    PubMed

    Zachar, István; Fedor, Anna; Szathmáry, Eörs

    2011-01-01

    The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. Additionally to the well-known chemoton, we also implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example for hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix.

  12. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
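
    The scaling relation σ(R) ~ S^(-β) reported here is usually quantified by a log-log regression of the growth-rate standard deviation against average size; a minimal sketch on synthetic data (not the authors' data sets) follows.

    ```python
    import numpy as np

    # Illustrative only: recover a scaling exponent beta from synthetic data
    # obeying sigma(R) ~ S**(-beta).  In empirical work one first bins units by
    # average size S and computes the standard deviation of R within each bin.
    rng = np.random.default_rng(1)
    beta_true = 0.14
    S = np.logspace(2, 8, 30)                          # average sizes
    sigma = S**(-beta_true) * np.exp(0.05 * rng.normal(size=S.size))

    slope, _ = np.polyfit(np.log(S), np.log(sigma), 1)
    print("estimated beta:", -slope)                   # approximately 0.14
    ```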

  13. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation sigma(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation sigma(R) on the average value of the wages with a scaling exponent beta approximately 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation sigma(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of sigma(R) on the average payroll with a scaling exponent beta approximately -0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  14. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE PAGES

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...

    2017-12-20

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
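
    To make the modeling style concrete, here is a minimal pyomo.dae sketch: a single first-order ODE declared on a continuous time domain and automatically discretized by finite differences. The model and parameter values are illustrative assumptions, not an example from the paper.

    ```python
    from pyomo.environ import ConcreteModel, Constraint, TransformationFactory, Var
    from pyomo.dae import ContinuousSet, DerivativeVar

    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0.0, 1.0))       # continuous time domain
    m.x = Var(m.t)
    m.dxdt = DerivativeVar(m.x, wrt=m.t)

    # Simple decay dynamics dx/dt = -x with x(0) = 1 (illustrative).
    m.ode = Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t])
    m.x[m.t.first()].fix(1.0)

    # Automatic transformation into a finite-dimensional algebraic problem.
    TransformationFactory('dae.finite_difference').apply_to(
        m, nfe=20, scheme='BACKWARD')
    ```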

  15. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.

  16. Learning coefficient of generalization error in Bayesian estimation and Vandermonde matrix-type singularity.

    PubMed

    Aoyagi, Miki; Nagata, Kenji

    2012-06-01

    The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities, by using a new approach: focusing on the generators of the ideal, which defines singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities and the explicit values with certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models.

  17. Continuum Model for River Networks

    NASA Astrophysics Data System (ADS)

    Giacometti, Achille; Maritan, Amos; Banavar, Jayanth R.

    1995-07-01

    The effects of erosion, avalanching, and random precipitation are captured in a simple stochastic partial differential equation for modeling the evolution of river networks. Our model leads to a self-organized structured landscape and to abstraction and piracy of the smaller tributaries as the evolution proceeds. An algebraic distribution of the average basin areas and a power law relationship between the drainage basin area and the river length are found.

  18. An algebra of reversible computation.

    PubMed

    Wang, Yong

    2016-01-01

    We design an axiomatization for reversible computation called reversible ACP (RACP). It has four extendible modules: basic reversible processes algebra, algebra of reversible communicating processes, recursion and abstraction. Just like process algebra ACP in classical computing, RACP can be treated as an axiomatization foundation for reversible computation.

  19. An Information Theory Approach to Nonlinear, Nonequilibrium Thermodynamics

    NASA Astrophysics Data System (ADS)

    Rogers, David M.; Beck, Thomas L.; Rempe, Susan B.

    2011-10-01

    Using the problem of ion channel thermodynamics as an example, we illustrate the idea of building up complex thermodynamic models by successively adding physical information. We present a new formulation of information algebra that generalizes methods of both information theory and statistical mechanics. From this foundation we derive a theory for ion channel kinetics, identifying a nonequilibrium `process' free energy functional in addition to the well-known integrated work functionals. The Gibbs-Maxwell relation for the free energy functional is a Green-Kubo relation, applicable arbitrarily far from equilibrium, that captures the effect of non-local and time-dependent behavior from transient thermal and mechanical driving forces. Comparing the physical significance of the Lagrange multipliers to the canonical ensemble suggests definitions of nonequilibrium ensembles at constant capacitance or inductance in addition to constant resistance. Our result is that statistical mechanical descriptions derived from a few primitive algebraic operations on information can be used to create experimentally-relevant and computable models. By construction, these models may use information from more detailed atomistic simulations. Two surprising consequences to be explored in further work are that (in)distinguishability factors are automatically predicted from the problem formulation and that a direct analogue of the second law for thermodynamic entropy production is found by considering information loss in stochastic processes. The information loss identifies a novel contribution from the instantaneous information entropy that ensures non-negative loss.

  20. SATA Stochastic Algebraic Topology and Applications

    DTIC Science & Technology

    2017-01-23

    Harris et al., "Selective sampling after solving a convex problem," arXiv:1609.05609 [math, stat] (Sept. 2016). 13. Baryshnikov ... Functions, Adv. Math. 245, 573-586, 2014. 15. Y. Baryshnikov, D. Liberzon, Robust stability conditions for switched linear systems: Commutator bounds ... Consistency via Kernel Estimation, arXiv:1407.5272 [math, stat] (July 2014), to appear in Bernoulli. 18. O. Bobrowski and S. Weinberger

  1. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  2. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  3. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

    Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification methods (SI and SSI). In each case, the data is arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are proposed: SSA, SSI-COV and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process the shaking table test data of a 6-story steel frame. Features contained in the vibration data are extracted by the proposed method. Damage detection can then be investigated from the test data of the frame structure through subspace-based and nullspace-based damage indices.
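
    As a small illustration of the SVD step common to these methods, the sketch below builds the trajectory (Hankel) matrix of a synthetic noisy signal, in the spirit of SSA, and inspects its singular values; it is not the authors' damage-detection pipeline, and the signal and window length are arbitrary.

    ```python
    import numpy as np

    # Illustrative SSA-style step: the leading singular values of the trajectory
    # matrix separate a dominant oscillatory component from broadband noise.
    rng = np.random.default_rng(2)
    t = np.arange(400)
    y = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=t.size)

    L = 40                                           # window length (assumed)
    K = y.size - L + 1
    X = np.array([y[i:i + L] for i in range(K)]).T   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    print(s[:4] / s.sum())                           # the first pair dominates
    ```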

  4. Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.

    PubMed

    Grossi, Giuliano

    2009-08-01

    Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, so avoiding oscillatory behaviors or asymptotically unstable convergence. The presence of stochastic dynamics potentially prevents the network from falling into shallow local minima of the energy function, i.e., minima quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating such a process. The model uses pseudo-Boolean functions to express both the problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, that is, functions easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model, characterizing its state space distribution at thermal equilibrium in terms of a Markov chain, and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from computational graph theory.
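
    The following sketch shows the kind of stochastic binary-unit dynamics described in this abstract: Glauber-type updates on a quadratic pseudo-Boolean energy with a slowly decreasing temperature. The weights, biases and cooling schedule are arbitrary illustrations, not the model's learned parameters.

    ```python
    import numpy as np

    # Illustrative stochastic Hopfield-style dynamics on E(x) = -0.5 x'Wx - b'x,
    # x in {0,1}^n.  The temperature T modulates the noise that helps the search
    # escape shallow local minima; it is lowered slowly (simulated annealing).
    rng = np.random.default_rng(3)
    n = 20
    W = rng.normal(size=(n, n)); W = (W + W.T) / 2.0; np.fill_diagonal(W, 0.0)
    b = rng.normal(size=n)
    x = rng.integers(0, 2, size=n).astype(float)

    T = 1.0
    for sweep in range(500):
        for i in rng.permutation(n):
            field = W[i] @ x + b[i]                  # local field at unit i
            p_on = 1.0 / (1.0 + np.exp(-field / T))  # Glauber acceptance
            x[i] = 1.0 if rng.random() < p_on else 0.0
        T *= 0.995                                   # slow cooling
    print(-0.5 * x @ W @ x - b @ x)                  # final energy
    ```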

  5. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

    a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34,35), Mood and Graybill (52... following linearity property for expectations of jointly distributed random variables is derived. Theorem 1.1: If X and Y are real random variables... appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Yu, E-mail: yu.pan@anu.edu.au; Miao, Zibo, E-mail: zibo.miao@anu.edu.au; Amini, Hadis, E-mail: nhamini@stanford.edu

    Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.

  7. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  8. Assessing Algebraic Solving Ability: A Theoretical Framework

    ERIC Educational Resources Information Center

    Lian, Lim Hooi; Yew, Wun Thiam

    2012-01-01

    Algebraic solving ability had been discussed by many educators and researchers. There exists no definite definition for algebraic solving ability as it can be viewed from different perspectives. In this paper, the nature of algebraic solving ability in terms of algebraic processes that demonstrate the ability in solving algebraic problem is…

  9. Capitalizing on Basic Brain Processes in Developmental Algebra--Part 2

    ERIC Educational Resources Information Center

    Laughbaum, Edward D.

    2011-01-01

    Basic brain function is not a mystery. Given that neuroscientists understand its basic functioning processes, one wonders what their research suggests to teachers of developmental algebra. What if we knew how to teach so as to improve understanding of the algebra taught to developmental algebra students? What if we knew how the brain processes…

  10. Capitalizing on Basic Brain Processes in Developmental Algebra--Part One

    ERIC Educational Resources Information Center

    Laughbaum, Edward D.

    2011-01-01

    Basic brain function is not a mystery. Given that neuroscientists understand the brain's basic functioning processes, one wonders what their research suggests to teachers of developmental algebra. What if we knew how to teach so as to improve understanding of the algebra taught to developmental algebra students? What if we knew how the brain…

  11. Control strategies for a stochastic model of host-parasite interaction in a seasonal environment.

    PubMed

    Gómez-Corral, A; López García, M

    2014-08-07

    We examine a nonlinear stochastic model for the parasite load of a single host over a predetermined time interval. We use nonhomogeneous Poisson processes to model the acquisition of parasites, the parasite-induced host mortality, the natural (non-parasite-induced) host mortality, and the reproduction and death of parasites within the host. Algebraic results are first obtained on the age-dependent distribution of the number of parasites infesting the host at an arbitrary time t. The interest is in control strategies based on isolation of the host and the use of an anthelmintic at a certain intervention instant t0. This means that the host is free living in a seasonal environment, and it is transferred to an uninfected area at age t0. In the uninfected area, the host does not acquire new parasites, undergoes a treatment to decrease the parasite load, and its natural and parasite-induced mortality are altered. For a suitable selection of t0, we present two control criteria that appropriately balance effectiveness and cost of intervention. Our approach is based on simple probabilistic principles, and it allows us to examine seasonal fluctuations of gastrointestinal nematode burden in growing lambs. Copyright © 2014 Elsevier Ltd. All rights reserved.
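
    The acquisition-of-parasites component can be simulated, for illustration, by Lewis-Shedler thinning of a nonhomogeneous Poisson process; the seasonal intensity below is a made-up example, not the rate function fitted by the authors.

    ```python
    import numpy as np

    # Illustrative thinning simulation of a nonhomogeneous Poisson process with
    # a seasonal intensity lambda(t) bounded above by lam_max.
    rng = np.random.default_rng(4)

    def lam(t):                              # hypothetical seasonal rate (per day)
        return 2.0 + 1.5 * np.sin(2 * np.pi * t / 365.0)

    T_end, lam_max = 365.0, 3.5
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate from the homogeneous process
        if t > T_end:
            break
        if rng.random() < lam(t) / lam_max:  # accept with probability lambda(t)/lam_max
            events.append(t)
    print(len(events), "acquisition events in one year")
    ```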

  12. Gradient-based stochastic estimation of the density matrix

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
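
    For orientation only, the sketch below uses a generic Rademacher-probe estimator of the diagonal of f(H); it is not the gradient-based probing scheme of the paper, but it illustrates why the stochastic error shrinks as the number of probe vectors S grows.

    ```python
    import numpy as np

    # Illustrative stochastic estimation of diag(A), with A = f(H) a Fermi function
    # of a random symmetric H:  diag(A) ~ mean over probes of z * (A z), z Rademacher.
    rng = np.random.default_rng(5)
    N = 200
    H = rng.normal(size=(N, N)); H = (H + H.T) / np.sqrt(2 * N)
    w, V = np.linalg.eigh(H)
    beta, mu = 20.0, 0.0
    A = (V * (1.0 / (1.0 + np.exp(beta * (w - mu))))) @ V.T   # density matrix f(H)

    S = 500                                   # number of probe vectors
    est = np.zeros(N)
    for _ in range(S):
        z = rng.choice([-1.0, 1.0], size=N)
        est += z * (A @ z)
    est /= S
    print(np.abs(est - np.diag(A)).max())     # error decreases roughly as 1/sqrt(S)
    ```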

  13. Tomographic reconstruction of atmospheric turbulence with the use of time-dependent stochastic inversion.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M

    2007-09-01

    Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from travel times of sound propagation between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations both in space and time and therefore yields more accurate reconstruction of these fields in comparison with algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence which is a more general concept than a widely used hypothesis of frozen turbulence. The developed theory is applied to reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented and errors in reconstruction of these fields are studied.
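
    Schematically, stochastic inversion of this type is a Gauss-Markov (best linear unbiased) estimate; writing m for the field values to be reconstructed and d for the travel-time data (notation ours, for illustration),

    \hat{m} = \mu_m + C_{md}\, C_{dd}^{-1} (d - \mu_d),

    where C_{md} and C_{dd} are the model-data and data-data covariance matrices; in TDSI they are assembled from the spatial-temporal covariance functions of temperature and wind velocity fluctuations discussed above.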

  14. Bond-based linear indices of the non-stochastic and stochastic edge-adjacency matrix. 1. Theory and modeling of ChemPhys properties of organic molecules.

    PubMed

    Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo

    2010-11-01

    Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPR by analyzing regressions of the novel indices for selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to the well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity/diversity analysis and drug discovery protocols.
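
    As a rough illustration of the matrix machinery involved (with a toy edge-adjacency matrix for a three-bond chain and one plausible normalization; the paper's exact definitions may differ), a stochastic edge-adjacency matrix can be formed by row-normalizing E and taking powers:

    ```python
    import numpy as np

    # Toy example, not a real molecule: E is the edge (bond) adjacency matrix of
    # a three-bond chain.  A row-stochastic version ES divides each row by its
    # sum; powers ES^k stand in for the kth stochastic edge-adjacency matrix.
    E = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])

    def stochastic_power(E, k):
        ES = E / E.sum(axis=1, keepdims=True)   # row-normalize
        return np.linalg.matrix_power(ES, k)

    print(stochastic_power(E, 2))               # rows still sum to 1
    ```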

  15. An Arithmetic-Algebraic Work Space for the Promotion of Arithmetic and Algebraic Thinking: Triangular Numbers

    ERIC Educational Resources Information Center

    Hitt, Fernando; Saboya, Mireille; Cortés Zavala, Carlos

    2016-01-01

    This paper presents an experiment that attempts to mobilise an arithmetic-algebraic way of thinking in order to articulate between arithmetic thinking and the early algebraic thinking, which is considered a prelude to algebraic thinking. In the process of building this latter way of thinking, researchers analysed pupils' spontaneous production…

  16. Students’ Algebraic Thinking Process in Context of Point and Line Properties

    NASA Astrophysics Data System (ADS)

    Nurrahmi, H.; Suryadi, D.; Fatimah, S.

    2017-09-01

    Learning of school algebra is often limited to symbols and operating procedures, so students are able to work on problems that only require the ability to operate on symbols but are unable to generalize a pattern, which is one part of algebraic thinking. The purpose of this study is to create a didactic design that facilitates students’ algebraic thinking process through the generalization of patterns, especially in the context of the properties of points and lines. This study used a qualitative method and Didactical Design Research (DDR). The result is that students are able to make factual, contextual, and symbolic generalizations. This happens because the generalization arises from facts expressed in local terms; the generalization then produces an algebraic formula that is described in the context and from the perspective of each student. After that, the formula moves from symbols expressed in the students’ own language to algebraic letter symbols. It can be concluded that the design has facilitated students’ algebraic thinking process through the generalization of patterns, especially in the context of the properties of points and lines. The implication of this study is that the design can be used as an alternative teaching material in the learning of school algebra.

  17. An Electro-Optical Image Algebra Processing System for Automatic Target Recognition

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick Cyrus

    The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single instruction multiple data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point use operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.

  18. Intrinsic noise analyzer: a software package for the exploration of stochastic biochemical kinetics using the system size expansion.

    PubMed

    Thomas, Philipp; Matuschek, Hannes; Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system Ginac with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license.
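
    To ground the terminology, the sketch below runs the Stochastic Simulation Algorithm (Gillespie's direct method) for a toy birth-death gene-expression model, the simplest setting in which system-size-expansion results of this kind can be compared against simulation; the rate constants are arbitrary.

    ```python
    import numpy as np

    # Gillespie direct method for a birth-death process: production at rate
    # k_prod, degradation at rate k_deg * n (rates are illustrative).
    rng = np.random.default_rng(6)
    k_prod, k_deg = 10.0, 0.1
    n, t, t_end = 0, 0.0, 200.0
    while t < t_end:
        a = np.array([k_prod, k_deg * n])     # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)        # time to next reaction
        if rng.random() < a[0] / a0:
            n += 1                            # production event
        else:
            n -= 1                            # degradation event
    print("final copy number:", n, "(deterministic mean:", k_prod / k_deg, ")")
    ```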

  19. Intrinsic Noise Analyzer: A Software Package for the Exploration of Stochastic Biochemical Kinetics Using the System Size Expansion

    PubMed Central

    Thomas, Philipp; Matuschek, Hannes; Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen’s system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system Ginac with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA’s performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license. PMID:22723865

  20. Rupture or Continuity: The Arithmetico-Algebraic Thinking as an Alternative in a Modelling Process in a Paper and Pencil and Technology Environment

    ERIC Educational Resources Information Center

    Hitt, Fernando; Saboya, Mireille; Zavala, Carlos Cortés

    2017-01-01

    Part of the research community that has followed the Early Algebra paradigm is currently delimiting the differences between arithmetic thinking and algebraic thinking. This trend could prevent new research approaches to the problem of learning algebra, hiding the importance of considering an arithmetico-algebraic thinking, a new approach which…

  1. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  2. First-passage dynamics of linear stochastic interface models: weak-noise theory and influence of boundary conditions

    NASA Astrophysics Data System (ADS)

    Gross, Markus

    2018-03-01

    We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.

  3. Visual Salience of Algebraic Transformations

    ERIC Educational Resources Information Center

    Kirshner, David; Awtry, Thomas

    2004-01-01

    Information processing researchers have assumed that algebra symbol skills depend on mastery of the abstract rules presented in the curriculum (Matz, 1980; Sleeman, 1986). Thus, students' ubiquitous algebra errors have been taken as indicating the need to embed algebra in rich contextual settings (Kaput, 1995; National Council of Teachers of…

  4. A Scalable Computational Framework for Establishing Long-Term Behavior of Stochastic Reaction Networks

    PubMed Central

    Khammash, Mustafa

    2014-01-01

    Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191

  5. Principles of Stagewise Separation Process Calculations: A Simple Algebraic Approach Using Solvent Extraction.

    ERIC Educational Resources Information Center

    Crittenden, Barry D.

    1991-01-01

    A simple liquid-liquid equilibrium (LLE) system involving a constant partition coefficient based on solute ratios is used to develop an algebraic understanding of multistage contacting in a first-year separation processes course. This algebraic approach to the LLE system is shown to be operable for the introduction of graphical techniques…
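
    As a concrete instance of this kind of algebraic result (a standard relation for cross-current contacting with fresh solvent at each stage and a constant ratio-based partition coefficient K; it is quoted here for illustration and may not match the article's exact system), the fraction of solute remaining after N ideal stages is

    \frac{X_N}{X_0} = \frac{1}{(1 + E)^N}, \qquad E = \frac{K\,S}{F},

    where S and F are the solvent and feed (carrier) amounts per stage and E is the extraction factor.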

  6. Meanings Given to Algebraic Symbolism in Problem-Posing

    ERIC Educational Resources Information Center

    Cañadas, María C.; Molina, Marta; del Río, Aurora

    2018-01-01

    Some errors in the learning of algebra suggest that students might have difficulties giving meaning to algebraic symbolism. In this paper, we use problem posing to analyze the students' capacity to assign meaning to algebraic symbolism and the difficulties that students encounter in this process, depending on the characteristics of the algebraic…

  7. Students’ Algebraic Reasoning In Solving Mathematical Problems With Adversity Quotient

    NASA Astrophysics Data System (ADS)

    Aryani, F.; Amin, S. M.; Sulaiman, R.

    2018-01-01

    Algebraic reasoning is a process in which students generalize mathematical ideas from a set of particular instances and express them in increasingly formal and age-appropriate ways. Using a problem solving approach to develop algebraic reasoning in mathematics may enhance the long-term learning trajectory of the majority of students. The purpose of this research was to describe the algebraic reasoning of quitter, camper, and climber junior high school students in solving mathematical problems. This research used a qualitative descriptive method. Subjects were determined by purposive sampling, and data were collected through task-based interviews. The results showed that the three students carried out the pattern-seeking process in a similar way, by identifying what is known and what is asked. However, the three students found the elements of pattern recognition in different ways. Consequently, they generalized the pattern-formation problem in different ways. The study of algebraic reasoning and problem solving can serve as a learning paradigm for improving students’ knowledge and skills in algebra. The goal is to help students improve academic competence and develop algebraic reasoning in problem solving.

  8. Mastering algebra retrains the visual system to perceive hierarchical structure in equations.

    PubMed

    Marghetis, Tyler; Landy, David; Goldstone, Robert L

    2016-01-01

    Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system (in particular, object-based attention) is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions, but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.

  9. Transient aging in fractional Brownian and Langevin-equation motion.

    PubMed

    Kursawe, Jochen; Schulz, Johannes; Metzler, Ralf

    2013-12-01

    Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic (time and ensemble averages behave differently) from the moment when the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging, that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t=0 and the start of the measurement at the aging time t(a). In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t(a) is different between the cases of free and confined motion. We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement and present a numerical analysis of this transient aging phenomenon.

  10. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
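
    A minimal single change-point sketch in the spirit of this analysis is given below: it selects the split of a piecewise-constant Gaussian signal that minimizes the residual sum of squares (equivalently, maximizes the likelihood). It does not implement the paper's frequentist information criterion, and the signal is synthetic.

    ```python
    import numpy as np

    # Illustrative maximum-likelihood change-point estimate for a mean shift in
    # Gaussian noise: scan candidate split points and minimize the total RSS.
    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 200)])

    def rss(segment):
        return ((segment - segment.mean()) ** 2).sum()

    candidates = range(10, x.size - 10)           # keep segments non-trivial
    costs = [rss(x[:k]) + rss(x[k:]) for k in candidates]
    k_hat = list(candidates)[int(np.argmin(costs))]
    print("estimated change point:", k_hat)       # near the true value 300
    ```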

  11. Topological order and thermal equilibrium in polariton condensates

    NASA Astrophysics Data System (ADS)

    Caputo, Davide; Ballarini, Dario; Dagvadorj, Galbadrakh; Sánchez Muñoz, Carlos; de Giorgi, Milena; Dominici, Lorenzo; West, Kenneth; Pfeiffer, Loren N.; Gigli, Giuseppe; Laussy, Fabrice P.; Szymańska, Marzena H.; Sanvitto, Daniele

    2018-02-01

    The Berezinskii-Kosterlitz-Thouless phase transition from a disordered to a quasi-ordered state, mediated by the proliferation of topological defects in two dimensions, governs seemingly remote physical systems ranging from liquid helium, ultracold atoms and superconducting thin films to ensembles of spins. Here we observe such a transition in a short-lived gas of exciton-polaritons, bosonic light-matter particles in semiconductor microcavities. The observed quasi-ordered phase, characteristic for an equilibrium two-dimensional bosonic gas, with a decay of coherence in both spatial and temporal domains with the same algebraic exponent, is reproduced with numerical solutions of stochastic dynamics, proving that the mechanism of pairing of the topological defects (vortices) is responsible for the transition to the algebraic order. This is made possible thanks to long polariton lifetimes in high-quality samples and in a reservoir-free region. Our results show that the joint measurement of coherence both in space and time is required to characterize driven-dissipative phase transitions and enable the investigation of topological ordering in open systems.

  12. Contractions and deformations of quasiclassical Lie algebras preserving a nondegenerate quadratic Casimir operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campoamor-Stursberg, R., E-mail: rutwig@mat.ucm.e

    2008-05-15

    By means of contractions of Lie algebras, we obtain new classes of indecomposable quasiclassical Lie algebras that satisfy the Yang-Baxter equations in their reformulation in terms of triple products. These algebras are shown to arise naturally from noncompact real simple algebras with nonsimple complexification, where we impose that a nondegenerate quadratic Casimir operator is preserved by the limiting process. We further consider the converse problem and obtain sufficient conditions on integrable cocycles of quasiclassical Lie algebras in order to preserve nondegenerate quadratic Casimir operators by the associated linear deformations.

  13. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  14. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations in the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forward into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as in algorithm development and analysis.
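
    As a brief illustration of the image-template operations described above, the following sketch (plain NumPy/SciPy, not Image Algebra Matlab itself; the image and the averaging template are hypothetical) realizes an image-template product as a neighborhood convolution.

        # Minimal sketch of an image-template (neighborhood) product in the spirit of
        # image algebra; this is NOT IAM, just an illustration of the operation class.
        import numpy as np
        from scipy.ndimage import convolve

        rng = np.random.default_rng(0)
        image = rng.random((64, 64))              # an image: a function on a point set
        template = np.full((3, 3), 1.0 / 9.0)     # a small averaging template

        # Image-template product realized as a neighborhood convolution.
        smoothed = convolve(image, template, mode="nearest")
        print(image.shape, "->", smoothed.shape)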

  15. Development of a Computerized Adaptive Testing for Diagnosing the Cognitive Process of Grade 7 Students in Learning Algebra, Using Multidimensional Item Response Theory

    ERIC Educational Resources Information Center

    Senarat, Somprasong; Tayraukham, Sombat; Piyapimonsit, Chatsiri; Tongkhambanjong, Sakesan

    2013-01-01

    The purpose of this research is to develop a multidimensional computerized adaptive test for diagnosing the cognitive process of grade 7 students in learning algebra by applying multidimensional item response theory. The research is divided into 4 steps: 1) the development of an item bank for algebra, 2) the development of the multidimensional…

  16. A Process Algebra Approach to Quantum Electrodynamics

    NASA Astrophysics Data System (ADS)

    Sulis, William

    2017-12-01

    The process algebra program is directed towards developing a realist model of quantum mechanics free of paradoxes, divergences and conceptual confusions. From this perspective, fundamental phenomena are viewed as emerging from primitive informational elements generated by processes. The process algebra has been shown to successfully reproduce scalar non-relativistic quantum mechanics (NRQM) without the usual paradoxes and dualities. NRQM appears as an effective theory which emerges under specific asymptotic limits. Space-time, scalar particle wave functions and the Born rule are all emergent in this framework. In this paper, the process algebra model is reviewed, extended to the relativistic setting, and then applied to the problem of electrodynamics. A semiclassical version is presented in which a Minkowski-like space-time emerges as well as a vector potential that is discrete and photon-like at small scales and near-continuous and wave-like at large scales. QED is viewed as an effective theory at small scales while Maxwell theory becomes an effective theory at large scales. The process algebra version of quantum electrodynamics is intuitive and realist, free from divergences and eliminates the distinction between particle, field and wave. Computations are carried out using the configuration space process covering map, although the connection to second quantization has not been fully explored.

  17. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
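
    To make the ingredients concrete, the following sketch simulates a one-dimensional Poisson-Kac (telegraph) process with a state-dependent amplitude modulation. The modulation function, rate, and step size are illustrative assumptions, not the forms used in the paper, and the time-horizon modulation is omitted.

        # Sketch: 1D Poisson-Kac (telegraph) process dx/dt = b(x) * s(t), where the
        # dichotomous noise s(t) flips sign at Poisson rate lam.  The amplitude
        # modulation b(x) below is hypothetical (illustration only).
        import numpy as np

        rng = np.random.default_rng(0)
        lam, dt, n_steps = 1.0, 1e-3, 100_000

        def b(x):
            """Hypothetical nonlinear amplitude modulation (illustrative only)."""
            return 1.0 / (1.0 + x**2)

        x, s = 0.0, 1                       # position and dichotomous noise state
        xs = np.empty(n_steps)
        for k in range(n_steps):
            if rng.random() < lam * dt:     # Poisson switching of the noise, s -> -s
                s = -s
            x += b(x) * s * dt              # finite propagation velocity: |dx/dt| = b(x)
            xs[k] = x
        print(xs.mean(), xs.std())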

  18. On the coherent behavior of pancreatic beta cell clusters

    NASA Astrophysics Data System (ADS)

    Loppini, Alessandro; Capolupo, Antonio; Cherubini, Christian; Gizzi, Alessio; Bertolaso, Marta; Filippi, Simonetta; Vitiello, Giuseppe

    2014-09-01

    Beta cells in the pancreas are an example of coupled biological oscillators that, via communication pathways, are able to synchronize their electrical activity, giving rise to pulsatile insulin release. In this work we numerically analyze the scale-free, self-similar features of the power density spectrum of the membrane voltage signal, using a stochastic dynamical model for beta cells in the islets of Langerhans fine-tuned on mouse experimental data. Adopting the algebraic approach of the coherent-state formalism, we show how coherent molecular domains can arise under proper functional conditions, leading to a parallelism with “phase transition” phenomena of field theory.

  19. Approximate dynamic programming for optimal stationary control with control-dependent noise.

    PubMed

    Jiang, Yu; Jiang, Zhong-Ping

    2011-12-01

    This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
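
    The role of the algebraic Riccati equation can be seen, in a simplified noise-free setting, through Kleinman-type policy iteration for continuous-time LQR. The sketch below is not the authors' model-free ADP scheme with multiplicative noise; the system matrices, gains, and iteration count are illustrative assumptions.

        # Sketch: Kleinman-type policy iteration converging to the solution of the
        # continuous-time algebraic Riccati equation (noise-free LQR illustration only).
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

        A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # open-loop stable, so K = 0 is admissible
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.eye(1)

        K = np.zeros((1, 2))                        # initial stabilizing gain
        for _ in range(20):
            Ak = A - B @ K
            # Policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0
            P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
            # Policy improvement
            K = np.linalg.solve(R, B.T @ P)

        print(np.allclose(P, solve_continuous_are(A, B, Q, R)))   # True at convergence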

  20. Contributions of Domain-General Cognitive Resources and Different Forms of Arithmetic Development to Pre-Algebraic Knowledge

    PubMed Central

    Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.

    2012-01-01

    The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n=279; mean age=7.59 yrs) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems at start of 2nd grade and on calculations, word problems, and pre-algebraic knowledge at end of 3rd grade. Multilevel path analysis, controlling for instructional effects associated with the sequence of classrooms in which students were nested across grades 2–3, indicated arithmetic calculations and word problems are foundational to pre-algebraic knowledge. Also, results revealed direct contributions of nonverbal reasoning and oral language to pre-algebraic knowledge, beyond indirect effects that are mediated via arithmetic calculations and word problems. By contrast, attentive behavior, phonological processing, and processing speed contributed to pre-algebraic knowledge only indirectly via arithmetic calculations and word problems. PMID:22409764

  1. Fractional Stochastic Differential Equations Satisfying Fluctuation-Dissipation Theorem

    NASA Astrophysics Data System (ADS)

    Li, Lei; Liu, Jian-Guo; Lu, Jianfeng

    2017-10-01

    We propose in this work a fractional stochastic differential equation (FSDE) model consistent with the over-damped limit of the generalized Langevin equation model. As a result of the "fluctuation-dissipation theorem", differential equations driven by fractional Brownian noise to model memory effects should be paired with Caputo derivatives, and this FSDE model should be understood in an integral form. We establish the existence of strong solutions for such equations and discuss ergodicity and convergence to the Gibbs measure. In the linear forcing regime, we show rigorously the algebraic convergence to the Gibbs measure when the fluctuation-dissipation theorem is satisfied, verifying that satisfying the fluctuation-dissipation theorem indeed leads to the correct physical behavior. We further discuss possible approaches to analyzing ergodicity and convergence to the Gibbs measure in the nonlinear forcing regime, leaving the rigorous analysis for future work. The proposed FSDE model is suitable for systems in contact with a heat bath with a power-law memory kernel and subdiffusive behavior.
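
    For orientation only, the driving noise in such models, fractional Brownian motion with Hurst index H, can be sampled exactly on a grid from the Cholesky factor of its covariance; the sketch below is a generic sampler and does not reproduce the paper's Caputo-derivative FSDE integrator (grid, H, and seed are illustrative).

        # Sketch: exact sampling of fractional Brownian motion B_H on a uniform grid
        # via the Cholesky factor of its covariance (illustration only).
        import numpy as np

        def fbm_sample(n, H, T=1.0, seed=0):
            """Exact fBm sample on t = T/n, 2T/n, ..., T via Cholesky of the covariance."""
            t = np.linspace(T / n, T, n)
            s, u = np.meshgrid(t, t)
            cov = 0.5 * (u**(2 * H) + s**(2 * H) - np.abs(u - s)**(2 * H))
            L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))     # jitter for numerical safety
            return t, L @ np.random.default_rng(seed).standard_normal(n)

        t, path = fbm_sample(500, H=0.3)    # H < 1/2: anti-persistent, subdiffusive flavour
        print(path[-1])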

  2. Hermite-Hadamard type inequality for φ_h-convex stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr

    2016-04-18

    The main aim of the present paper is to introduce φ_h-convex stochastic processes and to investigate the main properties of these mappings. Moreover, we prove Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.
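
    For orientation (not quoted from the paper), the classical Hermite-Hadamard inequality being generalized states that for a convex function f on [a, b],

        f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(t)\,dt \;\le\; \frac{f(a)+f(b)}{2};

    the convex-stochastic-process analogue replaces f by a convex, mean-square continuous stochastic process X(t, ·), with the inequalities holding almost everywhere, and the φ_h-convex version modifies the weights in these bounds accordingly.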

  3. Literal algebra for satellite dynamics. [perturbation analysis

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1975-01-01

    A description of the rather general class of operations available is given and the operations are related to problems in satellite dynamics. The implementation of an algebra processor is discussed. The four main categories of symbol processors are related to list processing, string manipulation, symbol manipulation, and formula manipulation. Fundamental required operations for an algebra processor are considered. It is pointed out that algebra programs have been used for a number of problems in celestial mechanics with great success. The advantage of computer algebra is its accuracy and speed.

  4. Microstructural Quantification, Property Prediction, and Stochastic Reconstruction of Heterogeneous Materials Using Limited X-Ray Tomography Data

    NASA Astrophysics Data System (ADS)

    Li, Hechao

    An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation step before microstructural quantification can be conducted, which can be quite time consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information, in the form of spatial correlation functions, from limited X-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of X-ray tomographic projections (e.g., 20 - 40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to integrate the complementary data efficiently, making good use of the limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is assessed by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for given microstructures and projection numbers.
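
    As a small illustration of the correlation functions mentioned above, the two-point correlation function S2 of a segmented (binary) microstructure can be computed by FFT autocorrelation under periodic boundary conditions; the toy two-phase image below is an assumption for demonstration, not data from the thesis.

        # Sketch: two-point correlation function S2 of a binary 2D microstructure,
        # computed via FFT autocorrelation with periodic boundaries (illustration only).
        import numpy as np

        rng = np.random.default_rng(0)
        img = (rng.random((128, 128)) < 0.3).astype(float)      # toy two-phase medium
        F = np.fft.fftn(img)
        s2_map = np.fft.ifftn(F * np.conj(F)).real / img.size   # S2 at all lattice shifts

        print("volume fraction:", img.mean())
        print("S2 at r = 0    :", s2_map[0, 0])                 # equals the volume fraction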

  5. Software Development Of XML Parser Based On Algebraic Tools

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2011-12-01

    This paper presents the development and implementation of an algebraic method for XML data processing that accelerates XML parsing. The nontraditional approach proposed here for fast XML navigation with algebraic tools contributes to ongoing efforts toward an easier, user-friendly API for XML transformations. The proposed software for processing XML documents (a parser) is easy to use and can manage files with a strictly defined data structure. The purpose of the presented algorithm is to offer a new approach for searching and restructuring hierarchical XML data. This approach permits fast processing of XML documents, using an algebraic model developed in detail in previous works by the same authors. The proposed parsing mechanism is easily accessible to the web consumer, who can control XML file processing, search for different elements (tags) in it, and delete or add new XML content. Various tests show higher speed and lower resource consumption in comparison with some existing commercial parsers.

  6. Individual Differences in Algebraic Cognition: Relation to the Approximate Number and Semantic Memory Systems

    PubMed Central

    Geary, David C.; Hoard, Mary K.; Nugent, Lara; Rouder, Jeffrey N.

    2015-01-01

    The relation between performance on measures of algebraic cognition and acuity of the approximate number system (ANS) and memory for addition facts was assessed for 171 (92 girls) 9th graders, controlling for parental education, sex, reading achievement, speed of numeral processing, fluency of symbolic number processing, intelligence, and the central executive component of working memory. The algebraic tasks assessed accuracy in placing x,y pairs in the coordinate plane, speed and accuracy of expression evaluation, and schema memory for algebra equations. ANS acuity was related to accuracy of placements in the coordinate plane and expression evaluation, but not schema memory. Frequency of fact-retrieval errors was related to schema memory but not coordinate plane or expression evaluation accuracy. The results suggest the ANS may contribute to or be influenced by spatial-numerical and numerical-only quantity judgments in algebraic contexts, whereas difficulties in committing addition facts to long-term memory may presage slow formation of memories for the basic structure of algebra equations. More generally, the results suggest different brain and cognitive systems are engaged during the learning of different components of algebraic competence, controlling for demographic and domain-general abilities. PMID:26255604

  7. Teaching Linear Algebra: Must the Fog Always Roll In?

    ERIC Educational Resources Information Center

    Carlson, David

    1993-01-01

    Proposes methods to teach the more difficult concepts of linear algebra. Examines features of the Linear Algebra Curriculum Study Group Core Syllabus, and presents problems from the core syllabus that utilize the mathematical process skills of making conjectures, proving the results, and communicating the results to colleagues. Presents five…

  8. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map.

    PubMed

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S

    2008-04-11

    A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
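
    The exponential mapping from the Lie algebra used in such constructions has a closed form on SO(3), Rodrigues' formula; the short sketch below (generic, not the authors' code, with an arbitrary test vector) illustrates it.

        # Sketch: exponential map from the Lie algebra so(3) to the rotation group SO(3)
        # via Rodrigues' formula, as used when parametrizing densities on SO(3).
        import numpy as np

        def hat(w):
            """Map w in R^3 to the corresponding skew-symmetric matrix in so(3)."""
            return np.array([[0.0, -w[2], w[1]],
                             [w[2], 0.0, -w[0]],
                             [-w[1], w[0], 0.0]])

        def expm_so3(w):
            """Rodrigues' formula for the exponential map so(3) -> SO(3)."""
            theta = np.linalg.norm(w)
            if theta < 1e-12:
                return np.eye(3)
            W = hat(w)
            return (np.eye(3) + np.sin(theta) / theta * W
                    + (1.0 - np.cos(theta)) / theta**2 * (W @ W))

        R = expm_so3(np.array([0.1, -0.2, 0.3]))
        print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))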

  9. Development process of in-service training intended for teachers to perform teaching of mathematics with computer algebra systems

    NASA Astrophysics Data System (ADS)

    Ardıç, Mehmet Alper; Işleyen, Tevfik

    2018-01-01

    In this study, we deal with the development process of in-service training activities designed to enable secondary-school mathematics teachers to teach mathematics using computer algebra systems. In addition, we summarize the results obtained from the research carried out during and after the in-service training. The last section offers suggestions any teacher can use to carry out activities aimed at using computer algebra systems in teaching environments.

  10. Measurement theory in local quantum physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okamura, Kazuya, E-mail: okamura@math.cm.is.nagoya-u.ac.jp; Ozawa, Masanao, E-mail: ozawa@is.nagoya-u.ac.jp

    In this paper, we aim to establish foundations of measurement theory in local quantum physics. For this purpose, we discuss a representation theory of completely positive (CP) instruments on arbitrary von Neumann algebras. We introduce a condition called the normal extension property (NEP) and establish a one-to-one correspondence between CP instruments with the NEP and statistical equivalence classes of measuring processes. We show that every CP instrument on an atomic von Neumann algebra has the NEP, extending the well-known result for type I factors. Moreover, we show that every CP instrument on an injective von Neumann algebra is approximated by CP instruments with the NEP. The concept of posterior states is also discussed to show that the NEP is equivalent to the existence of a strongly measurable family of posterior states for every normal state. Two examples of CP instruments without the NEP are obtained from this result. It is thus concluded that in local quantum physics not every CP instrument represents a measuring process, but in most of physically relevant cases every CP instrument can be realized by a measuring process within arbitrary error limits, as every approximately finite dimensional von Neumann algebra on a separable Hilbert space is injective. To conclude the paper, the concept of local measurement in algebraic quantum field theory is examined in our framework. In the setting of the Doplicher-Haag-Roberts and Doplicher-Roberts theory describing local excitations, we show that an instrument on a local algebra can be extended to a local instrument on the global algebra if and only if it is a CP instrument with the NEP, provided that the split property holds for the net of local algebras.

  11. Capitalizing on Basic Brain Processes in Developmental Algebra--Part 3

    ERIC Educational Resources Information Center

    Laughbaum, Edward D.

    2011-01-01

    In Part Three, the author reviews the basic ideas presented in Parts One and Two while arguing why the traditional equation-solving developmental algebra curriculum is not a good choice for implementing the neural response strategies presented in the first two parts. He continues by showing that the developmental algebra student audience is simply…

  12. Calif. Laws Shift Gears on Algebra, Textbooks

    ERIC Educational Resources Information Center

    Robelen, Erik W.

    2012-01-01

    New laws in California have set the state on a course for some potentially significant changes to the curriculum, including a measure that revisits the matter of teaching Algebra 1 in 8th grade and another that revamps the state's textbook-adoption process and hands districts greater leeway in choosing instructional materials. The algebra-related…

  13. Spectral simplicity of apparent complexity. I. The nondiagonalizable metadynamics of prediction

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question—correlation, predictability, predictive cost, observer synchronization, and the like—induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the spectral projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II [P. M. Riechers and J. P. Crutchfield, Chaos 28, 033116 (2018)], to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic.
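
    One concrete handle on the Drazin inverse mentioned above is the identity A^D = A^k (A^(2k+1))^+ A^k, valid for any k at least the index of A, with ^+ the Moore-Penrose pseudoinverse. The sketch below checks it on a toy singular matrix; it is a numerical aside, not the meromorphic functional calculus developed in the paper.

        # Sketch: Drazin inverse of a singular matrix via A^D = A^k (A^(2k+1))^+ A^k.
        import numpy as np

        def drazin(A, k):
            """Drazin inverse via the pseudoinverse identity, valid for k >= index(A)."""
            Ak = np.linalg.matrix_power(A, k)
            return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

        A = np.array([[1.0, 1.0],
                      [0.0, 0.0]])          # singular, index 1 (idempotent here)
        AD = drazin(A, k=1)
        # Defining properties: A AD = AD A, AD A AD = AD, and (index-1 case) A AD A = A.
        print(np.allclose(A @ AD, AD @ A),
              np.allclose(AD @ A @ AD, AD),
              np.allclose(A @ AD @ A, A))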

  14. A mechanized process algebra for verification of device synchronization protocols

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas

    1992-01-01

    We describe the formalization of a process algebra based on CCS within the Higher Order Logic (HOL) theorem-proving system. The representation of four types of device interactions and a correctness proof of the communication between a microprocessor and MMU is presented.

  15. The Role of Cognitive Processes, Foundational Math Skill, and Calculation Accuracy and Fluency in Word-Problem Solving versus Pre-Algebraic Knowledge

    PubMed Central

    Fuchs, Lynn S.; Gilbert, Jennifer K.; Powell, Sarah R.; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Seethaler, Pamela M.; Tolar, Tammy D.

    2016-01-01

    The purpose of this study was to examine child-level pathways in development of pre-algebraic knowledge versus word-problem solving, while evaluating the contribution of calculation accuracy and fluency as mediators of foundational skills/processes. Children (n = 962; mean 7.60 years) were assessed on general cognitive processes and early calculation, word-problem, and number knowledge at start of grade 2; calculation accuracy and calculation fluency at end of grade 2; and pre-algebraic knowledge and word-problem solving at end of grade 4. Important similarities in pathways were identified, but path analysis also indicated that language comprehension is more critical for later word-problem solving than pre-algebraic knowledge. We conclude that pathways in development of these forms of 4th-grade mathematics performance are more alike than different, but demonstrate the need to fine-tune instruction for strands of the mathematics curriculum in ways that address individual students’ foundational mathematics skills or cognitive processes. PMID:27786534

  16. Algebraic signal processing theory: 2-D spatial hexagonal lattice.

    PubMed

    Püschel, Markus; Rötteler, Martin

    2007-06-01

    We develop the framework for signal processing on a spatial, or undirected, 2-D hexagonal lattice for both an infinite and a finite array of signal samples. This framework includes the proper notions of z-transform, boundary conditions, filtering or convolution, spectrum, frequency response, and Fourier transform. In the finite case, the Fourier transform is called discrete triangle transform. Like the hexagonal lattice, this transform is nonseparable. The derivation of the framework makes it a natural extension of the algebraic signal processing theory that we recently introduced. Namely, we construct the proper signal models, given by polynomial algebras, bottom-up from a suitable definition of hexagonal space shifts using a procedure provided by the algebraic theory. These signal models, in turn, then provide all the basic signal processing concepts. The framework developed in this paper is related to Mersereau's early work on hexagonal lattices in the same way as the discrete cosine and sine transforms are related to the discrete Fourier transform-a fact that will be made rigorous in this paper.

  17. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
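
    In PGF terms, the final step can be summarized as follows (standard notation, a sketch rather than a quotation from the paper). If the number N of lethal lesions is Poisson with mean m(D) = αD + βD², its PGF is

        G_N(z) = \exp\{ m(D)\,(z - 1) \}, \qquad m(D) = \alpha D + \beta D^2,

    and the surviving fraction is the probability of zero lethal lesions,

        S(D) = \Pr(N = 0) = G_N(0) = \exp\!\left[ -(\alpha D + \beta D^2) \right],

    which is the linear-quadratic survival relation.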

  18. Thought beyond language: neural dissociation of algebra and natural language.

    PubMed

    Monti, Martin M; Parsons, Lawrence M; Osherson, Daniel N

    2012-08-01

    A central question in cognitive science is whether natural language provides combinatorial operations that are essential to diverse domains of thought. In the study reported here, we addressed this issue by examining the role of linguistic mechanisms in forging the hierarchical structures of algebra. In a 3-T functional MRI experiment, we showed that processing of the syntax-like operations of algebra does not rely on the neural mechanisms of natural language. Our findings indicate that processing the syntax of language elicits the known substrate of linguistic competence, whereas algebraic operations recruit bilateral parietal brain regions previously implicated in the representation of magnitude. This double dissociation argues against the view that language provides the structure of thought across all cognitive domains.

  19. Selecting Students for Pre-Algebra: Examination of the Relative Utility of the Anchorage Pre-Algebra Screening Tests and the State of Alaska Standards Based Benchmark 2 Mathematics Study. An Examination of Consequential Validity and Recommendation.

    ERIC Educational Resources Information Center

    Fenton, Ray

    This study examined the relative efficacy of the Anchorage (Alaska) Pre-Algebra Test and the State of Alaska Standards Based Benchmark 2 Mathematics examination as tools used in the process of recommending grade 6 students for grade 7 Pre-Algebra placement. The consequential validity of the tests is explored in the context of class placements and grades earned. The…

  20. Individual differences in algebraic cognition: Relation to the approximate number and semantic memory systems.

    PubMed

    Geary, David C; Hoard, Mary K; Nugent, Lara; Rouder, Jeffrey N

    2015-12-01

    The relation between performance on measures of algebraic cognition and acuity of the approximate number system (ANS) and memory for addition facts was assessed for 171 ninth graders (92 girls) while controlling for parental education, sex, reading achievement, speed of numeral processing, fluency of symbolic number processing, intelligence, and the central executive component of working memory. The algebraic tasks assessed accuracy in placing x,y pairs in the coordinate plane, speed and accuracy of expression evaluation, and schema memory for algebra equations. ANS acuity was related to accuracy of placements in the coordinate plane and expression evaluation but not to schema memory. Frequency of fact retrieval errors was related to schema memory but not to coordinate plane or expression evaluation accuracy. The results suggest that the ANS may contribute to or be influenced by spatial-numerical and numerical-only quantity judgments in algebraic contexts, whereas difficulties in committing addition facts to long-term memory may presage slow formation of memories for the basic structure of algebra equations. More generally, the results suggest that different brain and cognitive systems are engaged during the learning of different components of algebraic competence while controlling for demographic and domain general abilities. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Noise limitations in optical linear algebra processors.

    PubMed

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  2. Stochastic models for inferring genetic regulation from microarray gene expression data.

    PubMed

    Tian, Tianhai

    2010-03-01

    Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information. 2009 Elsevier Ireland Ltd. All rights reserved.

  3. Entanglement classification with matrix product states

    NASA Astrophysics Data System (ADS)

    Sanz, M.; Egusquiza, I. L.; di Candia, R.; Saberi, H.; Lamata, L.; Solano, E.

    2016-07-01

    We propose an entanglement classification for symmetric quantum states based on their diagonal matrix-product-state (MPS) representation. The proposed classification, which preserves the stochastic local operation assisted with classical communication (SLOCC) criterion, relates entanglement families to the interaction length of Hamiltonians. In this manner, we establish a connection between entanglement classification and condensed matter models from a quantum information perspective. Moreover, we introduce a scalable nesting property for the proposed entanglement classification, in which the families for N parties carry over to the N + 1 case. Finally, using techniques from algebraic geometry, we prove that the minimal nontrivial interaction length n for any symmetric state is bounded by .

  4. Implementing Computer Algebra Enabled Questions for the Assessment and Learning of Mathematics

    ERIC Educational Resources Information Center

    Sangwin, Christopher J.; Naismith, Laura

    2008-01-01

    We present principles for the design of an online system to support computer algebra enabled questions for use within the teaching and learning of mathematics in higher education. The introduction of a computer algebra system (CAS) into a computer aided assessment (CAA) system affords sophisticated response processing of student provided answers.…

  5. Syntax and Meaning as Sensuous, Visual, Historical Forms of Algebraic Thinking

    ERIC Educational Resources Information Center

    Radford, Luis; Puig, Luis

    2007-01-01

    Before the advent of symbolism, i.e. before the end of the 16th Century, algebraic calculations were made using natural language. Through a kind of metaphorical process, a few terms from everyday life (e.g. thing, root) acquired a technical mathematical status and constituted the specialized language of algebra. The introduction of letters and…

  6. Strategies for Solving Fraction Tasks and Their Link to Algebraic Thinking

    ERIC Educational Resources Information Center

    Pearn, Catherine; Stephens, Max

    2015-01-01

    Many researchers argue that a deep understanding of fractions is important for a successful transition to algebra. Teaching, especially in the middle years, needs to focus specifically on those areas of fraction knowledge and operations that support subsequent solution processes for algebraic equations. This paper focuses on the results of Year 6…

  7. Designing Cognitively Diagnostic Assessment for Algebraic Content Knowledge and Thinking Skills

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2018-01-01

    This study explored a diagnostic assessment method that emphasized the cognitive process of algebra learning. The study utilized a design and a theory-driven model to examine the content knowledge. Using the theory driven model, the thinking skills of algebra learning was also examined. A Bayesian network model was applied to represent the theory…

  8. Cox process representation and inference for stochastic reaction-diffusion processes

    NASA Astrophysics Data System (ADS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2016-05-01

    Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
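
    The Cox (doubly stochastic Poisson) processes referred to above can be sampled by first drawing a realization of the random intensity and then thinning a dominating homogeneous Poisson process. The one-dimensional temporal sketch below uses a toy piecewise-constant intensity; it illustrates the model class only, not the paper's inference algorithm.

        # Sketch: sampling a temporal Cox process by Lewis-Shedler thinning against a
        # dominating rate, given one realization of a random intensity (illustration).
        import numpy as np

        rng = np.random.default_rng(1)
        T, lam_max = 10.0, 5.0

        # A random piecewise-constant intensity lambda(t) on [0, T] (toy prior).
        knots = np.linspace(0.0, T, 11)
        levels = rng.uniform(0.0, lam_max, size=10)

        def intensity(t):
            idx = np.minimum(np.searchsorted(knots, t, side="right") - 1, 9)
            return levels[idx]

        # Thinning: keep each candidate with probability lambda(t) / lam_max.
        n_cand = rng.poisson(lam_max * T)
        candidates = np.sort(rng.uniform(0.0, T, n_cand))
        keep = rng.uniform(0.0, lam_max, n_cand) < intensity(candidates)
        events = candidates[keep]
        print(len(events), "events retained out of", n_cand, "candidates")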

  9. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
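
    As a numerical companion to the occupation-time example mentioned above, the sketch below estimates by Monte Carlo the time a one-dimensional velocity jump process spends on the positive half-line; the rates, speed, and horizon are illustrative assumptions, and the paper treats such quantities analytically via its Feynman-Kac formula.

        # Sketch: Monte Carlo occupation time of {x > 0} for a 1D velocity jump process
        # (velocity +v or -v, switching at rate lam).  Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(2)
        v, lam, T, dt, n_paths = 1.0, 2.0, 5.0, 1e-3, 500
        n_steps = int(T / dt)

        occupation = np.zeros(n_paths)
        for i in range(n_paths):
            x, s = 0.0, rng.choice([-1, 1])      # position and velocity direction
            for _ in range(n_steps):
                if rng.random() < lam * dt:      # Markov switching of the velocity
                    s = -s
                x += v * s * dt
                occupation[i] += dt * (x > 0.0)

        print("mean occupation time of {x > 0}:", occupation.mean())   # ~ T/2 by symmetry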

  10. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is tractable only for very simple types of stochastic processes, such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random, intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup compared with the current state of the art.

  11. Window of Opportunity? Adolescence, Music, and Algebra

    ERIC Educational Resources Information Center

    Helmrich, Barbara H.

    2010-01-01

    Research has suggested that musicians process music in the same cortical regions that adolescents process algebra. An early adolescence synaptogenesis might present a window of opportunity during middle school for music to create and strengthen enduring neural connections in those regions. Six school districts across Maryland provided scores from…

  12. An algebra of discrete event processes

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Meyer, George

    1991-01-01

    This report deals with an algebraic framework for modeling and control of discrete event processes. The report consists of two parts. The first part is introductory, and consists of a tutorial survey of the theory of concurrency in the spirit of Hoare's CSP, and an examination of the suitability of such an algebraic framework for dealing with various aspects of discrete event control. To this end a new concurrency operator is introduced and it is shown how the resulting framework can be applied. It is further shown that a suitable theory that deals with the new concurrency operator must be developed. In the second part of the report the formal algebra of discrete event control is developed. At the present time the second part of the report is still an incomplete and occasionally tentative working paper.

  13. Stochastic Community Assembly: Does It Matter in Microbial Ecology?

    PubMed

    Zhou, Jizhong; Ning, Daliang

    2017-12-01

    Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.

  14. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
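
    The algebraic side of the comparison, Tykhonov-Phillips regularization, reduces in the simplest case (substitute matrix S taken as the identity) to the familiar ridge-type solution of the normal equations; the sketch below shows that case on a hypothetical ill-conditioned design and is not the α-weighted S-homBLE itself.

        # Sketch: Tykhonov-Phillips (ridge-type) regularization of an ill-conditioned
        # Gauss-Markov model, x_alpha = (A'PA + alpha*I)^{-1} A'Py, with S = I.
        import numpy as np

        rng = np.random.default_rng(3)
        A = np.vander(np.linspace(0.0, 1.0, 50), 8, increasing=True)   # ill-conditioned design
        x_true = rng.standard_normal(8)
        y = A @ x_true + 0.01 * rng.standard_normal(50)

        P = np.eye(50)            # observation weight matrix (homoscedastic case)
        for alpha in (0.0, 1e-6, 1e-2):
            N = A.T @ P @ A + alpha * np.eye(8)          # regularized normal equations
            x_alpha = np.linalg.solve(N, A.T @ P @ y)
            err = np.linalg.norm(x_alpha - x_true)
            print(f"alpha = {alpha:g}   ||x_alpha - x_true|| = {err:.3e}")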

  15. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n full interconnected neurons with a pipeline, bit processing structure. For large applications, a flexible way to interconnect many such chips is provided.

  16. High-performance image processing architecture

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick C.

    1992-04-01

    The proposed architecture is a logical design specifically for image processing and other related computations. The design is a hybrid electro-optical concept consisting of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined by an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how elegantly it handles the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The logical architecture may take any number of physical forms. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control all the arithmetic and logic operations of the image algebra's generalized matrix product. This is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.

  17. Doubly stochastic Poisson processes in artificial neural learning.

    PubMed

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

  18. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
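
    In the univariate case mentioned above, a Wright-Fisher/Jacobi-type diffusion with multiplicative noise has the beta distribution as its invariant; the Euler-Maruyama sketch below uses a standard parametrization chosen for illustration, not the coefficients of the paper, and is only an analogue of the multivariate Dirichlet scheme.

        # Sketch: univariate Jacobi (Wright-Fisher-type) diffusion
        #   dX = kappa*(mu - X) dt + sigma*sqrt(X(1-X)) dW,
        # whose invariant density is Beta(2*kappa*mu/sigma^2, 2*kappa*(1-mu)/sigma^2).
        import numpy as np

        rng = np.random.default_rng(4)
        kappa, mu, sigma = 2.0, 0.3, 1.0
        dt, n_steps, burn_in = 1e-3, 200_000, 50_000

        x = mu
        samples = np.empty(n_steps)
        for k in range(n_steps):
            dw = np.sqrt(dt) * rng.standard_normal()
            x += kappa * (mu - x) * dt + sigma * np.sqrt(max(x * (1.0 - x), 0.0)) * dw
            x = min(max(x, 0.0), 1.0)        # keep the sample in the bounded state space
            samples[k] = x

        a, b = 2 * kappa * mu / sigma**2, 2 * kappa * (1 - mu) / sigma**2
        print("empirical mean:", samples[burn_in:].mean(), " Beta(a,b) mean:", a / (a + b))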

  19. Temporal mapping and analysis

    NASA Technical Reports Server (NTRS)

    O'Hara, Charles G. (Inventor); Shrestha, Bijay (Inventor); Vijayaraj, Veeraraghavan (Inventor); Mali, Preeti (Inventor)

    2011-01-01

    A compositing process for selecting spatial data collected over a period of time, creating temporal data cubes from the spatial data, and processing and/or analyzing the data using temporal mapping algebra functions. In some embodiments, the process includes creating a masked cube from the temporal data cubes and computing a composite from the masked cube by using temporal mapping algebra.

  20. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions (PDFs) is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  1. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  2. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizio; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  3. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints.

    PubMed

    Fiore, Andrew M; Swan, James W

    2018-01-28

    Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material.

  4. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.
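
    The collapse described here can be illustrated, in a heavily simplified way, for an ordinary unidirectional Markov chain rather than the paper's bidirectional two-stream formalism: a measurement that reveals which subset of states the system occupies conditions the probability vector on the outcome and renormalizes it. The transition matrix and measurement below are arbitrary illustrative choices.

    ```python
    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],       # row-stochastic single-step transition matrix
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])
    p = np.array([1.0, 0.0, 0.0])        # initial probability distribution

    p = p @ P @ P                        # evolve two steps
    outcome = np.array([False, True, True])   # measurement reports "state 1 or 2"
    post = np.where(outcome, p, 0.0)
    post /= post.sum()                   # collapse: condition on the observed outcome
    print("pre-measurement :", p)
    print("post-measurement:", post)
    print("one more step   :", post @ P)
    ```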

  5. Tracking the Success of Pre-College Algebra Workshop Students in Subsequent College Mathematics Classes

    ERIC Educational Resources Information Center

    Fuller, Edgar; Deshler, Jessica M.; Kuhn, Betsy; Squire, Douglas

    2014-01-01

    In 2007 the Department of Mathematics at our institution began developing a placement process designed to identify at-risk students entering mathematics courses at the College Algebra and Calculus levels. Major changes in our placement testing process and the resulting interventions for at-risk students were put in place in Fall of 2008. At the…

  6. Using Technology to Optimize and Generalize: The Least-Squares Line

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Hodgson, Ted R.

    2007-01-01

    With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…

  7. Tensor Algebra Library for NVidia Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liakh, Dmitry

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
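
    The library itself is driven from compiled host code on NVidia GPUs; as a CPU stand-in, the kind of basic tensor algebra operation it accelerates (here a two-index contraction of the sort that appears in many-body electronic-structure theory) can be written with numpy's einsum. Tensor shapes and names below are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T2 = rng.standard_normal((8, 8, 8, 8))    # e.g. an integral-like rank-4 tensor
    A = rng.standard_normal((8, 8, 8, 8))     # e.g. an amplitude-like rank-4 tensor

    # C[a,b,i,j] = sum_{c,d} T2[a,b,c,d] * A[c,d,i,j]   (a basic tensor contraction)
    C = np.einsum("abcd,cdij->abij", T2, A)
    print(C.shape)                            # (8, 8, 8, 8)
    ```

    On NVidia hardware, libraries such as CuPy expose an einsum with the same interface, so a contraction written this way carries over essentially unchanged.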

  8. Stochastic differential equation model for linear growth birth and death processes with immigration and emigration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granita, E-mail: granitafc@gmail.com; Bahar, A.

    This paper relates the linear birth and death process with immigration and emigration (BIDE) to a stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution, mean and variance function of the BIDE process were found.
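
    A minimal Euler-Maruyama sketch of this kind of diffusion approximation is given below. The drift and diffusion coefficients follow the standard form for a linear birth-death process with immigration and emigration; the rate constants are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    b, d, nu, eta = 0.30, 0.25, 2.0, 1.0     # birth, death, immigration, emigration rates
    X0, T, dt = 50.0, 20.0, 1e-3
    rng = np.random.default_rng(2)

    n_steps = int(T / dt)
    X = np.empty(n_steps + 1)
    X[0] = X0
    for k in range(n_steps):
        drift = (b - d) * X[k] + (nu - eta)              # from the Fokker-Planck drift term
        diff2 = (b + d) * X[k] + (nu + eta)              # from the Fokker-Planck diffusion term
        X[k + 1] = max(X[k] + drift * dt
                       + np.sqrt(max(diff2, 0.0) * dt) * rng.standard_normal(), 0.0)

    ode_mean = (X0 + (nu - eta) / (b - d)) * np.exp((b - d) * T) - (nu - eta) / (b - d)
    print("one realization X(T):", X[-1], "  ODE mean for comparison:", ode_mean)
    ```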

  9. Interrupted monitoring of a stochastic process

    NASA Technical Reports Server (NTRS)

    Palmer, E.

    1977-01-01

    Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies. The optimum strategy is also compared to the strategies used by subjects in a pilot experiment.

  10. An estimator for the relative entropy rate of path measures for stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

    We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.

  11. Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1991-09-01

    A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single-neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. In this paper, a more stringent test is placed on SMNI: The algebraic and numerical algorithms previously developed in this and similar systems are brought to bear to fit large sets of EEG and evoked-potential data being collected to investigate genetic predispositions to alcoholism and to extract brain "signatures" of short-term memory. Using the numerical algorithm of very fast simulated reannealing, it is demonstrated that SMNI can indeed fit these data within experimentally observed ranges of its underlying neuronal-synaptic parameters, and the quantitative modeling results are used to examine physical neocortical mechanisms to discriminate high-risk and low-risk populations genetically predisposed to alcoholism. Since this study is a control to span relatively long time epochs, similar to earlier attempts to establish such correlations, this discrimination is inconclusive because of other neuronal activity which can mask such effects. However, the SMNI model is shown to be consistent with EEG data during selective attention tasks and with neocortical mechanisms describing short-term memory previously published using this approach. This paper explicitly identifies similar nonlinear stochastic mechanisms of interaction at the microscopic-neuronal, mesoscopic-columnar, and macroscopic-regional scales of neocortical interactions. These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons.

  12. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map

    PubMed Central

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S.

    2010-01-01

    A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes. PMID:20454468
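
    The closed-form exponential map for the planar motion group SE(2), one of the two groups treated above, is short enough to sketch directly; the parameterization xi = (v1, v2, omega) and the check against a generic matrix exponential are illustrative choices, not the paper's IUR construction.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def se2_hat(xi):
        """Map xi = (v1, v2, omega) to its 3x3 Lie-algebra matrix."""
        v1, v2, w = xi
        return np.array([[0.0,  -w, v1],
                         [  w, 0.0, v2],
                         [0.0, 0.0, 0.0]])

    def se2_exp(xi):
        """Closed-form exponential: a rotation by omega plus a V(omega)-twisted translation."""
        v, w = np.asarray(xi[:2], dtype=float), float(xi[2])
        R = np.array([[np.cos(w), -np.sin(w)],
                      [np.sin(w),  np.cos(w)]])
        if abs(w) < 1e-9:
            V = np.eye(2)
        else:
            V = np.array([[np.sin(w), -(1.0 - np.cos(w))],
                          [1.0 - np.cos(w), np.sin(w)]]) / w
        g = np.eye(3)
        g[:2, :2] = R
        g[:2, 2] = V @ v
        return g

    xi = np.array([1.0, 0.2, 0.7])
    print(np.allclose(se2_exp(xi), expm(se2_hat(xi))))   # closed form agrees with expm
    ```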

  13. Stochastic Nature in Cellular Processes

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Liu, Sheng-Jun; Wang, Qi; Yan, Shi-Wei; Geng, Yi-Zhao; Sakata, Fumihiko; Gao, Xing-Fa

    2011-11-01

    The importance of stochasticity in cellular processes is increasingly recognized in both theoretical and experimental studies. General features of stochasticity in gene regulation and expression are briefly reviewed in this article, which include the main experimental phenomena, classification, quantization and regulation of noises. The correlation and transmission of noise in cascade networks are analyzed further and the stochastic simulation methods that can capture effects of intrinsic and extrinsic noise are described.

  14. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
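
    The batch idea is independent of the particular groundwater code; the sketch below uses Python's multiprocessing in place of the Java Parallel Processing Framework and a toy random-field computation in place of a MODFLOW run, purely to illustrate how realizations can be farmed out and collected.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def one_realization(seed):
        """Stand-in for 'generate one random parameter field and run the flow model'."""
        rng = np.random.default_rng(seed)
        log_k = rng.normal(-5.0, 1.0, size=1000)     # toy heterogeneous log-conductivity field
        return float(np.exp(log_k).mean())           # toy scalar output per realization

    if __name__ == "__main__":
        with Pool(processes=8) as pool:
            results = pool.map(one_realization, range(500))   # 500 Monte Carlo realizations
        print("ensemble mean:", np.mean(results), " ensemble std:", np.std(results))
    ```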

  15. Signal Processing for Radar Target Tracking and Identification

    DTIC Science & Technology

    1996-12-01

    Computes the likelihood for various potential jump moves. 12. matrix_mult.m: Parallel implementation of linear algebra ... Elementary Linear Algebra with Applications, John Wiley & Sons, Inc., New York, 1987. [9] A. K. Bhattacharyya, and D. L. Sengupta, Radar Cross...Miller, "Target Tracking and Recognition Using Jump-Diffusion Processes," ARO's 11th Army Conf. on Applied Mathematics and Computing, June 8-11

  16. A Nonlinear, Multiinput, Multioutput Process Control Laboratory Experiment

    ERIC Educational Resources Information Center

    Young, Brent R.; van der Lee, James H.; Svrcek, William Y.

    2006-01-01

    Experience in using the user-friendly software Mathcad in an undergraduate chemical reaction engineering course is discussed. Example problems considered for illustration deal with the simultaneous solution of linear algebraic equations (kinetic parameter estimation), nonlinear algebraic equations (equilibrium calculations for multiple reactions and…

  17. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
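
    A minimal sketch of a hyperbolic cross truncation for a bivariate cosine expansion is shown below on a deterministic test function. The index-set rule (j+1)(k+1) <= N is one common choice for the hyperbolic cross and is assumed here rather than taken from the paper.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    n, N = 64, 64
    x = (np.arange(n) + 0.5) / n                                  # sampling grid on [0, 1]
    f = np.exp(-10.0 * ((x[:, None] - 0.3)**2 + (x[None, :] - 0.6)**2))

    C = dctn(f, type=2, norm="ortho")                             # Fourier cosine coefficients
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mask = (j + 1) * (k + 1) <= N                                 # hyperbolic cross index set
    f_hc = idctn(np.where(mask, C, 0.0), type=2, norm="ortho")    # truncated reconstruction

    print("coefficients kept:", int(mask.sum()), "of", n * n)
    print("max reconstruction error:", np.abs(f - f_hc).max())
    ```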

  18. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere more prevalent is this notion than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  19. Stochasticity in materials structure, properties, and processing—A review

    NASA Astrophysics Data System (ADS)

    Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai

    2018-03-01

    We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.

  20. The roles of prefrontal and posterior parietal cortex in algebra problem solving: a case of using cognitive modeling to inform neuroimaging data.

    PubMed

    Danker, Jared F; Anderson, John R

    2007-04-15

    In naturalistic algebra problem solving, the cognitive processes of representation and retrieval are typically confounded, in that transformations of the equations typically require retrieval of mathematical facts. Previous work using cognitive modeling has associated activity in the prefrontal cortex with the retrieval demands of algebra problems and activity in the posterior parietal cortex with the transformational demands of algebra problems, but these regions tend to behave similarly in response to task manipulations (Anderson, J.R., Qin, Y., Sohn, M.-H., Stenger, V.A., Carter, C.S., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261; Qin, Y., Carter, C.S., Silk, E.M., Stenger, A., Fissell, K., Goode, A., Anderson, J.R., 2004. The change of brain activation patterns as children learn algebra equation solving. Proc. Natl. Acad. Sci. 101, 5686-5691). With this study we attempt to isolate activity in these two regions by using a multi-step algebra task in which transformation (parietal) is manipulated in the first step and retrieval (prefrontal) is manipulated in the second step. Counter to our initial predictions, both brain regions were differentially active during both steps. We designed two cognitive models, one encompassing our initial assumptions and one in which both processes were engaged during both steps. The first model provided a poor fit to the behavioral and neural data, while the second model fit both well. This simultaneously emphasizes the strong relationship between retrieval and representation in mathematical reasoning and demonstrates that cognitive modeling can serve as a useful tool for understanding task manipulations in neuroimaging experiments.

  1. On the ``Matrix Approach'' to Interacting Particle Systems

    NASA Astrophysics Data System (ADS)

    de Sanctis, L.; Isopi, M.

    2004-04-01

    Derrida et al. and Schütz and Stinchcombe gave algebraic formulas for the correlation functions of the partially asymmetric simple exclusion process. Here we give a fairly general recipe of how to get these formulas and extend them to the whole time evolution (starting from the generator of the process), for a certain class of interacting systems. We then analyze the algebraic relations obtained to show that the matrix approach does not work with some models such as the voter and the contact processes.

  2. Statecharts Via Process Algebra

    NASA Technical Reports Server (NTRS)

    Luttgen, Gerald; vonderBeeck, Michael; Cleaveland, Rance

    1999-01-01

    Statecharts is a visual language for specifying the behavior of reactive systems. The language extends finite-state machines with concepts of hierarchy, concurrency, and priority. Despite its popularity as a design notation for embedded systems, precisely defining its semantics has proved extremely challenging. In this paper, a simple process algebra, called Statecharts Process Language (SPL), is presented, which is expressive enough for encoding Statecharts in a structure-preserving and semantics-preserving manner. It is established that the behavioral relation bisimulation, when applied to SPL, preserves Statecharts semantics.

  3. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a "dynamic metallographic microscope". A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) "Would fully deterministic approaches continue to be useful in solidification modelling?" and (2) "Would stochastic algorithms be capable of entirely replacing purely deterministic models?"

  4. Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong

    The stochastic counterpart to the deterministic description of continuous fermentation with ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion and Lipschitz coefficients, which is suitable for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the stochastic Euler-Maruyama method.

  5. Non-equilibrium relaxation in a stochastic lattice Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Chen, Sheng; Täuber, Uwe C.

    2016-04-01

    We employ Monte Carlo simulations to study a stochastic Lotka-Volterra model on a two-dimensional square lattice with periodic boundary conditions. If the (local) prey carrying capacity is finite, there exists an extinction threshold for the predator population that separates a stable active two-species coexistence phase from an inactive state wherein only prey survive. Holding all other rates fixed, we investigate the non-equilibrium relaxation of the predator density in the vicinity of the critical predation rate. As expected, we observe critical slowing-down, i.e., a power law dependence of the relaxation time on the predation rate, and algebraic decay of the predator density at the extinction critical point. The numerically determined critical exponents are in accord with the established values of the directed percolation universality class. Following a sudden predation rate change to its critical value, one finds critical aging for the predator density autocorrelation function that is also governed by universal scaling exponents. This aging scaling signature of the active-to-absorbing state phase transition emerges at significantly earlier times than the stationary critical power laws, and could thus serve as an advanced indicator of the (predator) population’s proximity to its extinction threshold.
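
    A heavily stripped-down sketch of this kind of lattice simulation is given below: random sequential updates on a periodic square lattice with at most one individual per site, prey reproduction into empty neighbors, predation with predator offspring, and spontaneous predator death. The lattice size and rates are illustrative, not the values used in the study.

    ```python
    import numpy as np

    L = 48
    EMPTY, PREY, PRED = 0, 1, 2
    sigma, mu, lam = 0.5, 0.2, 0.5            # prey birth, predator death, predation rates
    NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    rng = np.random.default_rng(3)
    lattice = rng.choice([EMPTY, PREY, PRED], size=(L, L), p=[0.4, 0.3, 0.3])

    for mcs in range(200):                     # Monte Carlo steps, L*L update attempts each
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            s = lattice[i, j]
            if s == EMPTY:
                continue
            di, dj = NBRS[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L          # periodic boundaries
            if s == PREY and lattice[ni, nj] == EMPTY and rng.random() < sigma:
                lattice[ni, nj] = PREY                   # prey reproduction
            elif s == PRED:
                if lattice[ni, nj] == PREY and rng.random() < lam:
                    lattice[ni, nj] = PRED               # predation plus predator offspring
                elif rng.random() < mu:
                    lattice[i, j] = EMPTY                # spontaneous predator death
        if mcs % 50 == 0:
            print(mcs, "prey:", int(np.sum(lattice == PREY)),
                  "predators:", int(np.sum(lattice == PRED)))
    ```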

  6. Journal Writing: Enlivening Elementary Linear Algebra.

    ERIC Educational Resources Information Center

    Meel, David E.

    1999-01-01

    Examines the various issues surrounding the implementation of journal writing in an undergraduate linear algebra course. Identifies the benefits of incorporating journal writing into an undergraduate mathematics course, which are supported with students' comments from their journals and their reflections on the process. Contains 14 references.…

  7. Applications of rigged Hilbert spaces in quantum mechanics and signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, Paseo Belén 7, 47011 Valladolid; Gadella, M., E-mail: manuelgadella1@gmail.com

    Simultaneous use of discrete and continuous bases in quantum systems is not possible in the context of Hilbert spaces, but only in the more general structure of rigged Hilbert spaces (RHS). In addition, the relevant operators in RHS (but not in Hilbert space) are a realization of elements of a Lie enveloping algebra and support representations of semigroups. We explicitly construct here basis dependent RHS of the line and half-line and relate them to the universal enveloping algebras of the Weyl-Heisenberg algebra and su(1, 1), respectively. The complete sub-structure of both RHS and of the operators acting on them is obtained from their algebraic structures or from the related fractional Fourier transforms. This allows us to describe both quantum and signal processing states and their dynamics. Two relevant improvements are introduced: (i) new kinds of filters related to restrictions to subspaces and/or the elimination of high frequency fluctuations and (ii) an operatorial structure that, starting from fixed objects, describes their time evolution.

  8. Statistics of the stochastically forced Lorenz attractor by the Fokker-Planck equation and cumulant expansions.

    PubMed

    Allawala, Altan; Marston, J B

    2016-11-01

    We investigate the Fokker-Planck description of the equal-time statistics of the three-dimensional Lorenz attractor with additive white noise. The invariant measure is found by computing the zero (or null) mode of the linear Fokker-Planck operator as a problem of sparse linear algebra. Two variants are studied: a self-adjoint construction of the linear operator and the replacement of diffusion with hyperdiffusion. We also access the low-order statistics of the system by a perturbative expansion in equal-time cumulants. A comparison is made to statistics obtained by the standard approach of accumulation via direct numerical simulation. Theoretical and computational aspects of the Fokker-Planck and cumulant expansion methods are discussed.
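
    The zero-mode idea carries over to any discretized Fokker-Planck operator; the sketch below applies it to a one-dimensional Ornstein-Uhlenbeck process instead of the Lorenz system, so that the numerically obtained invariant measure can be checked against the known Gaussian. The grid and noise strength are arbitrary illustrative choices.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigs

    D, n, xmax = 0.5, 400, 6.0
    x = np.linspace(-xmax, xmax, n)
    h = x[1] - x[0]

    # L p = d/dx (x p) + D d^2 p/dx^2, centered finite differences
    main = np.full(n, -2.0 * D / h**2)
    upper = D / h**2 + x[1:] / (2 * h)      # multiplies p_{i+1}
    lower = D / h**2 - x[:-1] / (2 * h)     # multiplies p_{i-1}
    Lfp = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")

    vals, vecs = eigs(Lfp, k=1, sigma=0.0)  # eigenvalue closest to zero: the invariant measure
    p = np.abs(np.real(vecs[:, 0]))
    p /= p.sum() * h                        # normalize as a density
    exact = np.exp(-x**2 / (2 * D)) / np.sqrt(2 * np.pi * D)
    print("eigenvalue ~", vals[0].real, "  max density error:", np.abs(p - exact).max())
    ```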

  9. Developing learning environments which support early algebraic reasoning: a case from a New Zealand primary classroom

    NASA Astrophysics Data System (ADS)

    Hunter, Jodie

    2014-12-01

    Current reforms in mathematics education advocate the development of mathematical learning communities in which students have opportunities to engage in mathematical discourse and classroom practices which underlie algebraic reasoning. This article specifically addresses the pedagogical actions teachers take which structure student engagement in dialogical discourse and activity which facilitates early algebraic reasoning. Using videotaped recordings of classroom observations, the teacher and researcher collaboratively examined the classroom practices and modified the participatory practices to develop a learning environment which supported early algebraic reasoning. Facilitating change in the classroom environment was a lengthy process which required consistent and ongoing attention initially to the social norms and then to the socio-mathematical norms. Specific pedagogical actions such as the use of specifically designed tasks, materials and representations and a constant press for justification and generalisation were required to support students to link their numerical understandings to algebraic reasoning.

  10. Computing algebraic transfer entropy and coupling directions via transcripts

    NASA Astrophysics Data System (ADS)

    Amigó, José M.; Monetti, Roberto; Graff, Beata; Graff, Grzegorz

    2016-11-01

    Most random processes studied in nonlinear time series analysis take values on sets endowed with a group structure, e.g., the real and rational numbers, and the integers. This fact allows one to associate with each pair of group elements a third element, called their transcript, which is defined as the product of the second element in the pair times the first one. The transfer entropy of two such processes is called algebraic transfer entropy. It measures the information transferred between two coupled processes whose values belong to a group. In this paper, we show that, subject to one constraint, the algebraic transfer entropy matches the (in general, conditional) mutual information of certain transcripts with one variable less. This property has interesting practical applications, especially to the analysis of short time series. We also derive weak conditions for the 3-dimensional algebraic transfer entropy to yield the same coupling direction as the corresponding mutual information of transcripts. A related issue concerns the use of mutual information of transcripts to determine coupling directions in cases where the conditions just mentioned are not fulfilled. We checked the latter possibility in the lowest dimensional case with numerical simulations and cardiovascular data, and obtained positive results.

  11. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    ERIC Educational Resources Information Center

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…

  12. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  13. Work Measurements: Interdisciplinary Overlap in Manufacturing and Algebra I

    ERIC Educational Resources Information Center

    Rose, Mary Annette

    2007-01-01

    Manufacturing engineering provides a relevant context from which to envision interdisciplinary learning experiences because engineers integrate their knowledge and skills of manufacturing and algebra processes in order to plan the efficient manufacture of products. In this article, the author describes an interdisciplinary learning activity that…

  14. Applications of Maple To Algebraic Cryptography.

    ERIC Educational Resources Information Center

    Sigmon, Neil P.

    1997-01-01

    Demonstrates the use of technology to enhance the appreciation of applications involving abstract algebra. The symbolic manipulator Maple can perform computations required for a linear cryptosystem. One major benefit of this process is that students can encipher and decipher messages using a linear cryptosystem without becoming confused and…

  15. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…

  16. Backward Estimation of Stochastic Processes with Failure Events as Time Origins

    PubMed Central

    Gary Chan, Kwun Chuen; Wang, Mei-Cheng

    2011-01-01

    Stochastic processes often exhibit sudden systematic changes in pattern a short time before certain failure events. Examples include increase in medical costs before death and decrease in CD4 counts before AIDS diagnosis. To study such terminal behavior of stochastic processes, a natural and direct way is to align the processes using failure events as time origins. This paper studies backward stochastic processes counting time backward from failure events, and proposes one-sample nonparametric estimation of the mean of backward processes when follow-up is subject to left truncation and right censoring. We will discuss benefits of including prevalent cohort data to enlarge the identifiable region and large sample properties of the proposed estimator with related extensions. A SEER–Medicare linked data set is used to illustrate the proposed methodologies. PMID:21359167

  17. Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case

    NASA Astrophysics Data System (ADS)

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.

    2010-06-01

    Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
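
    For a pure-jump path the two stochastic integrals of X dX reduce to sums over the jumps, evaluated at the left endpoint (Itô) or at the midpoint (Stratonovich). The sketch below simulates one normal compound Poisson path and checks the resulting pure-jump analogues of the usual Itô-Stratonovich relation; the rate and horizon are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T, rate = 10.0, 1.0                                   # time horizon and jump rate

    n_jumps = rng.poisson(rate * T)                       # number of jumps in [0, T]
    jumps = rng.standard_normal(n_jumps)                  # normally distributed jump sizes
    X = np.concatenate(([0.0], np.cumsum(jumps)))         # values of the piecewise-constant path
    X_before, X_after = X[:-1], X[1:]                     # left limits and post-jump values

    ito = np.sum(X_before * jumps)                        # left-endpoint (Ito) sum
    strat = np.sum(0.5 * (X_before + X_after) * jumps)    # midpoint (Stratonovich) sum
    qv = np.sum(jumps**2)                                 # quadratic variation [X, X]_T

    print("Stratonovich integral equals X_T^2 / 2:   ", np.isclose(strat, 0.5 * X[-1]**2))
    print("Ito integral equals (X_T^2 - [X,X]_T) / 2:", np.isclose(ito, 0.5 * (X[-1]**2 - qv)))
    ```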

  18. Dynamical Correspondence in a Generalized Quantum Theory

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2015-05-01

    In order to figure out why quantum physics needs the complex Hilbert space, many attempts have been made to distinguish the C*-algebras and von Neumann algebras in more general classes of abstractly defined Jordan algebras (JB- and JBW-algebras). One particularly important distinguishing property was identified by Alfsen and Shultz and is the existence of a dynamical correspondence. It reproduces the dual role of the selfadjoint operators as observables and generators of dynamical groups in quantum mechanics. In the paper, this concept is extended to another class of nonassociative algebras, arising from recent studies of the quantum logics with a conditional probability calculus and particularly of those that rule out third-order interference. The conditional probability calculus is a mathematical model of the Lüders-von Neumann quantum measurement process, and third-order interference is a property of the conditional probabilities which was discovered by Sorkin (Mod Phys Lett A 9:3119-3127, 1994) and which is ruled out by quantum mechanics. It is shown then that the postulates that a dynamical correspondence exists and that the square of any algebra element is positive still characterize, in the class considered, those algebras that emerge from the selfadjoint parts of C*-algebras equipped with the Jordan product. Within this class, the two postulates thus result in ordinary quantum mechanics using the complex Hilbert space or, vice versa, a genuine generalization of quantum theory must omit at least one of them.

  19. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    NASA Astrophysics Data System (ADS)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models influence the change in behavior of the very recently developed stochastic model of Hazra et al.

  20. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes—part III extensions and applications to kinetic theory and transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2017-08-01

    This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Combining nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.

  1. Real-Time Algebraic Derivative Estimations Using a Novel Low-Cost Architecture Based on Reconfigurable Logic

    PubMed Central

    Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos

    2014-01-01

    Time derivative estimation of signals plays a very important role in several fields, such as signal processing and control engineering, just to name a few of them. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes some general formulae for the time derivatives of a measurable signal in which two algebraic derivative estimators run simultaneously, but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real-time, offering high robustness properties with regard to corrupting noises, versatility and ease of implementation. Besides, in this work, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for the integration of hardware in the loop in MATLAB. PMID:24859033

  2. Extinction and survival in two-species annihilation

    NASA Astrophysics Data System (ADS)

    Amar, J. G.; Ben-Naim, E.; Davis, S. M.; Krapivsky, P. L.

    2018-02-01

    We study diffusion-controlled two-species annihilation with a finite number of particles. In this stochastic process, particles move diffusively, and when two particles of opposite type come into contact, the two annihilate. We focus on the behavior in three spatial dimensions and for initial conditions where particles are confined to a compact domain. Generally, one species outnumbers the other, and we find that the difference between the number of majority and minority species, which is a conserved quantity, controls the behavior. When the number difference exceeds a critical value, the minority becomes extinct and a finite number of majority particles survive, while below this critical difference, a finite number of particles of both species survive. The critical difference Δc grows algebraically with the total initial number of particles N, and when N ≫ 1, the critical difference scales as Δc ~ N^(1/3). Furthermore, when the initial concentrations of the two species are equal, the average numbers of surviving majority and minority particles, M+ and M-, exhibit two distinct scaling behaviors, M+ ~ N^(1/2) and M- ~ N^(1/6). In contrast, when the initial populations are equal, these two quantities are comparable, M+ ~ M- ~ N^(1/3).
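
    A toy lattice version of this process is easy to simulate, although the system sizes reachable this way are far smaller than those behind the quoted exponents. The particle numbers, the initial cube, and the step count below are illustrative assumptions.

    ```python
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(5)
    N_minor, delta = 200, 10                 # minority size and majority excess (toy values)
    L0 = 10                                  # both species start inside a compact cube
    A = rng.integers(0, L0, size=(N_minor + delta, 3))   # majority positions on Z^3
    B = rng.integers(0, L0, size=(N_minor, 3))           # minority positions on Z^3
    moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

    for step in range(5000):
        if len(A) == 0 or len(B) == 0:
            break
        A = A + moves[rng.integers(6, size=len(A))]      # every particle hops
        B = B + moves[rng.integers(6, size=len(B))]
        sites = defaultdict(list)
        for idx, pos in enumerate(map(tuple, B)):
            sites[pos].append(idx)
        kill_A, kill_B = [], []
        for idx, pos in enumerate(map(tuple, A)):
            if sites[pos]:                               # an A meets a B: annihilate the pair
                kill_A.append(idx)
                kill_B.append(sites[pos].pop())
        A = np.delete(A, kill_A, axis=0)
        B = np.delete(B, kill_B, axis=0)

    print("surviving majority:", len(A), "  surviving minority:", len(B))
    ```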

  3. Workshop on data acquisition and trigger system simulations for high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSEM II & SIMOBJECT -- an Overview; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  4. Processes and Reasoning in Representations of Linear Functions

    ERIC Educational Resources Information Center

    Adu-Gyamfi, Kwaku; Bossé, Michael J.

    2014-01-01

    This study examined student actions, interpretations, and language with respect to questions raised regarding tabular, graphical, and algebraic representations in the context of functions. The purpose was to investigate students' interpretations and specific ways of working within table, graph, and the algebraic on notions fundamental to a…

  5. Algebra for All: California's Eighth-Grade Algebra Initiative as Constrained Curricula

    ERIC Educational Resources Information Center

    Domina, Thurston; Penner, Andrew M.; Penner, Emily K.; Conley, AnneMarie

    2014-01-01

    Background/Context: Across the United States, secondary school curricula are intensifying as a growing proportion of students enroll in high-level academic math courses. In many districts, this intensification process occurs as early as eighth grade, where schools are effectively constraining their mathematics curricula by restricting course…

  6. Reading Bombelli's x-purgated Algebra.

    ERIC Educational Resources Information Center

    Arcavi, Abraham; Bruckheimer, Maxim

    1991-01-01

    Presents the algorithm to approximate square roots as reproduced from the 1579 edition of an algebra book by Rafael Bombelli. The sequence of activities illustrates that the process of understanding an original source of mathematics, first at the algorithmic level and then with respect to its mathematical validity in modern terms, can be an…

  7. Teaching Linear Algebra: Proceeding More Efficiently by Staying Comfortably within Z

    ERIC Educational Resources Information Center

    Beaver, Scott

    2015-01-01

    For efficiency in a linear algebra course the instructor may wish to avoid the undue arithmetical distractions of rational arithmetic. In this paper we explore how to write fraction-free problems of various types including elimination, matrix inverses, orthogonality, and the (non-normalizing) Gram-Schmidt process.

  8. The Jukes-Cantor Model of Molecular Evolution

    ERIC Educational Resources Information Center

    Erickson, Keith

    2010-01-01

    The material in this module introduces students to some of the mathematical tools used to examine molecular evolution. This topic is standard fare in many mathematical biology or bioinformatics classes, but could also be suitable for classes in linear algebra or probability. While coursework in matrix algebra, Markov processes, Monte Carlo…
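
    As a concrete illustration of the linear-algebra content, the Jukes-Cantor (JC69) model assigns equal substitution rates between all nucleotide pairs, and its transition probabilities follow from a matrix exponential that matches the familiar closed form; the rate and time below are arbitrary choices.

    ```python
    import numpy as np
    from scipy.linalg import expm

    alpha, t = 0.1, 2.0
    Q = alpha * (np.ones((4, 4)) - 4.0 * np.eye(4))   # JC69 rate matrix; rows sum to zero
    P = expm(Q * t)                                   # transition probabilities P(t)

    p_same = 0.25 + 0.75 * np.exp(-4.0 * alpha * t)   # textbook closed form, diagonal
    p_diff = 0.25 - 0.25 * np.exp(-4.0 * alpha * t)   # textbook closed form, off-diagonal
    print(np.allclose(np.diag(P), p_same), np.allclose(P[0, 1], p_diff))
    ```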

  9. Using Computer Symbolic Algebra to Solve Differential Equations.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    This article illustrates that mathematical theory can be incorporated into the process to solve differential equations by a computer algebra system, muMATH. After an introduction to functions of muMATH, several short programs for enhancing the capabilities of the system are discussed. Listed are six references. (YP)

  10. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
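
    The reduced, protein-only description can be simulated directly: bursts arrive as a compound Poisson process whose rate may depend on the current protein level, burst sizes are exponentially distributed, and the protein decays linearly in between. The rate function and constants below are illustrative assumptions, not those analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    gamma, b = 1.0, 20.0                    # protein decay rate and mean burst size

    def burst_rate(x):
        """Illustrative negative auto-regulation of the burst frequency."""
        return 4.0 / (1.0 + (x / 100.0)**2)

    T, dt = 500.0, 1e-3
    decay = np.exp(-gamma * dt)
    x, traj = 0.0, []
    for _ in range(int(T / dt)):
        x *= decay                                   # deterministic linear degradation
        if rng.random() < burst_rate(x) * dt:        # thinning-style burst arrival
            x += rng.exponential(b)                  # exponentially distributed burst
        traj.append(x)

    traj = np.array(traj)
    print("time-averaged protein level:", traj[len(traj) // 5:].mean())
    ```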

  11. Stochastic switching in biology: from genotype to phenotype

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-03-01

    There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression are often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.

  12. High-Order Automatic Differentiation of Unmodified Linear Algebra Routines via Nilpotent Matrices

    NASA Astrophysics Data System (ADS)

    Dunham, Benjamin Z.

    This work presents a new automatic differentiation method, Nilpotent Matrix Differentiation (NMD), capable of propagating any order of mixed or univariate derivative through common linear algebra functions--most notably third-party sparse solvers and decomposition routines, in addition to basic matrix arithmetic operations and power series--without changing data-type or modifying code line by line; this allows differentiation across sequences of arbitrarily many such functions with minimal implementation effort. NMD works by enlarging the matrices and vectors passed to the routines, replacing each original scalar with a matrix block augmented by derivative data; these blocks are constructed with special sparsity structures, termed "stencils," each designed to be isomorphic to a particular multidimensional hypercomplex algebra. The algebras are in turn designed such that Taylor expansions of hypercomplex function evaluations are finite in length and thus exactly track derivatives without approximation error. Although this use of the method in the "forward mode" is unique in its own right, it is also possible to apply it to existing implementations of the (first-order) discrete adjoint method to find high-order derivatives with lowered cost complexity; for example, for a problem with N inputs and an adjoint solver whose cost is independent of N--i.e., O(1)--the N x N Hessian can be found in O(N) time, which is comparable to existing second-order adjoint methods that require far more problem-specific implementation effort. Higher derivatives are likewise less expensive--e.g., a N x N x N rank-three tensor can be found in O(N2). Alternatively, a Hessian-vector product can be found in O(1) time, which may open up many matrix-based simulations to a range of existing optimization or surrogate modeling approaches. As a final corollary in parallel to the NMD-adjoint hybrid method, the existing complex-step differentiation (CD) technique is also shown to be capable of finding the Hessian-vector product. All variants are implemented on a stochastic diffusion problem and compared in-depth with various cost and accuracy metrics.
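
    The core trick can be shown in its simplest univariate, first-order form: replace the scalar x by the 2x2 block x*I + N with N nilpotent, pass that block through an unmodified matrix routine, and read the derivative off the off-diagonal entry. The routines used below (a matrix exponential and a matrix inverse) are generic stand-ins, not the sparse solvers discussed in the thesis.

    ```python
    import numpy as np
    from scipy.linalg import expm, inv

    x = 0.7
    A = np.array([[x, 1.0],
                  [0.0, x]])                  # x*I + N with N nilpotent (N @ N = 0)

    E = expm(A)                               # unmodified routine: matrix exponential
    print(E[0, 0], "vs exp(x)       =", np.exp(x))
    print(E[0, 1], "vs d/dx exp(x)  =", np.exp(x))     # derivative sits on the off-diagonal

    B = inv(A)                                # unmodified routine: matrix inverse
    print(B[0, 1], "vs d/dx (1/x)   =", -1.0 / x**2)
    ```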

  13. Contextualizing symbol, symbolizing context

    NASA Astrophysics Data System (ADS)

    Maudy, Septiani Yugni; Suryadi, Didi; Mulyana, Endang

    2017-08-01

    When students learn algebra for the first time, they inevitably experience a transition from arithmetic to algebraic thinking. Once students grasp this essential mathematical knowledge, they can cultivate their ability to solve daily-life problems by applying algebra. However, as we dug into this transitional stage, we identified possible student learning obstacles that must be dealt with seriously in order to forestall subsequent hindrances in studying more advanced algebra. We came to recognize this recurring problem as we undertook the processes of re-personalization and re-contextualization, in which we scrutinized the very basic questions: 1) What are a variable and a linear equation with one variable, and how do they relate to arithmetic-algebraic thinking? 2) Why should students learn such concepts? 3) How should those concepts be taught to students? By positioning ourselves as a seventh-grade student, we address the possibility that children think arithmetically when confronted with problems involving a linear equation with one variable. To help them think algebraically, Bruner's modes of representation, developed contextually from concrete to abstract, were used to enhance their interpretation of the idea of variables. Hence, from the outset we designed a context for students to think symbolically, initiated by exploring various symbols that could be contextualized in order to help students bridge the transition from arithmetic to algebraic thinking fruitfully.

  14. A systematic investigation of the link between rational number processing and algebra ability.

    PubMed

    Hurst, Michelle; Cordes, Sara

    2018-02-01

    Recent research suggests that fraction understanding is predictive of algebra ability; however, the relative contributions of various aspects of rational number knowledge are unclear. Furthermore, whether this relationship is notation-dependent or rather relies upon a general understanding of rational numbers (independent of notation) is an open question. In this study, college students completed a rational number magnitude task, procedural arithmetic tasks in fraction and decimal notation, and an algebra assessment. Using these tasks, we measured three different aspects of rational number ability in both fraction and decimal notation: (1) acuity of underlying magnitude representations, (2) fluency with which symbols are mapped to the underlying magnitudes, and (3) fluency with arithmetic procedures. Analyses reveal that when looking at the measures of magnitude understanding, the relationship between adults' rational number magnitude performance and algebra ability is dependent upon notation. However, once performance on arithmetic measures is included in the relationship, individual measures of magnitude understanding are no longer unique predictors of algebra performance. Furthermore, when including all measures simultaneously, results revealed that arithmetic fluency in both fraction and decimal notation each uniquely predicted algebra ability. Findings are the first to demonstrate a relationship between rational number understanding and algebra ability in adults while providing a clearer picture of the nature of this relationship. © 2017 The British Psychological Society.

  15. Trapping in scale-free networks with hierarchical organization of modularity.

    PubMed

    Zhang, Zhongzhi; Lin, Yuan; Gao, Shuyang; Zhou, Shuigeng; Guan, Jihong; Li, Mo

    2009-11-01

    A wide variety of real-life networks share two remarkable generic topological properties: scale-free behavior and modular organization, and it is natural and important to study how these two features affect the dynamical processes taking place on such networks. In this paper, we investigate a simple stochastic process--trapping problem, a random walk with a perfect trap fixed at a given location, performed on a family of hierarchical networks that exhibit simultaneously striking scale-free and modular structure. We focus on a particular case with the immobile trap positioned at the hub node having the largest degree. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping problem, which is the mean of the node-to-trap first-passage time over the entire network. The exact expression for the MFPT is calculated through the recurrence relations derived from the special construction of the hierarchical networks. The obtained rigorous formula corroborated by extensive direct numerical calculations exhibits that the MFPT grows algebraically with the network order. Concretely, the MFPT increases as a power-law function of the number of nodes with the exponent much less than 1. We demonstrate that the hierarchical networks under consideration have more efficient structure for transport by diffusion in contrast with other analytically soluble media including some previously studied scale-free networks. We argue that the scale-free and modular topologies are responsible for the high efficiency of the trapping process on the hierarchical networks.
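
    For a generic network, the same quantity can also be computed directly by solving the standard absorbing-random-walk linear system, which is a useful numerical cross-check of closed-form results (a baseline sketch; the paper's generating-function recurrences exploit the specific hierarchical construction instead). The graph and trap choice below are illustrative only.

```python
import numpy as np
import networkx as nx

def mfpt_to_trap(G, trap):
    """Mean first-passage time to `trap` for an unbiased random walk,
    averaged uniformly over all starting nodes, via (I - Q) t = 1."""
    nodes = [v for v in G if v != trap]
    idx = {v: i for i, v in enumerate(nodes)}
    Q = np.zeros((len(nodes), len(nodes)))
    for v in nodes:
        for w in G.neighbors(v):
            if w != trap:
                Q[idx[v], idx[w]] = 1.0 / G.degree(v)
    t = np.linalg.solve(np.eye(len(nodes)) - Q, np.ones(len(nodes)))
    return t.mean()

# Illustrative scale-free graph with the trap placed at the largest hub.
G = nx.barabasi_albert_graph(200, 2, seed=1)
hub = max(G.degree, key=lambda kv: kv[1])[0]
print(mfpt_to_trap(G, hub))
```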

  16. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency studies. We then proposed a new concept of hydrological genes, originating from biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process, namely the jump, trend, periodic, dependence and pure random components, were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered and the inheritance, variability and evolution principles were fully described. Our study contributes to revealing the inheritance, variability and evolution principles in the probability distributions of hydrological elements.

  17. COMPLEXITY&APPROXIMABILITY OF QUANTIFIED&STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  18. Generation of Custom DSP Transform IP Cores: Case Study Walsh-Hadamard Transform

    DTIC Science & Technology

    2002-09-01

    [Presentation slides; only text fragments were extracted.] The recoverable content contrasts the working vocabulary of a mathematician (linear algebra, digital signal processing, adaptive filter theory) with that of a hardware engineer (finite state machines, pipelining, systolic arrays), and outlines a synthesis flow for Walsh-Hadamard transform IP cores parameterized by bit-width (8), HF factor (1, 2, 3, 6) and VF factor (1, 2, 4, ..., 32), with performance measured after Xilinx FPGA place-and-route.

  19. Priority in Process Algebras

    NASA Technical Reports Server (NTRS)

    Cleaveland, Rance; Luettgen, Gerald; Natarajan, V.

    1999-01-01

    This paper surveys the semantic ramifications of extending traditional process algebras with notions of priority that allow for some transitions to be given precedence over others. These enriched formalisms allow one to model system features such as interrupts, prioritized choice, or real-time behavior. Approaches to priority in process algebras can be classified according to whether the induced notion of preemption on transitions is global or local and whether priorities are static or dynamic. Early work in the area concentrated on global pre-emption and static priorities and led to formalisms for modeling interrupts and aspects of real-time, such as maximal progress, in centralized computing environments. More recent research has investigated localized notions of pre-emption in which the distribution of systems is taken into account, as well as dynamic priority approaches, i.e., those where priority values may change as systems evolve. The latter allows one to model behavioral phenomena such as scheduling algorithms and also enables the efficient encoding of real-time semantics. Technically, this paper studies the different models of priorities by presenting extensions of Milner's Calculus of Communicating Systems (CCS) with static and dynamic priority as well as with notions of global and local pre-emption. In each case the operational semantics of CCS is modified appropriately, behavioral theories based on strong and weak bisimulation are given, and related approaches for different process-algebraic settings are discussed.

  20. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy with considering the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of El Niño-Southern Oscillation (ENSO) through using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results through assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the missed physical processes of the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step by the developed stochastic model-error model during the 12-month forecasting process, and add the zero-mean perturbations into the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differentiated by whether they consider the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skills during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
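
    The perturbation step itself is simple to sketch. In the toy version below, every ensemble member is advanced by a placeholder deterministic model and then receives an independent zero-mean Gaussian perturbation whose covariance stands in for the pre-analyzed model-error statistics; `step_model`, the covariance, and all sizes are assumptions, not the intermediate coupled model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_model(state):
    """Placeholder for one step of the deterministic coupled model."""
    return 0.95 * state + 0.05 * np.tanh(state)

def stochastic_forecast(ensemble, error_cov, n_steps):
    """Advance each member, adding a zero-mean model-error perturbation
    drawn from N(0, error_cov) at every step."""
    L = np.linalg.cholesky(error_cov)
    ensemble = np.array(ensemble, dtype=float)
    for _ in range(n_steps):
        ensemble = np.array([step_model(x) for x in ensemble])
        ensemble += rng.standard_normal(ensemble.shape) @ L.T   # zero-mean noise
    return ensemble.mean(axis=0), ensemble

dim, members = 10, 20
lag = np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))
cov = 0.01 * np.exp(-lag / 3.0)                 # assumed model-error covariance
mean, _ = stochastic_forecast(rng.standard_normal((members, dim)), cov, 12)
```

    Because the model is nonlinear, the mean of the perturbed ensemble generally differs from the unperturbed forecast, which is the nonlinear rectification mechanism the abstract invokes.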

  1. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
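
    The final reliability step mentioned above can be illustrated in isolation. Assuming the conditional mean and variance of a scalar performance margin have already been obtained (e.g., from the conditional-moment equations), the FOSM estimate and the Markov-inequality bound take the standard forms below; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def fosm_reliability(mean_g, var_g):
    """FOSM estimate of P(g > 0) ~= Phi(mu_g / sigma_g) for a performance
    margin g with the given mean and variance."""
    return norm.cdf(mean_g / np.sqrt(var_g))

def markov_lower_bound(mean_damage, threshold):
    """Markov inequality for a nonnegative degradation variable D:
    P(D < d) >= 1 - E[D] / d, an (often loose) reliability lower bound."""
    return max(0.0, 1.0 - mean_damage / threshold)

print(fosm_reliability(mean_g=2.0, var_g=0.64))            # ~0.994
print(markov_lower_bound(mean_damage=0.3, threshold=1.0))  # >= 0.7
```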

  2. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.

  3. Uncertainty Reduction for Stochastic Processes on Complex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.

  4. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  5. Gene regulation and noise reduction by coupling of stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre F.; Hornos, José Eduardo M.; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
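
    The sub-Poissonian regime described here can be probed with a small Gillespie (stochastic simulation algorithm) run. The model below is a deliberately simple stand-in, not the authors' parameterization: the active gene produces protein quickly, the repressed gene slowly, and the repression rate grows with protein number; when switching is fast relative to synthesis and degradation, the Fano factor of the protein count falls below one.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k_on=50.0, k_off_per_n=5.0, prod=(20.0, 2.0), deg=1.0,
             t_end=2000.0, dt_sample=1.0):
    """Gillespie simulation of a negatively self-regulated binary gene.
    gene = 0 (active) or 1 (repressed); n = protein copy number."""
    gene, n, t, next_sample, samples = 0, 0, 0.0, 0.0, []
    while t < t_end:
        rates = np.array([prod[gene],                              # synthesis
                          deg * n,                                 # degradation
                          k_off_per_n * n if gene == 0 else k_on]) # switching
        total = rates.sum()
        tau = rng.exponential(1.0 / total)
        while next_sample <= t + tau and next_sample < t_end:
            samples.append(n)                    # record on a uniform time grid
            next_sample += dt_sample
        t += tau
        event = rng.choice(3, p=rates / total)
        if event == 0:
            n += 1
        elif event == 1:
            n -= 1          # never selected when n == 0 (its rate is zero)
        else:
            gene = 1 - gene
    return np.array(samples)

n = simulate()
print("Fano factor:", n.var() / n.mean())   # values below 1 are sub-Poissonian
```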

  6. Gene regulation and noise reduction by coupling of stochastic processes

    PubMed Central

    Hornos, José Eduardo M.; Reinitz, John

    2015-01-01

    Here we characterize the low noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: there exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eukaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction. PMID:25768447

  7. Gene regulation and noise reduction by coupling of stochastic processes.

    PubMed

    Ramos, Alexandre F; Hornos, José Eduardo M; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.

  8. Geometric and Algebraic Approaches in the Concept of Complex Numbers

    ERIC Educational Resources Information Center

    Panaoura, A.; Elia, I.; Gagatsis, A.; Giatilis, G.-P.

    2006-01-01

    This study explores pupils' performance and processes in tasks involving equations and inequalities of complex numbers requiring conversions from a geometric representation to an algebraic representation and conversions in the reverse direction, and also in complex numbers problem solving. Data were collected from 95 pupils of the final grade from…

  9. Validating Cognitive Models of Task Performance in Algebra on the SAT®. Research Report No. 2009-3

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Leighton, Jacqueline P.; Wang, Changjiang; Zhou, Jiawen; Gokiert, Rebecca; Tan, Adele

    2009-01-01

    The purpose of the study is to present research focused on validating the four algebra cognitive models in Gierl, Wang, et al., using student response data collected with protocol analysis methods to evaluate the knowledge structures and processing skills used by a sample of SAT test takers.

  10. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function together with the mean and variance of the number of overshoots, a frequency distribution for overshoots can be estimated.
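
    A stripped-down version of the simulate-and-count procedure is sketched below, assuming (purely for illustration) an exponential autocorrelation, i.e. a Gaussian AR(1) surrogate for the stationary process; the overshoot count is taken as the number of upcrossings of a given level.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_series(n, rho, sigma=1.0):
    """Stationary Gaussian process with autocorrelation rho**k (AR(1))."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    innov_sd = sigma * np.sqrt(1.0 - rho**2)
    for k in range(1, n):
        x[k] = rho * x[k - 1] + rng.normal(0.0, innov_sd)
    return x

def count_overshoots(x, level):
    """Number of excursions of x above `level`, counted by upcrossings."""
    above = x > level
    return int(np.sum(~above[:-1] & above[1:]))

x = ar1_series(n=100_000, rho=0.9)
for level in (0.0, 1.0, 2.0):
    print(level, count_overshoots(x, level))
```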

  11. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo

    PubMed Central

    Golightly, Andrew; Wilkinson, Darren J.

    2011-01-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583

  12. Stochastic Calculus and Differential Equations for Physics and Finance

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2013-02-01

    1. Random variables and probability distributions; 2. Martingales, Markov, and nonstationarity; 3. Stochastic calculus; 4. Ito processes and Fokker-Planck equations; 5. Selfsimilar Ito processes; 6. Fractional Brownian motion; 7. Kolmogorov's PDEs and Chapman-Kolmogorov; 8. Non Markov Ito processes; 9. Black-Scholes, martingales, and Feynman-Katz; 10. Stochastic calculus with martingales; 11. Statistical physics and finance, a brief history of both; 12. Introduction to new financial economics; 13. Statistical ensembles and time series analysis; 14. Econometrics; 15. Semimartingales; References; Index.

  13. Mathematics of gravitational lensing: multiple imaging and magnification

    NASA Astrophysics Data System (ADS)

    Petters, A. O.; Werner, M. C.

    2010-09-01

    The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and a spacetime setting are treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.

  14. LES, DNS and RANS for the analysis of high-speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Taulbee, Dale B.; Adumitroaie, Virgil; Sabini, George J.; Shieh, Geoffrey S.

    1994-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Sep. 1993 - 1 Sep. 1994, we have focused our efforts on two research problems: (1) developments of 'algebraic' moment closures for statistical descriptions of nonpremixed reacting systems, and (2) assessments of the Dirichlet frequency in presumed scalar probability density function (PDF) methods in stochastic description of turbulent reacting flows. This report provides a complete description of our efforts during this past year as supported by the NASA Langley Research Center under Grant NAG1-1122.

  15. Structure preserving noise and dissipation in the Toda lattice

    NASA Astrophysics Data System (ADS)

    Arnaudon, Alexis

    2018-05-01

    In this paper, we use Flaschka’s change of variables of the open Toda lattice and its interpretation in terms of the group structure of the LU factorisation as a coadjoint motion on a certain dual of the Lie algebra to implement a structure preserving noise and dissipation. Both preserve the structure of the coadjoint orbit, that is the space of symmetric tri-diagonal matrices and arise as a new type of multiplicative noise and nonlinear dissipation of the Toda lattice. We investigate some of the properties of these deformations and, in particular, the continuum limit as a stochastic Burger equation with a nonlinear viscosity. This work is meant to be exploratory, and open more questions that we can answer with simple mathematical tools and without numerical simulations.

  16. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Treesearch

    Mo Zhou; Joseph Buongiorno

    2011-01-01

    Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
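
    The extension described above amounts to putting the interest-rate regime into the MDP state so that the discount factor varies stochastically. The value-iteration sketch below is a toy illustration only; the stand states, transition matrices, rewards and rates are invented and are not taken from the paper.

```python
import numpy as np

rates = np.array([0.02, 0.06])                      # interest-rate regimes
P_rate = np.array([[0.9, 0.1], [0.2, 0.8]])          # regime transition matrix
P_grow = np.array([[0.2, 0.8, 0.0],                  # stand growth if left alone
                   [0.0, 0.3, 0.7],
                   [0.0, 0.0, 1.0]])
P_harv = np.tile([1.0, 0.0, 0.0], (3, 1))            # harvesting resets the stand
reward = {"grow": np.zeros(3), "harvest": np.array([1.0, 5.0, 8.0])}

V = np.zeros((3, 2))                                 # value over (stand, regime)
for _ in range(1000):                                # value iteration
    V_new = np.empty_like(V)
    for r in range(2):
        disc = 1.0 / (1.0 + rates[r])                # regime-dependent discount
        EV = P_rate[r] @ V.T                         # expectation over next regime
        V_new[:, r] = np.maximum(reward["grow"] + disc * (P_grow @ EV),
                                 reward["harvest"] + disc * (P_harv @ EV))
    V = V_new
print(np.round(V, 2))
```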

  17. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints

    NASA Astrophysics Data System (ADS)

    Fiore, Andrew M.; Swan, James W.

    2018-01-01

    Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805)
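
    For context, the dense baseline that such Ewald-split samplers accelerate looks roughly as follows: assemble the Rotne-Prager-Yamakawa mobility, take a Cholesky factor, and draw displacements with covariance 2*kT*M*dt. This O(N^3) sketch is purely illustrative; it omits the stresslet constraint and the SDAE mid-point integrator described in the abstract, and its units and particle configuration are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
kT, eta, a, dt = 1.0, 1.0, 1.0, 1e-3                 # illustrative units

def rpy_mobility(X):
    """Dense Rotne-Prager-Yamakawa mobility for non-overlapping spheres of
    radius a at positions X (N x 3)."""
    N = len(X)
    M = np.zeros((3 * N, 3 * N))
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / (6 * np.pi * eta * a)
        for j in range(i + 1, N):
            d = X[i] - X[j]
            r = np.linalg.norm(d)
            rr = np.outer(d, d) / r**2
            blk = ((1 + 2*a**2 / (3*r**2)) * np.eye(3)
                   + (1 - 2*a**2 / r**2) * rr) / (8 * np.pi * eta * r)
            M[3*i:3*i+3, 3*j:3*j+3] = M[3*j:3*j+3, 3*i:3*i+3] = blk
    return M

X = 4.0 * a * np.arange(8)[:, None] * np.array([1.0, 0.0, 0.0])  # a line of spheres
L = np.linalg.cholesky(rpy_mobility(X))              # the O(N^3) step avoided above
dx = np.sqrt(2 * kT * dt) * (L @ rng.standard_normal(3 * len(X)))
print(dx.reshape(-1, 3))
```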

  18. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  19. Exact solution of some linear matrix equations using algebraic methods

    NASA Technical Reports Server (NTRS)

    Djaferis, T. E.; Mitter, S. K.

    1979-01-01

    Algebraic methods are used to construct the exact solution P of the linear matrix equation PA + BP = -C, where A, B, and C are matrices with real entries. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution to the problem. The paper is divided into six sections which include the proof of the basic lemma, the Liapunov equation, and the computer implementation for the rational, integer and modular algorithms. Two numerical examples are given and the entire calculation process is depicted.
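
    The equation PA + BP = -C is a Sylvester equation, so any solution produced by the finite algebraic procedures above can be cross-checked numerically; the snippet below uses SciPy's general-purpose solver (not the paper's rational, integer or modular algorithms) on arbitrary example matrices.

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.eye(2)

# PA + BP = -C is the Sylvester equation B P + P A = -C.
P = solve_sylvester(B, A, -C)
print(np.allclose(P @ A + B @ P, -C))   # residual check -> True
```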

  20. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  1. Teachers' Understanding of Algebraic Generalization

    NASA Astrophysics Data System (ADS)

    Hawthorne, Casey Wayne

    Generalization has been identified as a cornerstone of algebraic thinking (e.g., Lee, 1996; Sfard, 1995) and is at the center of a rich conceptualization of K-8 algebra (Kaput, 2008; Smith, 2003). Moreover, mathematics teachers are being encouraged to use figural-pattern generalizing tasks as a basis of student-centered instruction, whereby teachers respond to and build upon the ideas that arise from students' explorations of these activities. Although more and more teachers are engaging their students in such generalizing tasks, little is known about teachers' understanding of generalization and their understanding of students' mathematical thinking in this domain. In this work, I addressed this gap, exploring the understanding of algebraic generalization of 4 exemplary 8th-grade teachers from multiple perspectives. A significant feature of this investigation is an examination of teachers' understanding of the generalization process, including the use of algebraic symbols. The research consisted of two phases. Phase I was an examination of the teachers' understandings of the underlying quantities and quantitative relationships represented by algebraic notation. In Phase II, I observed the instruction of 2 of these teachers. Using the lens of professional noticing of students' mathematical thinking, I explored the teachers' enacted knowledge of algebraic generalization, characterizing how it supported them to effectively respond to the needs and queries of their students. Results indicated that teachers predominantly see these figural patterns as enrichment activities, disconnected from course content. Furthermore, in my analysis, I identified conceptual difficulties teachers experienced when solving generalization tasks, in particular, connecting multiple symbolic representations with the quantities in the figures. Moreover, while the teachers strived to overcome the challenges of connecting different representations, they invoked both productive and unproductive conceptualizations of the symbols. Finally, by comparing two teachers' understandings of student thinking in the classroom, I developed an instructional trajectory to describe steps along students' generalization processes. This emergent framework serves as an instructional tool for teachers' use in identifying significant connections in supporting students to develop understanding of algebraic symbols as representations that communicate the quantities perceived in the figure.

  2. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  3. Stochastic associative memory

    NASA Astrophysics Data System (ADS)

    Baumann, Erwin W.; Williams, David L.

    1993-08-01

    Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.

  4. Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces

    NASA Astrophysics Data System (ADS)

    Vacaru, S. I.

    2012-03-01

    We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculi are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions, we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form, and we study their main geometric and stochastic properties. Finally, the conditions under which non-random classical gravitational processes transform into stochastic ones, and vice versa, are analyzed.

  5. Apprentissage dans un Environnement Informatique: Possibilite, Nature, Transfert des Acquis (Learning in a Computer-Based Environment: Possibility, Nature, and Transfer of Acquired Knowledge).

    ERIC Educational Resources Information Center

    Dagher, Antoine

    1996-01-01

    Examines possibilities for learning offered by a piece of software, Fonctuse, likely to encourage the linking of algebraic and graphical representations of functions. Studied the influence of prior algebraic knowledge on the cognitive processes and constructions of knowledge at play in this environment. (Author/MKR)

  6. Experiences in Evaluating Outcomes in Tool-Based, Competence Building Education in Dynamical Systems Using Symbolic Computer Algebra

    ERIC Educational Resources Information Center

    Perram, John W.; Andersen, Morten; Ellekilde, Lars-Peter; Hjorth, Poul G.

    2004-01-01

    This paper discusses experience with alternative assessment strategies for an introductory course in dynamical systems, where the use of computer algebra and calculus is fully integrated into the learning process, so that the standard written examination would not be appropriate. Instead, students' competence was assessed by grading three large…

  7. A novel technique to solve nonlinear higher-index Hessenberg differential-algebraic equations by Adomian decomposition method.

    PubMed

    Benhammouda, Brahim

    2016-01-01

    Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by some transformations like index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is that, firstly, it avoids complex transformations like index reductions and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.

  8. Fluctuation theorem: A critical review

    NASA Astrophysics Data System (ADS)

    Malek Mansour, M.; Baras, F.

    2017-10-01

    The fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems, and the resulting stochastic thermodynamics, are analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of the individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.

  9. Process Algebra Approach for Action Recognition in the Maritime Domain

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    2011-01-01

    The maritime environment poses a number of challenges for autonomous operation of surface boats. Among these challenges are the highly dynamic nature of the environment, the onboard sensing and reasoning requirements for obeying the navigational rules of the road, and the need for robust day/night hazard detection and avoidance. Development of full mission level autonomy entails addressing these challenges, coupled with inference of the tactical and strategic intent of possibly adversarial vehicles in the surrounding environment. This paper introduces PACIFIC (Process Algebra Capture of Intent From Information Content), an onboard system based on formal process algebras that is capable of extracting actions/activities from sensory inputs and reasoning within a mission context to ensure proper responses. PACIFIC is part of the Behavior Engine in CARACaS (Cognitive Architecture for Robotic Agent Command and Sensing), a system that is currently running on a number of U.S. Navy unmanned surface and underwater vehicles. Results from a series of experimental studies that demonstrate the effectiveness of the system are also presented.

  10. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The reactive azeotrope calculation is modeled as a nonlinear algebraic system comprising phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating - in a bidimensional subdomain - the identification of reactive azeotropes. A strategy for calculating multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
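
    The Luus-Jaakola idea, minimizing the squared residual of the equilibrium equations over a shrinking random-search region, can be sketched generically. The two-equation system below is a toy stand-in with two known roots (near (±0.786, 0.618)), not the MTBE azeotropy equations; a simple multistart is used to recover both solutions.

```python
import numpy as np

rng = np.random.default_rng(4)

def residual(z):
    """Toy stand-in for the azeotropy/equilibrium equations (two roots)."""
    x, y = z
    return np.array([x**2 + y**2 - 1.0, y - x**2])

def luus_jaakola(lo, hi, n_outer=80, n_inner=50, contraction=0.9):
    """Adaptive random search on ||residual||^2 with region contraction."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best = lo + (hi - lo) * rng.random(lo.size)
    best_f = np.sum(residual(best) ** 2)
    width = hi - lo
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = best + width * (rng.random(lo.size) - 0.5)
            f = np.sum(residual(cand) ** 2)
            if f < best_f:
                best, best_f = cand, f
        width = contraction * width              # shrink the search region
    return best

roots = np.array([luus_jaakola([-2, -2], [2, 2]) for _ in range(10)])
print(np.unique(np.round(roots, 2), axis=0))     # groups the distinct roots
```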

  11. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    PubMed

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
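
    The "stochastic race" mechanic itself is easy to sketch outside ML3: every enabled transition of an agent draws a stochastic waiting time (exponential here), and the smallest draw determines both which behaviour fires and how much simulated time elapses. The agent attributes, rates and transition names below are hypothetical and are not ML3 syntax.

```python
import random

random.seed(5)

class Agent:
    def __init__(self, age, intention):
        self.age, self.intention, self.migrated = age, intention, False

TRANSITIONS = [
    ("migrate", lambda a: 0.05 * a.intention),   # intention-dependent rate
    ("stay",    lambda a: 1.0),                  # e.g. yearly re-evaluation
]

def step(agent, now):
    """Stochastic race: each enabled transition samples an exponential
    waiting time; the earliest one wins and advances the clock."""
    waits = [(random.expovariate(rate(agent)), name)
             for name, rate in TRANSITIONS if rate(agent) > 0]
    dt, winner = min(waits)
    agent.age += dt                              # time passes either way
    if winner == "migrate":
        agent.migrated = True
    return now + dt, winner

agent, t = Agent(age=30.0, intention=0.8), 0.0
while not agent.migrated and t < 50.0:
    t, _ = step(agent, t)
print(agent.migrated, round(t, 2), round(agent.age, 2))
```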

  12. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-01

    Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution of the behaviour of boundedly rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that boundedly rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and we then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical results are exhibited as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the probability distribution becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  13. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process.

    PubMed

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-27

    Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution of the behaviour of boundedly rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that boundedly rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and we then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical results are exhibited as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the probability distribution becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  14. Analyzing long-term correlated stochastic processes by means of recurrence networks: Potentials and pitfalls

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Donner, Reik V.; Kurths, Jürgen

    2015-02-01

    Long-range correlated processes are ubiquitous, ranging from climate variables to financial time series. One paradigmatic example for such processes is fractional Brownian motion (fBm). In this work, we highlight the potentials and conceptual as well as practical limitations when applying the recently proposed recurrence network (RN) approach to fBm and related stochastic processes. In particular, we demonstrate that the results of a previous application of RN analysis to fBm [Liu et al. Phys. Rev. E 89, 032814 (2014), 10.1103/PhysRevE.89.032814] are mainly due to an inappropriate treatment disregarding the intrinsic nonstationarity of such processes. Complementarily, we analyze some RN properties of the closely related stationary fractional Gaussian noise (fGn) processes and find that the resulting network properties are well-defined and behave as one would expect from basic conceptual considerations. Our results demonstrate that RN analysis can indeed provide meaningful results for stationary stochastic processes, given a proper selection of its intrinsic methodological parameters, whereas it is prone to fail to uniquely retrieve RN properties for nonstationary stochastic processes like fBm.

  15. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

    Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biassed - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580
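
    For orientation, the deterministic Price equation that this stochastic, migration-aware result generalizes can be written as follows (standard notation, not the authors'):

```latex
\bar{w}\,\Delta\bar{z}
  \;=\; \underbrace{\operatorname{Cov}(w_i, z_i)}_{\text{selection}}
  \;+\; \underbrace{\operatorname{E}\!\left[w_i\,\Delta z_i\right]}_{\text{transmission}},
```

    where w_i and z_i are the fitness and trait value of type i and \bar{w}, \bar{z} are their population means. The paper's contribution, as described above, is to treat the w_i (and the migration rates) as random variables and to add explicit immigration and emigration terms to this decomposition.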

  16. The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)

    NASA Astrophysics Data System (ADS)

    2017-09-01

    The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) has aimed to provide a platform to discuss computer science and mathematics-related issues including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure Of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agents Systems, All topics related Image/Signal Processing, Any topics related Computer Networks, Any topics related ISO SC-27 and SC-17 standards, Any topics related PKI (Public Key Infrastructures), Artificial Intelligence (A.I.) & Pattern/Image Recognitions, Authentication/Authorization Issues, Biometric authentication and algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundation of Computer Security, Data Base (D.B.) Management & Information Retrievals, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID, Applications, Fingerprint/Hand/Biometrics Recognitions and Technologies, Foundations of High-performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam mail, Anti-virus issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Services and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrievals, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the papers received from authors in many countries, which allowed a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this Book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only become successful through a team effort, so herewith we want to thank the International Technical Committee and the Reviewers for their efforts in the review process as well as their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary

  17. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, in particular the predictable projection operator and the dual predictable projection operator, the paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Based on this, an H∞ filter is then designed such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
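
    As a rough, self-contained illustration of the setting (not the H∞ synthesis of the paper), the sketch below simulates a scalar linear system driven by a Poisson process and runs a naive observer that replaces the jump input by its compensator λ dt, the same predictable-projection idea the abstract appeals to; the system coefficients, intensity and observer gain are invented values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Scalar jump-driven system  dx = a*x dt + g dN,  y = c*x + noise,
    # where N is a Poisson process with intensity lam.  All numbers below are
    # illustrative; a true H-infinity gain would come from the synthesis
    # conditions derived in the paper.
    a, g, c, lam = -1.0, 0.5, 1.0, 2.0
    dt, T = 1e-3, 10.0
    L = 0.8                                   # hypothetical observer gain

    x, xhat, sq_err = 0.0, 0.0, 0.0
    n = int(T / dt)
    for _ in range(n):
        dN = rng.poisson(lam * dt)            # Poisson increment over dt
        y = c * x + 0.05 * rng.standard_normal()
        x += a * x * dt + g * dN
        # The observer only knows E[dN] = lam*dt (the compensator), i.e. the
        # dual predictable projection used to replace the stochastic integral
        # by a Lebesgue integral.
        xhat += a * xhat * dt + g * lam * dt + L * (y - c * xhat) * dt
        sq_err += (x - xhat) ** 2

    print("RMS estimation error:", (sq_err / n) ** 0.5)
    ```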

  18. Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi

    2015-04-01

    The aim of this paper is to provide a theoretical framework for estimating uncertainty in rainfall-runoff analysis based on the theory of stochastic processes. SDEs (stochastic differential equations) based on this theory have been widely used in mathematical finance to predict stock price movements, and some researchers in civil engineering have also applied them (e.g. Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies that evaluate uncertainty in runoff phenomena based on a comparison between an SDE and the corresponding Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal evolution of the PDF (probability density function), and it is mathematically equivalent to the corresponding SDE. In this paper, therefore, the dependence of the uncertainty of discharge on the uncertainty of rainfall is explained theoretically and mathematically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is represented as an SDE in difference form, because the temporal variation of rainfall is expressed as its average plus a deviation approximated by a Gaussian distribution; this representation is based on rainfall observed by rain-gauge stations and a radar rain-gauge system. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results show that the uncertainty of discharge increases as rainfall intensity rises and as the nonlinearity of the resistance term grows stronger. These results are clarified by PDFs of discharge that satisfy the Fokker-Planck equation. This means that a reasonable discharge can be estimated based on the theory of stochastic processes, and the approach can be applied to probabilistic flood-risk management.
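
    A minimal numerical illustration of the SDE/Fokker-Planck correspondence invoked above, with an Ornstein-Uhlenbeck process standing in for the runoff model: the empirical variance of a long Euler-Maruyama run is compared with the variance of the Gaussian stationary solution of the corresponding Fokker-Planck equation. All parameter values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Ornstein-Uhlenbeck SDE  dX = -theta*X dt + sigma dW.  Its Fokker-Planck
    # equation has a Gaussian stationary density with variance sigma^2/(2*theta),
    # so a long Euler-Maruyama run should reproduce that number.
    theta, sigma = 1.5, 0.8
    dt, n = 1e-3, 500_000

    x = 0.0
    samples = np.empty(n)
    for k in range(n):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        samples[k] = x

    print("empirical variance      :", samples[n // 10:].var())  # discard transient
    print("Fokker-Planck prediction:", sigma**2 / (2 * theta))
    ```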

  19. Stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  20. Relativistic analysis of stochastic kinematics

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor γ(w) = (1 - w²/c²)^{-1/2}. Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: the diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to γ(w). The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.

  1. Application of the Firefly and Luus-Jaakola algorithms in the calculation of a double reactive azeotrope

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2014-01-01

    The calculation of reactive azeotropes is an important task in the preliminary design and simulation of reactive distillation columns. Classically, homogeneous nonreactive azeotropes are vapor-liquid coexistence conditions where phase compositions are equal. For homogeneous reactive azeotropes, simultaneous phase and chemical equilibria occur concomitantly with equality of compositions (in the Ung-Doherty transformed space). The calculation of a reactive azeotrope is modeled as a nonlinear algebraic system comprising phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. In a previous paper (Platt et al 2013 J. Phys.: Conf. Ser. 410 012020), we investigated some numerical aspects of the calculation of reactive azeotropes in the isobutene + methanol + methyl-tert-butyl-ether system (with two reactive azeotropes) using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Here, we use a hybrid structure (stochastic + deterministic) in order to produce accurate results for both azeotropes. After identifying the neighborhood of a reactive azeotrope, the nonlinear algebraic system is solved using Newton's method. The results indicate that using metaheuristics, together with techniques devoted to the calculation of multiple minima, allows both azeotropic coordinates in this reactive system to be obtained. In this sense, we provide a comprehensive analysis of a useful framework for solving nonlinear systems, particularly in phase equilibrium problems.
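
    The hybrid "stochastic search plus Newton polish" strategy can be sketched on a toy two-equation system with several roots; the equations below are invented for illustration and are not the phase/chemical equilibrium model of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for the azeotrope equations: a 2x2 nonlinear system with
    # several roots.  A crude random search locates neighbourhoods of distinct
    # roots, and Newton's method then refines them to full accuracy.
    def F(z):
        x, y = z
        return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

    def J(z):
        x, y = z
        return np.array([[2 * x, 2 * y], [y, x]])

    def newton(z0, tol=1e-12, itmax=50):
        z = np.array(z0, dtype=float)
        for _ in range(itmax):
            step = np.linalg.solve(J(z), F(z))
            z -= step
            if np.linalg.norm(step) < tol:
                break
        return z

    roots = []
    for _ in range(300):
        z0 = rng.uniform(-3.0, 3.0, size=2)
        if np.linalg.norm(F(z0)) < 1.5:                 # "good enough" seed
            try:
                r = newton(z0)
            except np.linalg.LinAlgError:
                continue
            if np.linalg.norm(F(r)) < 1e-8 and not any(
                    np.allclose(r, s, atol=1e-6) for s in roots):
                roots.append(r)

    for r in roots:
        print("root:", np.round(r, 6), " residual:", np.linalg.norm(F(r)))
    ```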

  2. Determination of key parameters of vector multifractal vector fields

    NASA Astrophysics Data System (ADS)

    Schertzer, D. J. M.; Tchiguirinskaia, I.

    2017-12-01

    For too long, multifractal analyses and simulations have been restricted to scalar-valued fields (Schertzer and Tchiguirinskaia, 2017a,b). For instance, the wind velocity multifractality has mostly been analysed in terms of scalar structure functions and the scalar energy flux. This restriction has had the unfortunate consequence that multifractals have not been applicable to their full extent in geophysics, even though geophysics inspired them. Indeed, a key question in geophysics is the complexity of the interactions between various fields or their components. Nevertheless, sophisticated methods have been developed to determine the key parameters of scalar-valued fields. In this communication, we first present the vector extensions of the universal multifractal analysis techniques to multifractals whose generator belongs to a Levy-Clifford algebra (Schertzer and Tchiguirinskaia, 2015). We point out further extensions, noting the increased complexity. For instance, the (scalar) index of multifractality becomes a matrix. Schertzer, D. and Tchiguirinskaia, I. (2015) 'Multifractal vector fields and stochastic Clifford algebra', Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p. 123127. doi: 10.1063/1.4937364. Schertzer, D. and Tchiguirinskaia, I. (2017a) 'An Introduction to Multifractals and Scale Symmetry Groups', in Ghanbarian, B. and Hunt, A. (eds) Fractals: Concepts and Applications in Geosciences. CRC Press, p. (in press). Schertzer, D. and Tchiguirinskaia, I. (2017b) 'Pandora Box of Multifractals: Barely Open?', in Tsonis, A. A. (ed.) 30 Years of Nonlinear Dynamics in Geophysics. Berlin: Springer, p. (in press).

  3. Comparing Cognitive Models of Domain Mastery and Task Performance in Algebra: Validity Evidence for a State Assessment

    ERIC Educational Resources Information Center

    Warner, Zachary B.

    2013-01-01

    This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…

  4. Effects of Argumentation on Group Micro-Creativity: Statistical Discourse Analyses of Algebra Students' Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Chiu, Ming Ming

    2008-01-01

    The micro-time context of group processes (such as argumentation) can affect a group's micro-creativity (new ideas). Eighty high school students worked in groups of four on an algebra problem. Groups with higher mathematics grades showed greater micro-creativity, and both were linked to better problem solving outcomes. Dynamic multilevel analyses…

  5. Image Processing Language. Phase 1

    DTIC Science & Technology

    1988-05-01

    their entirety. Nonetheless, they can serve as guidelines to which the construction of a useful and comprehensive imaging algebra might aspire. It was recognized that any structure which encompasses... Bernstein Polynomial Approximation, Best Plane Fit (BPF; Sobel, Roberts, Prewitt, Gradient), Boundary Finder, Boundary Segmenter, Chain Code Angle

  6. Early Algebra with Graphics Software as a Type II Application of Technology

    ERIC Educational Resources Information Center

    Abramovich, Sergei

    2006-01-01

    This paper describes the use of Kid Pix-graphics software for creative activities of young children--in the context of early algebra as determined by the mathematics core curriculum of New York state. It shows how grade-two appropriate pedagogy makes it possible to bring about a qualitative change in the learning process of those commonly…

  7. Application of the algebraic difference approach for developing self-referencing specific gravity and biomass equations

    Treesearch

    Lewis Jordan; Ray Souter; Bernard Parresol; Richard F. Daniels

    2006-01-01

    Biomass estimation is critical for looking at ecosystem processes and as a measure of stand yield. The density-integral approach allows for coincident estimation of stem profile and biomass. The algebraic difference approach (ADA) permits the derivation of dynamic or nonstatic functions. In this study we applied the ADA to develop a self-referencing specific gravity...

  8. Motion Planning in a Society of Intelligent Mobile Agents

    NASA Technical Reports Server (NTRS)

    Esterline, Albert C.; Shafto, Michael (Technical Monitor)

    2002-01-01

    The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and process-algebraic patterns of communication actions intended to establish the common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.

  9. NPTool: Towards Scalability and Reliability of Business Process Management

    NASA Astrophysics Data System (ADS)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control involves countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps of NPTool include reuse of control-flow patterns and support for data flow management.

  10. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t1/2. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
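
    A quick numerical check of the baseline case discussed above: for an ordinary random walk (no Joseph, Noah or Moses effect) the width of the distribution of X_t scales as t^H with H = 1/2. The sketch estimates H from an ensemble of walks; it does not perform the full decomposition into the three exponents.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Ensemble of simple random walks; the standard deviation of X_t across the
    # ensemble should grow like t**0.5.
    n_walks, n_steps = 5000, 1024
    steps = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))
    X = np.cumsum(steps, axis=1)

    t = np.arange(1, n_steps + 1)
    width = X.std(axis=0)

    # Fit log(width) = H*log(t) + const over later times only
    mask = t > 16
    H = np.polyfit(np.log(t[mask]), np.log(width[mask]), 1)[0]
    print("estimated Hurst exponent:", round(H, 3))   # ~0.5
    ```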

  11. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  12. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.

    PubMed

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P

    2014-03-04

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.

  13. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Method of implementation of optoelectronic multiparametric signal processing systems based on multivalued-logic principles

    NASA Astrophysics Data System (ADS)

    Arestova, M. L.; Bykovskii, A. Yu

    1995-10-01

    An architecture is proposed for a specialised optoelectronic multivalued logic processor based on the Allen—Givone algebra. The processor is intended for multiparametric processing of data arriving from a large number of sensors or for tackling spectral analysis tasks. The processor architecture makes it possible to obtain an approximate general estimate of the state of an object being diagnosed on a p-level scale. Optoelectronic systems are proposed for MAXIMUM, MINIMUM, and LITERAL logic gates, based on optical-frequency encoding of logic levels. Corresponding logic gates form a complete set of logic functions in the Allen—Givone algebra.
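
    A software sketch of the three Allen-Givone primitives on a p-level scale (MAXIMUM, MINIMUM and the window LITERAL); the optical-frequency encoding of the proposed gates is of course not modelled, and the example function is made up.

    ```python
    # Integer logic levels 0 .. P-1 on a p-level scale.
    P = 5

    def max_gate(x, y):
        return max(x, y)

    def min_gate(x, y):
        return min(x, y)

    def literal(x, a, b):
        """Window literal X(a, b): full logic level inside [a, b], zero outside."""
        return P - 1 if a <= x <= b else 0

    # In the Allen-Givone algebra, p-valued functions are built as a MAX of MINs
    # of literals and constants; e.g. a function that outputs level 3 only when
    # x is in [1, 2] and y is at the top level:
    def f(x, y):
        return min_gate(3, min_gate(literal(x, 1, 2), literal(y, 4, P - 1)))

    print([[f(x, y) for y in range(P)] for x in range(P)])
    ```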

  14. Children's understanding of fraction and decimal symbols and the notation-specific relation to pre-algebra ability.

    PubMed

    Hurst, Michelle A; Cordes, Sara

    2018-04-01

    Fraction and decimal concepts are notoriously difficult for children to learn yet are a major component of elementary and middle school math curriculum and an important prerequisite for higher order mathematics (i.e., algebra). Thus, recently there has been a push to understand how children think about rational number magnitudes in order to understand how to promote rational number understanding. However, prior work investigating these questions has focused almost exclusively on fraction notation, overlooking the open questions of how children integrate rational number magnitudes presented in distinct notations (i.e., fractions, decimals, and whole numbers) and whether understanding of these distinct notations may independently contribute to pre-algebra ability. In the current study, we investigated rational number magnitude and arithmetic performance in both fraction and decimal notation in fourth- to seventh-grade children. We then explored how these measures of rational number ability predicted pre-algebra ability. Results reveal that children do represent the magnitudes of fractions and decimals as falling within a single numerical continuum and that, despite greater experience with fraction notation, children are more accurate when processing decimal notation than when processing fraction notation. Regression analyses revealed that both magnitude and arithmetic performance predicted pre-algebra ability, but magnitude understanding may be particularly unique and depend on notation. The educational implications of differences between children in the current study and previous work with adults are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. SD-CAS: Spin Dynamics by Computer Algebra System.

    PubMed

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems results from the fact that no matrix representation for spin operators is used in SD-CAS, which determines a full symbolic character to the performed computations. Spin correlations are stored in SD-CAS as four-entry nested lists of which size increases linearly with the number of spins into the system and are easily mapped into analytical expressions in terms of spin operator products. For the so defined SD-CAS spin correlations a set of specialized functions and procedures is introduced that are essential for implementing basic spin algebra operations, such as the spin operator products, commutators, and scalar products. They provide results in an abstract algebraic form: specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus in the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples. Copyright © 2010 Elsevier Inc. All rights reserved.
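
    SD-CAS itself is built on YACAS; as a loosely analogous, matrix-free illustration of symbolic spin algebra, the snippet below uses sympy's Pauli-algebra rules to reduce a spin-1/2 commutator without ever forming a matrix representation.

    ```python
    from sympy import I, simplify
    from sympy.physics.paulialgebra import Pauli

    # sympy applies sigma_i*sigma_j = delta_ij + i*eps_ijk*sigma_k automatically,
    # so commutators reduce symbolically, with no matrices involved.
    s1, s2, s3 = Pauli(1), Pauli(2), Pauli(3)

    comm = s1 * s2 - s2 * s1              # [sigma_x, sigma_y]
    print(comm)                           # -> 2*I*sigma3
    print(simplify(comm - 2 * I * s3))    # -> 0
    ```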

  16. Analytical approximations for spatial stochastic gene expression in single cells and tissues

    PubMed Central

    Smith, Stephen; Cianci, Claudia; Grima, Ramon

    2016-01-01

    Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686

  17. Image-algebraic design of multispectral target recognition algorithms

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    1994-06-01

    In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.

  18. Pole-placement Predictive Functional Control for under-damped systems with real numbers algebra.

    PubMed

    Zabet, K; Rossiter, J A; Haber, R; Abdullah, M

    2017-11-01

    This paper presents the new algorithm of PP-PFC (Pole-placement Predictive Functional Control) for stable, linear under-damped higher-order processes. It is shown that while conventional PFC aims to get first-order exponential behavior, this is not always straightforward with significant under-damped modes and hence a pole-placement PFC algorithm is proposed which can be tuned more precisely to achieve the desired dynamics, but exploits complex number algebra and linear combinations in order to deliver guarantees of stability and performance. Nevertheless, practical implementation is easier by avoiding complex number algebra and hence a modified formulation of the PP-PFC algorithm is also presented which utilises just real numbers while retaining the key attributes of simple algebra, coding and tuning. The potential advantages are demonstrated with numerical examples and real-time control of a laboratory plant. Copyright © 2017 ISA. All rights reserved.

  19. Some remarks on quantum physics, stochastic processes, and nonlinear filtering theory

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam

    2016-05-01

    The mathematical similarities between quantum mechanics and stochastic processes have been studied in the literature. Some of the major results are reviewed, such as the relationship between the Fokker-Planck equation and the Schrödinger equation. Also reviewed are more recent results that show the mathematical similarities between quantum many-particle systems and concepts in other areas of applied science, such as stochastic Petri nets. Some connections to filtering theory are discussed.

  20. QuBiLS-MAS, open source multi-platform software for atom- and bond-based topological (2D) and chiral (2.5D) algebraic molecular descriptors computations.

    PubMed

    Valdés-Martiní, José R; Marrero-Ponce, Yovani; García-Jacas, César R; Martinez-Mayorga, Karina; Barigye, Stephen J; Vaz d'Almeida, Yasser Silveira; Pham-The, Hai; Pérez-Giménez, Facundo; Morell, Carlos A

    2017-06-07

    In previous reports, Marrero-Ponce et al. proposed algebraic formalisms for characterizing topological (2D) and chiral (2.5D) molecular features through atom- and bond-based ToMoCoMD-CARDD (acronym for Topological Molecular Computational Design-Computer Aided Rational Drug Design) molecular descriptors. These MDs codify molecular information based on the bilinear, quadratic and linear algebraic forms and the graph-theoretical electronic-density and edge-adjacency matrices in order to consider atom- and bond-based relations, respectively. These MDs have been successfully applied in the screening of chemical compounds of different therapeutic applications ranging from antimalarials, antibacterials, tyrosinase inhibitors and so on. To compute these MDs, a computational program with the same name was initially developed. However, this in house software barely offered the functionalities required in contemporary molecular modeling tasks, in addition to the inherent limitations that made its usability impractical. Therefore, the present manuscript introduces the QuBiLS-MAS (acronym for Quadratic, Bilinear and N-Linear mapS based on graph-theoretic electronic-density Matrices and Atomic weightingS) software designed to compute topological (0-2.5D) molecular descriptors based on bilinear, quadratic and linear algebraic forms for atom- and bond-based relations. The QuBiLS-MAS module was designed as standalone software, in which extensions and generalizations of the former ToMoCoMD-CARDD 2D-algebraic indices are implemented, considering the following aspects: (a) two new matrix normalization approaches based on double-stochastic and mutual probability formalisms; (b) topological constraints (cut-offs) to take into account particular inter-atomic relations; (c) six additional atomic properties to be used as weighting schemes in the calculation of the molecular vectors; (d) four new local-fragments to consider molecular regions of interest; (e) number of lone-pair electrons in chemical structure defined by diagonal coefficients in matrix representations; and (f) several aggregation operators (invariants) applied over atom/bond-level descriptors in order to compute global indices. This software permits the parallel computation of the indices, contains a batch processing module and data curation functionalities. This program was developed in Java v1.7 using the Chemistry Development Kit library (version 1.4.19). The QuBiLS-MAS software consists of two components: a desktop interface (GUI) and an API library allowing for the easy integration of the latter in chemoinformatics applications. The relevance of the novel extensions and generalizations implemented in this software is demonstrated through three studies. Firstly, a comparative Shannon's entropy based variability study for the proposed QuBiLS-MAS and the DRAGON indices demonstrates superior performance for the former. A principal component analysis reveals that the QuBiLS-MAS approach captures chemical information orthogonal to that codified by the DRAGON descriptors. Lastly, a QSAR study for the binding affinity to the corticosteroid-binding globulin using Cramer's steroid dataset is carried out. From these analyses, it is revealed that the QuBiLS-MAS approach for atom-pair relations yields similar-to-superior performance with regard to other QSAR methodologies reported in the literature. 
Therefore, the QuBiLS-MAS approach constitutes a useful tool for the diversity analysis of chemical compound datasets and high-throughput screening of structure-activity data.
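
    One of the normalization schemes mentioned above is double-stochastic; a generic way to obtain such a matrix is the Sinkhorn-Knopp iteration sketched below on a random positive matrix (the actual QuBiLS-MAS matrices come from the graph-theoretic electronic-density representation, which is not reproduced here).

    ```python
    import numpy as np

    # Sinkhorn-Knopp iteration: alternately normalize rows and columns until the
    # matrix is (approximately) doubly stochastic.
    def sinkhorn(A, n_iter=500, tol=1e-10):
        M = A.astype(float).copy()
        for _ in range(n_iter):
            M /= M.sum(axis=1, keepdims=True)   # row-normalize
            M /= M.sum(axis=0, keepdims=True)   # column-normalize
            if np.allclose(M.sum(axis=1), 1.0, atol=tol):
                break
        return M

    rng = np.random.default_rng(4)
    A = rng.random((5, 5)) + 0.1        # strictly positive entries
    D = sinkhorn(A)
    print(D.sum(axis=0))                # ~ all ones
    print(D.sum(axis=1))                # ~ all ones
    ```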

  1. Universal Long Ranged Correlations in Driven Binary Mixtures

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2017-03-01

    When two populations of "particles" move in opposite directions, like oppositely charged colloids under an electric field or intersecting flows of pedestrians, they can move collectively, forming lanes along their direction of motion. The nature of this "laning transition" is still being debated and, in particular, the pair correlation functions, which are the key observables to quantify this phenomenon, have not been characterized yet. Here, we determine the correlations using an analytical approach based on a linearization of the stochastic equations for the density fields, which is valid for dense systems of soft particles. We find that the correlations decay algebraically along the direction of motion, and have a self-similar exponential profile in the transverse direction. Brownian dynamics simulations confirm our theoretical predictions and show that they also hold beyond the validity range of our analytical approach, pointing to a universal behavior.

  2. Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*

    DOE PAGES

    Bank, R.; Falgout, R. D.; Jones, T.; ...

    2015-10-29

    In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.

  3. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.

  4. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
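
    A minimal sketch of a doubly stochastic (Cox) process: the intensity is itself random (here piecewise-constant over hourly blocks) and event times are drawn by thinning against the maximum rate; all parameter values are illustrative rather than fitted to rain-gauge data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Random piecewise-constant intensity over a 24 h window, then thinning.
    T = 24.0                                                     # hours
    rates = rng.gamma(shape=2.0, scale=1.5, size=int(T))         # one rate per hour
    lam_max = rates.max()

    # Homogeneous candidate events at rate lam_max, kept with prob lambda(t)/lam_max
    n_cand = rng.poisson(lam_max * T)
    cand = np.sort(rng.uniform(0.0, T, size=n_cand))
    keep = rng.uniform(size=n_cand) < rates[cand.astype(int)] / lam_max
    events = cand[keep]

    print("mean intensity (events/h):", rates.mean())
    print("events generated         :", len(events))
    ```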

  5. Markovian limit for a reduced operation-valued stochastic process

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto

    1987-04-01

    Operation-valued stochastic processes give a formalization of the concept of continuous (in time) measurements in quantum mechanics. In this article, a first stage M of a measuring apparatus coupled to the system S is explicitly introduced, and continuous measurement of some observables of M is considered (one can speak of an indirect continuous measurement on S). When the degrees of freedom of the measuring apparatus M are eliminated and the weak coupling limit is taken, it is shown that an operation-valued stochastic process describing a direct continuous observation of the system S is obtained.

  6. Models for interrupted monitoring of a stochastic process

    NASA Technical Reports Server (NTRS)

    Palmer, E.

    1977-01-01

    As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.

  7. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  8. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2014-03-04

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
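
    The realizability issue addressed above can be seen in a one-variable special case, the neutral Wright-Fisher diffusion on [0, 1]: a naive Euler-Maruyama step can leave the allowed sample space, which is what the proposed drift/diffusion constraints are designed to rule out. The sketch below simply counts how often this happens; step size and initial condition are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Neutral Wright-Fisher diffusion:  dX = sqrt(X(1-X)) dW  on [0, 1].
    dt, n_steps, n_paths = 1e-3, 2000, 2000
    x = np.full(n_paths, 0.05)              # start near the boundary
    escaped = np.zeros(n_paths, dtype=bool)

    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + np.sqrt(np.clip(x * (1.0 - x), 0.0, None)) * dW
        escaped |= (x < 0.0) | (x > 1.0)    # left the admissible sample space
        x = np.clip(x, 0.0, 1.0)            # crude repair just to keep iterating

    print("fraction of paths that left [0,1] at least once:", escaped.mean())
    ```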

  9. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, J.; Ristorcelli, J. R.

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  10. Multivariate moment closure techniques for stochastic kinetic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  11. How Visual Imagery Contributed to College: A Case of How Visual Imagery Contributes to a College Algebra Student's Understanding of the Concept of Function in the United States

    ERIC Educational Resources Information Center

    Lane, Rebekah M.

    2011-01-01

    This investigation utilized the qualitative case study method. Seventy-one College Algebra students were given a mathematical processing instrument. This testing device measured a student's preference for visual thinking. Two students were purposefully selected using the instrument. The visual mathematical learner (VL) was discussed in this…

  12. The critical domain size of stochastic population models.

    PubMed

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
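
    A small illustration of the branching-process viewpoint mentioned above: for a Galton-Watson process with Poisson offspring, the extinction probability solves q = exp(m(q-1)), and a Monte Carlo estimate can be checked against that fixed point. This covers the non-spatial ingredient only; domain size and dispersal are not modelled here, and the offspring mean is an arbitrary choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Galton-Watson branching process with Poisson(m) offspring, m = 1.5
    # (supercritical), started from a single individual.
    m, n_runs, n_gen = 1.5, 20_000, 60
    extinct = 0

    for _ in range(n_runs):
        z = 1
        for _ in range(n_gen):
            if z == 0 or z > 10_000:          # extinct, or large enough to survive
                break
            z = rng.poisson(m, size=z).sum()  # offspring of the current generation
        extinct += (z == 0)

    q = 0.5
    for _ in range(200):                       # fixed-point iteration for q = e^{m(q-1)}
        q = np.exp(m * (q - 1.0))

    print("Monte Carlo extinction probability:", extinct / n_runs)
    print("fixed-point solution              :", q)
    ```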

  13. Extinction and survival in two-species annihilation

    DOE PAGES

    Amar, J. G.; Ben-Naim, E.; Davis, S. M.; ...

    2018-02-09

    In this paper, we study diffusion-controlled two-species annihilation with a finite number of particles. In this stochastic process, particles move diffusively, and when two particles of opposite type come into contact, the two annihilate. We focus on the behavior in three spatial dimensions and for initial conditions where particles are confined to a compact domain. Generally, one species outnumbers the other, and we find that the difference between the number of majority and minority species, which is a conserved quantity, controls the behavior. When the number difference exceeds a critical value, the minority becomes extinct and a finite number of majority particles survive, while below this critical difference, a finite number of particles of both species survive. The critical difference $\Delta_c$ grows algebraically with the total initial number of particles N, and when $N \gg 1$, the critical difference scales as $\Delta_c \sim N^{1/3}$. Furthermore, when the initial concentrations of the two species are equal, the average numbers of surviving majority and minority particles, $M_+$ and $M_-$, exhibit two distinct scaling behaviors, $M_+ \sim N^{1/2}$ and $M_- \sim N^{1/6}$. Finally, in contrast, when the initial populations are equal, these two quantities are comparable, $M_+ \sim M_- \sim N^{1/3}$.

  14. Phase Transition for the Maki-Thompson Rumour Model on a Small-World Network

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Pachon, Angelica; Rodriguez, Pablo M.; Tavani, Flavia

    2017-11-01

    We consider the Maki-Thompson model for the stochastic propagation of a rumour within a population. In this model the population is made up of "spreaders", "ignorants" and "stiflers"; any spreader attempts to pass the rumour to the other individuals via pair-wise interactions and in case the other individual is an ignorant, it becomes a spreader, while in the other two cases the initiating spreader turns into a stifler. In a finite population the process will eventually reach an equilibrium situation where individuals are either stiflers or ignorants. We extend the original hypothesis of homogenously mixed population by allowing for a small-world network embedding the model, in such a way that interactions occur only between nearest-neighbours. This structure is realized starting from a k-regular ring and by inserting, in the average, c additional links in such a way that k and c are tuneable parameters for the population architecture. We prove that this system exhibits a transition between regimes of localization (where the final number of stiflers is at most logarithmic in the population size) and propagation (where the final number of stiflers grows algebraically with the population size) at a finite value of the network parameter c. A quantitative estimate for the critical value of c is obtained via extensive numerical simulations.
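
    A simplified discrete-time rendition of these dynamics (a random spreader contacts a random neighbour; shortcuts are added to a k-regular ring as described) can be coded directly; it is a sketch for intuition, not the exact continuous-time model analysed in the paper, and the network sizes are arbitrary.

    ```python
    import random

    random.seed(9)

    # k-regular ring plus, on average, c random shortcut edges.
    def small_world(n, k, c):
        adj = {i: set() for i in range(n)}
        for i in range(n):
            for d in range(1, k // 2 + 1):          # ring lattice
                adj[i].add((i + d) % n)
                adj[(i + d) % n].add(i)
        for _ in range(c):                           # random shortcuts
            a, b = random.sample(range(n), 2)
            adj[a].add(b)
            adj[b].add(a)
        return adj

    # Maki-Thompson rule: ignorant neighbour -> new spreader;
    # spreader/stifler neighbour -> the initiating spreader becomes a stifler.
    def maki_thompson(adj):
        state = {i: "ignorant" for i in adj}
        state[0] = "spreader"
        spreaders = {0}
        while spreaders:
            s = random.choice(tuple(spreaders))
            nbr = random.choice(tuple(adj[s]))
            if state[nbr] == "ignorant":
                state[nbr] = "spreader"
                spreaders.add(nbr)
            else:
                state[s] = "stifler"
                spreaders.discard(s)
        return sum(v == "stifler" for v in state.values())

    n = 2000
    print("final stiflers, few shortcuts :", maki_thompson(small_world(n, 4, 10)))
    print("final stiflers, many shortcuts:", maki_thompson(small_world(n, 4, 2000)))
    ```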

  15. Extinction and survival in two-species annihilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amar, J. G.; Ben-Naim, E.; Davis, S. M.

    In this paper, we study diffusion-controlled two-species annihilation with a finite number of particles. In this stochastic process, particles move diffusively, and when two particles of opposite type come into contact, the two annihilate. We focus on the behavior in three spatial dimensions and for initial conditions where particles are confined to a compact domain. Generally, one species outnumbers the other, and we find that the difference between the number of majority and minority species, which is a conserved quantity, controls the behavior. When the number difference exceeds a critical value, the minority becomes extinct and a finite number of majority particles survive, while below this critical difference, a finite number of particles of both species survive. The critical difference $\Delta_c$ grows algebraically with the total initial number of particles N, and when $N \gg 1$, the critical difference scales as $\Delta_c \sim N^{1/3}$. Furthermore, when the initial concentrations of the two species are equal, the average numbers of surviving majority and minority particles, $M_+$ and $M_-$, exhibit two distinct scaling behaviors, $M_+ \sim N^{1/2}$ and $M_- \sim N^{1/6}$. Finally, in contrast, when the initial populations are equal, these two quantities are comparable, $M_+ \sim M_- \sim N^{1/3}$.

  16. Fuel Injector: Air swirl characterization aerothermal modeling, phase 2, volume 2

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Mcdonell, V. G.; Samuelson, G. S.

    1993-01-01

    A well integrated experimental/analytical investigation was conducted to provide benchmark quality data relevant to prefilming type airblast fuel nozzle and its interaction with combustor dome air swirler. The experimental investigation included a systematic study of both single-phase flows that involved single and twin co-axial jets with and without swirl. A two-component Phase Doppler Particle Analyzer (PDPA) equipment was used to document the interaction of single and co-axial air jets with glass beads that simulate nonevaporating spray and simultaneously avoid the complexities associated with fuel atomization processes and attendant issues about the specification of relevant boundary conditions. The interaction of jets with methanol spray produced by practical airblast nozzle was also documented in the spatial domain of practical interest. Model assessment activities included the use of three turbulence models (k-epsilon, algebraic second moment (ASM) and differential second moment (DSM)) for the carrier phase, deterministic or stochastic Lagrangian treatment of the dispersed phase, and advanced numerical schemes. Although qualitatively good comparison with data was obtained for most of the cases investigated, the model deficiencies in regard to modeled dissipation rate transport equation, single length scale, pressure-strain correlation, and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to analytically design combustion systems.

  17. Fuel injector: Air swirl characterization aerothermal modeling, phase 2, volume 1

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Mcdonell, V. G.; Samuelsen, G. S.

    1993-01-01

    A well integrated experimental/analytical investigation was conducted to provide benchmark quality data relevant to a prefilming type airblast fuel nozzle and its interaction with the combustor dome air swirler. The experimental investigation included a systematic study of both single-phase flows that involved single and twin co-axial jets with and without swirl. A two-component Phase Doppler Particle Analyzer (PDPA) was used to document the interaction of single and co-axial air jets with glass beads that simulate nonevaporating spray and simultaneously avoid the complexities associated with fuel atomization processes and attendant issues about the specification of relevant boundary conditions. The interaction of jets with methanol spray produced by practical airblast nozzle was also documented in the spatial domain of practical interest. Model assessment activities included the use of three turbulence models (k-epsilon, algebraic second moment (ASM), and differential second moment (DSM)) for the carrier phase, deterministic or stochastic Lagrangian treatment of the dispersed phase, and advanced numerical schemes. Although qualitatively good comparison with data was obtained for most of the cases investigated, the model deficiencies in regard to modeled dissipation rate transport equation, single length scale, pressure-strain correlation, and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to analytically design combustion systems.

  18. Time-ordered product expansions for computational stochastic system biology.

    PubMed

    Mjolsness, Eric

    2013-06-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
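
    For reference, a minimal version of the SSA that the abstract re-derives, applied to a birth-death network with a known stationary mean; the reaction rates are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Gillespie SSA for the network  0 -> A (rate k1),  A -> 0 (rate k2*A);
    # the stationary mean copy number is k1/k2.
    k1, k2 = 10.0, 0.5
    t, t_end, a = 0.0, 200.0, 0
    trace = []

    while t < t_end:
        rates = np.array([k1, k2 * a])           # reaction propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # waiting time to next reaction
        if rng.uniform() < rates[0] / total:     # choose which reaction fires
            a += 1
        else:
            a -= 1
        trace.append(a)

    # Crude event-averaged mean over the second half of the run
    print("simulated mean copy number:", np.mean(trace[len(trace) // 2:]))
    print("expected stationary mean  :", k1 / k2)
    ```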

  19. The Two-On-One Stochastic Duel

    DTIC Science & Technology

    1983-12-01

    ACN 67500, TRASANA-TR-43-83. The Two-On-One Stochastic Duel. Prepared by A. V. Gafarian and C. J. Ancker, Jr., December 1983. Type of report: Final Report. Keywords: stochastic duels, stochastic processes, and attrition.

  20. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.

  1. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage dependent calcium channels (VDCCs) followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for other variability sources. To capture the stochasticity of the calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  2. Derivation of kinetic equations from non-Wiener stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Basharov, A. M.

    2013-12-01

    Kinetic differential-difference equations containing terms with fractional derivatives and describing α-stable Lévy processes with 0 < α < 1 have been derived in a unified manner in terms of one-dimensional stochastic differential equations controlled merely by the Poisson processes.

  3. Explicating mathematical thinking in differential equations using a computer algebra system

    NASA Astrophysics Data System (ADS)

    Zeynivandnezhad, Fereshteh; Bates, Rachel

    2018-07-01

    The importance of developing students' mathematical thinking is frequently highlighted in the literature on the teaching and learning of mathematics. Despite this importance, most curricula and instructional activities for undergraduate mathematics fail to bring the learner beyond the mathematics. The purpose of this study was to enhance students' mathematical thinking by implementing a computer algebra system and active learning pedagogical approaches. Students' mathematical thinking processes were analyzed while they completed specific differential equations tasks, based on posed prompts and questions and the Instrumental Genesis framework. Data were collected from 37 engineering students in a public Malaysian university. This study used a descriptive and interpretive qualitative research design to investigate the students' perspectives of emerging mathematical understanding and approaches to learning mathematics in an undergraduate differential equations course. Results of this study concluded that students used a variety of mathematical thinking processes in a non-sequential manner. Additionally, the outcomes provide justification for the continued use of technologies such as computer algebra systems in undergraduate mathematics courses and the need for further studies to uncover the various processes students utilize to complete specific mathematical tasks.

  4. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    PubMed

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.
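    As background, the driving Ornstein-Uhlenbeck noise itself is straightforward to generate; the following Python sketch uses the exact one-step transition of the OU process (the parameter values are illustrative assumptions, and this is not the hierarchy construction of the paper).

```python
import numpy as np

def ou_path(theta, mu, sigma, x0, dt, n_steps, rng=None):
    """Exact discretization of the OU process dX = theta*(mu - X) dt + sigma dW."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    decay = np.exp(-theta * dt)
    # exact one-step standard deviation of the OU transition density
    step_std = sigma * np.sqrt((1.0 - decay**2) / (2.0 * theta))
    for k in range(n_steps):
        x[k + 1] = mu + (x[k] - mu) * decay + step_std * rng.standard_normal()
    return x

path = ou_path(theta=1.0, mu=0.0, sigma=0.5, x0=0.0, dt=0.01, n_steps=10_000)
```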

  5. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method called the Riccati-Bernoulli sub-ODE technique is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the random input is also studied with respect to the stability of the stochastic process solution.

  6. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and of the relative importance of stochastic and deterministic processes along successional soils. Extremely acidic or alkaline pH conditions lead to the assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force in producing trends in the phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  7. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem

    PubMed Central

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.

    2014-01-01

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501

  8. Data-driven monitoring for stochastic systems and its application on batch process

    NASA Astrophysics Data System (ADS)

    Yin, Shen; Ding, Steven X.; Haghani Abandan Sari, Adel; Hao, Haiyang

    2013-07-01

    Batch processes are characterised by a prescribed processing of raw materials into final products for a finite duration and play an important role in many industrial sectors due to their low-volume, high-value products. Process dynamics and stochastic disturbances are inherent characteristics of batch processes, which make the monitoring of batch processes a challenging problem in practice. To solve this problem, a subspace-aided data-driven approach is presented in this article for batch process monitoring. The advantages of the proposed approach lie in its simple form and its abilities to deal with the stochastic disturbances and process dynamics existing in the process. The kernel density estimation, which serves as a non-parametric way of estimating the probability density function, is utilised for threshold calculation. An industrial benchmark of fed-batch penicillin production is finally utilised to verify the effectiveness of the proposed approach.
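    The kernel-density-based threshold step can be sketched in a few lines; the following Python fragment fits a Gaussian KDE to a hypothetical fault-free set of monitoring statistics and takes a high quantile as the control limit (the chi-square training data are purely illustrative).

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_threshold(train_stats, alpha=0.99, n_samples=100_000):
    """Estimate a control limit for a monitoring statistic as the alpha-quantile
    of a Gaussian KDE fitted to fault-free training data (illustrative only)."""
    kde = gaussian_kde(train_stats)
    samples = kde.resample(n_samples)        # draw from the fitted density
    return float(np.quantile(samples, alpha))

# Hypothetical fault-free T^2-like statistics recorded during normal operation
train = np.random.default_rng(0).chisquare(df=5, size=2000)
limit = kde_threshold(train, alpha=0.99)
# Online monitoring would flag a batch whose statistic exceeds `limit`
```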

  9. Stochastic evolutionary voluntary public goods game with punishment in a Quasi-birth-and-death process.

    PubMed

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2017-11-23

    The traditional replicator dynamics model and the corresponding concept of evolutionarily stable strategy (ESS) only take into account whether the system can return to the equilibrium after being subjected to a small disturbance. In the real world, due to continuous noise, the ESS of the system may not be stochastically stable. In this paper, a model of the voluntary public goods game with punishment is studied in a stochastic setting. Unlike the existing model, we describe the evolutionary process of strategies in the population as a generalized quasi-birth-and-death process, and we investigate the stochastic stable equilibrium (SSE) instead. By numerical experiments, we obtain all possible SSEs of the system for any combination of parameters and investigate the influence of the parameters on the probabilities of the system selecting different equilibria. It is found that in the stochastic situation the introduction of the punishment and non-participation strategies can change the evolutionary dynamics of the system and the equilibrium of the game. There is a large range of parameters for which the system selects the cooperative states as its SSE with a high probability. This result provides us with an insight into, and a control method for, the evolution of cooperation in the public goods game in stochastic situations.

  10. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities were assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333

  11. Stochastic dynamics and stable equilibrium of evolutionary optional public goods game in finite populations

    NASA Astrophysics Data System (ADS)

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2018-07-01

    Continuous noise caused by mutation is widely present in evolutionary systems. Considering the noise effects and under the optional participation mechanism, a stochastic model for evolutionary public goods game in a finite size population is established. The evolutionary process of strategies in the population is described as a multidimensional ergodic and continuous time Markov process. The stochastic stable state of the system is analyzed by the limit distribution of the stochastic process. By numerical experiments, the influences of the fixed income coefficient for non-participants and the investment income coefficient of the public goods on the stochastic stable equilibrium of the system are analyzed. Through the numerical calculation results, we found that the optional participation mechanism can change the evolutionary dynamics and the equilibrium of the public goods game, and there is a range of parameters which can effectively promote the evolution of cooperation. Further, we obtain the accurate quantitative relationship between the parameters and the probabilities for the system to choose different stable equilibriums, which can be used to realize the control of cooperation.

  12. q-Gaussian distributions and multiplicative stochastic processes for analysis of multiple financial time series

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2010-12-01

    This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be derived theoretically as the stationary probability distribution of a multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. By using the proposed stochastic differential equation, a method to evaluate a default probability under a given risk buffer is proposed.
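    As a concrete, hedged illustration of the class of processes discussed (not the paper's exact M-dimensional specification), a scalar SDE with both multiplicative and additive Gaussian noises can be integrated with a plain Euler-Maruyama loop; such linear multiplicative-plus-additive models are known to develop heavy, power-law-like tails in their stationary distribution.

```python
import numpy as np

def euler_maruyama_mult_add(x0, gamma, sig_m, sig_a, dt, n_steps, rng=None):
    """Euler-Maruyama integration of the illustrative SDE
        dX = -gamma * X dt + sig_m * X dW1 + sig_a dW2,
    with independent Wiener increments dW1, dW2."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    sqdt = np.sqrt(dt)
    for k in range(n_steps):
        dw1, dw2 = rng.standard_normal(2) * sqdt
        x[k + 1] = x[k] - gamma * x[k] * dt + sig_m * x[k] * dw1 + sig_a * dw2
    return x

path = euler_maruyama_mult_add(x0=1.0, gamma=1.0, sig_m=0.8, sig_a=0.2,
                               dt=1e-3, n_steps=50_000)
```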

  13. Modelling the cancer growth process by Stochastic Differential Equations with the effect of Chondroitin Sulfate (CS) as anticancer therapeutics

    NASA Astrophysics Data System (ADS)

    Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina

    2017-09-01

    A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function. The Euler-Maruyama numerical method is employed to solve the model. The efficiency of the stochastic model is measured by comparing the simulated result with the experimental data.
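    The abstract does not state the model equations; as a sketch only, a commonly used stochastic logistic growth form integrated by Euler-Maruyama, with a reduced growth rate standing in hypothetically for the CS treatment effect, would look like this.

```python
import numpy as np

def cancer_growth_em(v0, r, K, sigma, dt, n_steps, rng=None):
    """Euler-Maruyama for a stochastic logistic tumour-growth model
        dV = r * V * (1 - V/K) dt + sigma * V dW
    (an assumed form, not the model from the cited study)."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.empty(n_steps + 1)
    v[0] = v0
    sqdt = np.sqrt(dt)
    for k in range(n_steps):
        drift = r * v[k] * (1.0 - v[k] / K)
        step = drift * dt + sigma * v[k] * sqdt * rng.standard_normal()
        v[k + 1] = max(v[k] + step, 0.0)    # biomass cannot become negative
    return v

# A reduced growth rate r mimics a hypothetical anti-cancer (e.g., CS) effect
untreated = cancer_growth_em(v0=1.0, r=0.3, K=100.0, sigma=0.05, dt=0.01, n_steps=5000)
treated   = cancer_growth_em(v0=1.0, r=0.1, K=100.0, sigma=0.05, dt=0.01, n_steps=5000)
```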

  14. Automation of Presentation Record Production Based on Rich-Media Technology Using SNT Petri Nets Theory.

    PubMed

    Martiník, Ivo

    2015-01-01

    Rich-media describes a broad range of digital interactive media that is increasingly used in the Internet and also in the support of education. Last year, a special pilot audiovisual lecture room was built as a part of the MERLINGO (MEdia-rich Repository of LearnING Objects) project solution. It contains all the elements of the modern lecture room determined for the implementation of presentation recordings based on the rich-media technologies and their publication online or on-demand featuring the access of all its elements in the automated mode including automatic editing. Property-preserving Petri net process algebras (PPPA) were designed for the specification and verification of the Petri net processes. PPPA does not need to verify the composition of the Petri net processes because all their algebraic operators preserve the specified set of the properties. These original PPPA are significantly generalized for the newly introduced class of the SNT Petri process and agent nets in this paper. The PLACE-SUBST and ASYNC-PROC algebraic operators are defined for this class of Petri nets and their chosen properties are proved. The SNT Petri process and agent nets theory were significantly applied at the design, verification, and implementation of the programming system ensuring the pilot audiovisual lecture room functionality.

  15. Automation of Presentation Record Production Based on Rich-Media Technology Using SNT Petri Nets Theory

    PubMed Central

    Martiník, Ivo

    2015-01-01

    Rich-media describes a broad range of digital interactive media that is increasingly used in the Internet and also in the support of education. Last year, a special pilot audiovisual lecture room was built as a part of the MERLINGO (MEdia-rich Repository of LearnING Objects) project solution. It contains all the elements of the modern lecture room determined for the implementation of presentation recordings based on the rich-media technologies and their publication online or on-demand featuring the access of all its elements in the automated mode including automatic editing. Property-preserving Petri net process algebras (PPPA) were designed for the specification and verification of the Petri net processes. PPPA does not need to verify the composition of the Petri net processes because all their algebraic operators preserve the specified set of the properties. These original PPPA are significantly generalized for the newly introduced class of the SNT Petri process and agent nets in this paper. The PLACE-SUBST and ASYNC-PROC algebraic operators are defined for this class of Petri nets and their chosen properties are proved. The SNT Petri process and agent nets theory were significantly applied at the design, verification, and implementation of the programming system ensuring the pilot audiovisual lecture room functionality. PMID:26258164

  16. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains

    PubMed Central

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-01-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric, numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage savings over direct approaches. PMID:24626049

  17. Research in Stochastic Processes.

    DTIC Science & Technology

    1982-10-31

    Office of Scientific Research Grant AFOSR F49620-82-C-0009. Period: 1 November 1981 through 31 October 1982. Title: Research in Stochastic Processes. Co... Stamatis Cambanis. The work briefly described here was developed in connection with problems arising from and related to statistical communication...

  18. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  19. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper substitutes the log-normal jumps in the Heston stochastic volatility model by the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct the stochastic volatility tempered stable Lévy process (TSSV) model. The TSSV model framework permits the infinite-activity jump behaviors of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating the tempered stable process to the stochastic volatility process, capturing the leptokurtosis, fat-tailedness, and asymmetry features of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.

  20. Kinetic theory of age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Greenman, Chris D.; Chou, Tom

    2016-01-01

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but are unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Stochastic theories that treat semi-Markov age-dependent processes using, e.g., the Bellman-Harris equation do not resolve a population's age structure and are unable to quantify population-size dependencies. Conversely, current theories that include size-dependent population dynamics (e.g., mathematical models that include carrying capacity such as the logistic equation) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new, fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY)-like hierarchy. Explicit solutions are derived in three limits: no birth, no death, and steady state. These are then compared with their corresponding mean-field results. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution.
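    To make the setting concrete, an individual-based simulation of an age-structured birth-death process can be written as a Gillespie-type loop over individuals; the age- and population-size-dependent rate functions below are illustrative assumptions, and rates are treated as constant between events, which is an approximation.

```python
import numpy as np

def age_structured_birth_death(t_max, birth_rate, death_rate, n0=10, rng=None):
    """Individual-based simulation of an age-structured birth-death process.
    birth_rate(age, n) and death_rate(age, n) may depend on an individual's age
    and on the current population size n (illustrative rate forms only)."""
    rng = np.random.default_rng() if rng is None else rng
    ages, t = list(np.zeros(n0)), 0.0
    history = [(t, n0)]
    while t < t_max and ages:
        n = len(ages)
        b = np.array([birth_rate(a, n) for a in ages])
        d = np.array([death_rate(a, n) for a in ages])
        total = b.sum() + d.sum()
        dt = rng.exponential(1.0 / total)        # rates held fixed until the next event
        t += dt
        ages = [a + dt for a in ages]            # everyone ages between events
        idx = rng.choice(2 * n, p=np.concatenate([b, d]) / total)
        if idx < n:
            ages.append(0.0)                     # birth: a new individual of age 0
        else:
            ages.pop(idx - n)                    # death: remove the chosen individual
        history.append((t, len(ages)))
    return history

hist = age_structured_birth_death(
    t_max=50.0,
    birth_rate=lambda a, n: 0.3 / (1.0 + n / 100.0),   # crowding-limited births
    death_rate=lambda a, n: 0.02 + 0.01 * a)           # mortality increases with age
```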

  1. Research in Stochastic Processes

    DTIC Science & Technology

    1988-08-31

    stationary sequence, Stochastic Proc. Appl. 29, 1988, 155-169. T. Hsing, J. Hüsler and M.R. Leadbetter, On the exceedance point process for a stationary... Nandagopalan, On exceedance point processes for "regular" sample functions, Proc. Volume, Oberwolfach Conf. on Extreme Value Theory, J. Hüsler and R. Reiss... exceedance point processes for stationary sequences under mild oscillation restrictions, Apr. 88. Oberwolfach Conf. on Extreme Value Theory. Ed. J. Hüsler

  2. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society

  3. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
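    The continuous part of such a hybrid scheme is the chemical Langevin equation; the sketch below shows one Euler step of the CLE for a generic network and applies it to an assumed fast reversible dimerisation, without reproducing the paper's partitioning or "Next Reaction" bookkeeping.

```python
import numpy as np

def cle_step(x, stoich, propensities, dt, rng):
    """One Euler step of the chemical Langevin equation
    dX = S a(X) dt + S diag(sqrt(a(X))) dW, with stoichiometry matrix S and
    propensity vector a(X); intended for the 'fast' reaction subset only."""
    a = np.maximum(propensities(x), 0.0)         # clamp to avoid sqrt of negatives
    noise = np.sqrt(a * dt) * rng.standard_normal(len(a))
    return x + stoich @ (a * dt) + stoich @ noise

# Illustrative fast reversible dimerisation 2A <-> B
k_f, k_b = 1e-3, 1.0
stoich = np.array([[-2.0,  2.0],    # species A
                   [ 1.0, -1.0]])   # species B
prop = lambda x: np.array([k_f * x[0] * (x[0] - 1) / 2.0, k_b * x[1]])
rng = np.random.default_rng(1)
x = np.array([1000.0, 0.0])
for _ in range(1000):
    x = cle_step(x, stoich, prop, dt=1e-4, rng=rng)
```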

  4. Generalized EMV-Effect Algebras

    NASA Astrophysics Data System (ADS)

    Borzooei, R. A.; Dvurečenskij, A.; Sharafi, A. H.

    2018-04-01

    Recently, in Dvurečenskij and Zahiri (2017), new algebraic structures called EMV-algebras, which generalize both MV-algebras and generalized Boolean algebras, were introduced. We present equivalent conditions for EMV-algebras. In addition, we define a partial algebraic structure, called a generalized EMV-effect algebra, which is close to generalized MV-effect algebras. Finally, we show that every generalized EMV-effect algebra is either an MV-effect algebra or can be embedded into an MV-effect algebra as a maximal ideal.

  5. The Matrix Pencil and its Applications to Speech Processing

    DTIC Science & Technology

    2007-03-01

    "Elementary Linear Algebra", 8th edition, pp. 278, 2000, John Wiley & Sons, Inc., New York. [37] Wai C. Chu, "Speech Coding Algorithms", New Jersey: John... Ben; Daniel, James W., "Applied Linear Algebra", pp. 342-345, 1988, Prentice Hall, Englewood Cliffs, NJ. [35] Haykin, Simon, "Applied Linear Adaptive... ABSTRACT: Matrix pencils facilitate the study of differential equations resulting from oscillating systems. Certain problems in linear ordinary...

  6. Balance point characterization of interstitial fluid volume regulation.

    PubMed

    Dongaonkar, R M; Laine, G A; Stewart, R H; Quick, C M

    2009-07-01

    The individual processes involved in interstitial fluid volume and protein regulation (microvascular filtration, lymphatic return, and interstitial storage) are relatively simple, yet their interaction is exceedingly complex. There is a notable lack of a first-order, algebraic formula that relates interstitial fluid pressure and protein to critical parameters commonly used to characterize the movement of interstitial fluid and protein. Therefore, the purpose of the present study is to develop a simple, transparent, and general algebraic approach that predicts interstitial fluid pressure (P(i)) and protein concentrations (C(i)) that takes into consideration all three processes. Eight standard equations characterizing fluid and protein flux were solved simultaneously to yield algebraic equations for P(i) and C(i) as functions of parameters characterizing microvascular, interstitial, and lymphatic function. Equilibrium values of P(i) and C(i) arise as balance points from the graphical intersection of transmicrovascular and lymph flows (analogous to Guyton's classical cardiac output-venous return curves). This approach goes beyond describing interstitial fluid balance in terms of conservation of mass by introducing the concept of inflow and outflow resistances. Algebraic solutions demonstrate that P(i) and C(i) result from a ratio of the microvascular filtration coefficient (1/inflow resistance) and effective lymphatic resistance (outflow resistance), and P(i) is unaffected by interstitial compliance. These simple algebraic solutions predict P(i) and C(i) that are consistent with reported measurements. The present work therefore presents a simple, transparent, and general balance point characterization of interstitial fluid balance resulting from the interaction of microvascular, interstitial, and lymphatic function.

  7. Online POMDP Algorithms for Very Large Observation Spaces

    DTIC Science & Technology

    2017-06-06

    • Luo, Yuanfu, Haoyu Bai, ... and Wee Sun Lee. "Adaptive stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593. 2015.

  8. An Analysis of Stochastic Duels Involving Fixed Rates of Fire

    DTIC Science & Technology

    The thesis presents an analysis of stochastic duels involving two opposing weapon systems with constant rates of fire. The duel was developed as a... process stochastic duels. The analysis was then extended to the two-versus-one duel, where the three weapon systems were assumed to have fixed rates of fire.

  9. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.

    PubMed

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K

    2011-04-15

    The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.

  10. A kinetic theory for age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Chou, Tom; Greenman, Chris

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but they are structurally unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Conversely, current theories that include size-dependent population dynamics (e.g., carrying capacity) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a BBGKY-like hierarchy. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution. NSF.

  11. Chaotic Expansions of Elements of the Universal Enveloping Superalgebra Associated with a Z2-graded Quantum Stochastic Calculus

    NASA Astrophysics Data System (ADS)

    Eyre, T. M. W.

    Given a polynomial function f of classical stochastic integrator processes whose differentials satisfy a closed Itô multiplication table, we can express the stochastic derivative of f in closed form. We establish an analogue of this formula, in the form of a chaotic decomposition, for Z2-graded theories of quantum stochastic calculus based on the natural coalgebra structure of the universal enveloping superalgebra.

  12. Stochastic dynamics of melt ponds and sea ice-albedo climate feedback

    NASA Astrophysics Data System (ADS)

    Sudakov, Ivan

    The evolution of melt ponds on the Arctic sea surface is a complicated stochastic process. We suggest a low-order model with ice-albedo feedback which describes the stochastic dynamics of the geometrical characteristics of melt ponds. The model is a stochastic dynamical system model of the energy balance in the climate system. We describe the equilibria of this model. We conclude that the transition in the fractal dimension of melt ponds affects the shape of the sea-ice albedo curve.

  13. Effects of Stochastic Traffic Flow Model on Expected System Performance

    DTIC Science & Technology

    2012-12-01

    NSWC-PCD has made considerable improvements to their pedestrian flow modeling. In addition to the linear paths, the 2011 version now includes... using stochastic paths. 2.2 Linear Paths vs. Stochastic Paths. 2.2.1 Linear Paths and Direct Maximum Pd Calculation. Modeling pedestrian traffic flow... as a stochastic process begins with the linear path model. Let the detection area be R x C voxels. This creates C^2 total linear paths, path(Cs...

  14. Continuum analogues of contragredient Lie algebras (Lie algebras with a Cartan operator and nonlinear dynamical systems)

    NASA Astrophysics Data System (ADS)

    Saveliev, M. V.; Vershik, A. M.

    1989-12-01

    We present an axiomatic formulation of a new class of infinite-dimensional Lie algebras: the generalizations of Z-graded Lie algebras with, generally speaking, an infinite-dimensional Cartan subalgebra and a contiguous set of roots. We call such algebras “continuum Lie algebras.” The simple Lie algebras of constant growth are encapsulated in our formulation. We pay particular attention to the case when the local algebra is parametrized by a commutative algebra while the Cartan operator (the generalization of the Cartan matrix) is a linear operator. Special examples of these algebras are the Kac-Moody algebras, algebras of Poisson brackets, algebras of vector fields on a manifold, current algebras, and algebras with a differential or integro-differential Cartan operator. The nonlinear dynamical systems associated with the continuum contragredient Lie algebras are also considered.

  15. Metrics for Labeled Markov Systems

    NASA Technical Reports Server (NTRS)

    Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash

    1999-01-01

    Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The present results indicate the following. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains; and show that parallel composition does not increase asymptotic distance.

  16. Stochastic description of quantum Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Yan, Yun-An; Shao, Jiushu

    2016-08-01

    Classical Brownian motion has been well investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution; rather, they are correlated due to the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, but not efficient for long-time dynamics, as the statistical errors grow very fast. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems such as the dynamical description of the quantum phase transition (localization) and the numerical stability of the trace-conserving, nonlinear stochastic Liouville equation are outlined.

  17. Inter-species competition-facilitation in stochastic riparian vegetation dynamics.

    PubMed

    Tealdi, Stefano; Camporeale, Carlo; Ridolfi, Luca

    2013-02-07

    Riparian vegetation is a highly dynamic community that lives on river banks and depends to a great extent on the fluvial hydrology. The stochasticity of the discharge and of the erosion/deposition processes in fact plays a key role in determining the distribution of vegetation along a riparian transect. These abiotic processes interact with biotic competition/facilitation mechanisms, such as plant competition for light, water, and nutrients. In this work, we focus on the dynamics of plants characterized by three components: (1) stochastic forcing due to river discharges, (2) competition for resources, and (3) inter-species facilitation due to the interplay between vegetation and fluid dynamics processes. A minimalist stochastic bio-hydrological model is proposed for the dynamics of the biomass of two vegetation species: one species is assumed to be dominant and slow-growing, the other subdominant but fast-growing. The stochastic model is solved analytically and the probability density function of the plant biomasses is obtained as a function of both the hydrologic and biologic parameters. The impact of the competition/facilitation processes on the distribution of vegetation species along the riparian transect is investigated and remarkable effects are observed. Finally, a good qualitative agreement is found between the model results and field data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Is quantum theory a form of statistical mechanics?

    NASA Astrophysics Data System (ADS)

    Adler, S. L.

    2007-05-01

    We give a review of the basic themes of my recent book: Adler S L 2004 Quantum Theory as an Emergent Phenomenon (Cambridge: Cambridge University Press). We first give motivations for considering the possibility that quantum mechanics is not exact, but is instead an accurate asymptotic approximation to a deeper level theory. For this deeper level, we propose a non-commutative generalization of classical mechanics, that we call "trace dynamics", and we give a brief survey of how it works, considering for simplicity only the bosonic case. We then discuss the statistical mechanics of trace dynamics and give our argument that with suitable approximations, the Ward identities for trace dynamics imply that ensemble averages in the canonical ensemble correspond to Wightman functions in quantum field theory. Thus, quantum theory emerges as the statistical thermodynamics of trace dynamics. Finally, we argue that Brownian motion corrections to this thermodynamics lead to stochastic corrections to the Schrödinger equation, of the type that have been much studied in the "continuous spontaneous localization" model of objective state vector reduction. In appendices to the talk, we give details of the existence of a conserved operator in trace dynamics that encodes the structure of the canonical algebra, of the derivation of the Ward identities, and of the proof that the stochastically-modified Schrödinger equation leads to state vector reduction with Born rule probabilities.

  19. Characteristic operator functions for quantum input-plant-output models and coherent control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gough, John E.

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of the definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements and amount to a model reduction in which the fast degrees of freedom are decoupled from the slow ones and eliminated.
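    The remark that the limit coefficients arise algebraically as Schur complements can be illustrated with a generic linear-algebra example (purely illustrative numbers, not a model from the paper): eliminating the fast block of a partitioned matrix leaves the Schur complement acting on the slow block.

```python
import numpy as np

# Partitioned coefficient matrix: "slow" block A, "fast" block D
A = np.array([[1.0, 0.5],
              [0.0, 2.0]])
B = np.array([[0.2],
              [0.1]])
C = np.array([[0.3, 0.4]])
D = np.array([[10.0]])          # fast degrees of freedom (large, invertible)

# Schur complement of D: the effective slow-block operator after elimination
schur = A - B @ np.linalg.solve(D, C)

# Cross-check: solve the full block system, then the reduced slow system
M = np.block([[A, B], [C, D]])
rhs = np.array([1.0, -1.0, 0.0])
x_full = np.linalg.solve(M, rhs)
x_slow = np.linalg.solve(schur, rhs[:2] - B @ np.linalg.solve(D, rhs[2:]))
assert np.allclose(x_full[:2], x_slow)   # the slow components agree
```

    The reduced system obtained this way is exactly the algebraic analogue of decoupling and eliminating the fast degrees of freedom mentioned in the abstract.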

  20. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    NASA Astrophysics Data System (ADS)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
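    The solver structure described (conjugate gradients on a reduced SPD system with a domain-decomposition preconditioner) can be mimicked on a small model problem; the Python sketch below uses SciPy's CG with an incomplete-LU preconditioner as a simple stand-in for the additive Schwarz preconditioner and the DOUG solver used in the paper, and the heterogeneous coefficient field is an assumption.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# SPD model problem: 2D 5-point Laplacian scaled by a rough coefficient field
n = 50
rng = np.random.default_rng(0)
coeff = np.exp(rng.normal(0.0, 1.0, size=n * n))        # crude permeability proxy
lap = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n],
               shape=(n * n, n * n), format="csc")
A = sp.diags(np.sqrt(coeff)) @ lap @ sp.diags(np.sqrt(coeff))   # keeps symmetry
b = np.ones(n * n)

# Incomplete-LU preconditioner as a simple stand-in for additive Schwarz
ilu = spla.spilu(A.tocsc())
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.cg(A, b, M=M)
assert info == 0   # info == 0 signals convergence
```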

  1. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and... J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes... stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schertzer, Daniel, E-mail: Daniel.Schertzer@enpc.fr; Tchiguirinskaia, Ioulia, E-mail: Ioulia.Tchiguirinskaia@enpc.fr

    In the mid 1980s, the development of multifractal concepts and techniques was an important breakthrough for complex system analysis and simulation, in particular, in turbulence and hydrology. Multifractals indeed aimed to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations or on simplified conceptual models. However, this development has been rather limited to dealing with scalar fields, whereas most of the fields of interest are vector-valued or even manifold-valued. We show in this paper that the combination of stable Lévy processes with Clifford algebra is a good candidate to bridge the present gap between theory and applications. We show that it indeed defines a convenient framework to generate multifractal vector fields, possibly multifractal manifold-valued fields, based on a few fundamental and complementary properties of Lévy processes and Clifford algebra. In particular, the vector structure of these algebras is much more tractable than the manifold structure of symmetry groups, while the Lévy stability grants a given statistical universality.

  3. Strategies Toward Automation of Overset Structured Surface Grid Generation

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2017-01-01

    An outline of a strategy for automation of overset structured surface grid generation on complex geometries is described. The starting point of the process consists of an unstructured surface triangulation representation of the geometry derived from a native CAD, STEP, or IGES definition, and a set of discretized surface curves that captures all geometric features of interest. The procedure for surface grid generation is decomposed into an algebraic meshing step, a hyperbolic meshing step, and a gap-filling step. This paper will focus primarily on the high-level plan with details on the algebraic step. The algorithmic procedure for the algebraic step involves analyzing the topology of the network of surface curves, distributing grid points appropriately on these curves, identifying domains bounded by four curves that can be meshed algebraically, concatenating the resulting grids into fewer patches, and extending appropriate boundaries of the concatenated grids to provide proper overlap. Results are presented for grids created on various aerospace vehicle components.
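    The algebraic meshing step for a four-sided domain is classically performed with transfinite interpolation (TFI); the sketch below is a generic two-dimensional TFI on an assumed curved patch, not the production procedure described in the paper.

```python
import numpy as np

def transfinite_interpolation(bottom, top, left, right):
    """Algebraic (TFI) surface grid for a domain bounded by four curves.
    bottom/top are (n, 2) point arrays, left/right are (m, 2) point arrays,
    and the four corner points must match."""
    n, m = bottom.shape[0], left.shape[0]
    u = np.linspace(0.0, 1.0, n)[:, None, None]   # parameter along bottom/top
    v = np.linspace(0.0, 1.0, m)[None, :, None]   # parameter along left/right
    grid = ((1 - v) * bottom[:, None, :] + v * top[:, None, :]
            + (1 - u) * left[None, :, :] + u * right[None, :, :]
            - (1 - u) * (1 - v) * bottom[0] - u * (1 - v) * bottom[-1]
            - (1 - u) * v * top[0] - u * v * top[-1])
    return grid   # shape (n, m, 2)

# Example: a gently curved quadrilateral patch
s = np.linspace(0.0, 1.0, 21)
bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
top    = np.stack([s, 1.0 + 0.1 * np.sin(np.pi * s)], axis=1)
left   = np.stack([np.zeros(11), np.linspace(0.0, 1.0, 11)], axis=1)
right  = np.stack([np.ones(11), np.linspace(0.0, 1.0, 11)], axis=1)
mesh = transfinite_interpolation(bottom, top, left, right)
```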

  4. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

    Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana Fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.

  5. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
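    A synthetic series of the kind this model assumes (a band-limited deterministic component plus a fractional Brownian motion component) can be generated as in the sketch below. The Cholesky-based fBm generator and all parameter values are illustrative assumptions; the Bayesian wavelet shrinkage estimator of the paper is not reproduced here.

```python
import numpy as np

def fbm(n, hurst, rng):
    """Fractional Brownian motion via Cholesky factorization of the
    fractional Gaussian noise covariance (adequate for small n)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]      # covariance at lag |i - j|
    noise = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.cumsum(noise)                           # integrate fGn to get fBm

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
stochastic = fbm(n, hurst=0.8, rng=rng)               # long-memory fractal part
deterministic = 2.0 * np.sin(2 * np.pi * t / 256.0)   # band-limited part
series = deterministic + stochastic
print(series[:5])
```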

  6. Modeling stochasticity and robustness in gene regulatory networks.

    PubMed

    Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis

    2009-06-15

    Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
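    To make the contrast concrete, the sketch below injects SIN-style noise (random state flips with a fixed probability) into a toy three-gene Boolean network. The network rules, flip probability, and initial state are hypothetical; the SIF model, which perturbs the update functions themselves, is not implemented here.

```python
import numpy as np

# Toy three-gene Boolean network (hypothetical update rules, not from the paper).
def update(state):
    a, b, c = state
    return np.array([a and not c,      # gene A is repressed by C
                     a or b,           # gene B is activated by A or sustains itself
                     b and not a],     # gene C is activated by B, repressed by A
                    dtype=bool)

def simulate_sin(state, steps, flip_prob, rng):
    """SIN-style stochasticity: after each synchronous update, every node
    is flipped independently with probability flip_prob."""
    trajectory = [state.copy()]
    for _ in range(steps):
        state = update(state)
        flips = rng.random(state.size) < flip_prob
        state = np.logical_xor(state, flips)
        trajectory.append(state.copy())
    return np.array(trajectory)

rng = np.random.default_rng(2)
traj = simulate_sin(np.array([True, False, False]), steps=20, flip_prob=0.05, rng=rng)
print(traj.astype(int))
```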

  7. On the Derivation of the Schroedinger Equation from Stochastic Mechanics.

    NASA Astrophysics Data System (ADS)

    Wallstrom, Timothy Clarke

    The thesis is divided into four largely independent chapters. The first three chapters treat mathematical problems in the theory of stochastic mechanics. The fourth chapter deals with stochastic mechanics as a physical theory and shows that the Schrodinger equation cannot be derived from existing formulations of stochastic mechanics, as had previously been believed. Since the drift coefficients of stochastic mechanical diffusions are undefined on the nodes, or zeros of the density, an important problem has been to show that the sample paths stay away from the nodes. In Chapter 1, it is shown that for a smooth wavefunction, the closest approach to the nodes can be bounded solely in terms of the time-integrated energy. The ergodic properties of stochastic mechanical diffusions are greatly complicated by the tendency of the particles to avoid the nodes. In Chapter 2, it is shown that a sufficient condition for a stationary process to be ergodic is that there exist positive t and c such that for all x and y, p^t(x, y) > c p(y), and this result is applied to show that the set of spin-1/2 diffusions is uniformly ergodic. In stochastic mechanics, the Bopp-Haag-Dankel diffusions on R^3 × SO(3) are used to represent particles with spin. Nelson has conjectured that in the limit as the particle's moment of inertia I goes to zero, the projections of the Bopp-Haag-Dankel diffusions onto R^3 converge to a Markovian limit process. This conjecture is proved for the spin-1/2 case in Chapter 3, and the limit process is identified as the diffusion naturally associated with the solution to the regular Pauli equation. In Chapter 4 it is shown that the general solution of the stochastic Newton equation does not correspond to a solution of the Schrodinger equation, and that there are solutions to the Schrodinger equation which do not satisfy the Guerra-Morato Lagrangian variational principle. These observations are shown to apply equally to other existing formulations of stochastic mechanics, and it is argued that these difficulties represent fundamental inadequacies in the physical foundation of stochastic mechanics.

  8. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison are the impact of the two approaches on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor best fits within them.

  9. Data-Driven Process Discovery: A Discrete Time Algebra for Relational Signal Analysis

    DTIC Science & Technology

    1996-12-01

    would also like to thank Dr. Mark Oxley for his assistance in developing this abstract algebra and the mathematical notation found herein. Lastly, I... Mathematical Result.. 4-13 4.4. Demonstration of Coefficient Signature Addition ........................ 4-14 4.5. Multivariate Relational Discovery...spaces with the recognition of cues in a specific space" [21]. Up to now, most of the Artificial Intelligence (AI) 'discovery' work has emphasized one

  10. Electrokinetics Models for Micro and Nano Fluidic Impedance Sensors

    DTIC Science & Technology

    2010-11-01

    primitive Differential-Algebraic Equations (DAEs), used to process and interpret the experimentally measured electrical impedance data (Sun and Morgan...field, and species respectively. A second-order scheme was used to calculate the ionic species distribution. The linearized algebraic equations were...is governed by the Poisson equation ε₀ε_r ∇²φ + F Σ_i z_i c_i = 0, where ε0 and εr are, respectively, the electrical permittivity in the vacuum

  11. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener process and Poisson process). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
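    A minimal simulation of the kind of jump-diffusion dynamics referred to above (a Wiener component plus a compound Poisson jump component) might look like the sketch below; the drift, volatility, and jump parameters are hypothetical, and the HJB solution of the paper is not reproduced.

```python
import numpy as np

def jump_diffusion_path(s0, mu, sigma, jump_rate, jump_mean, jump_std, T, n, rng):
    """Euler simulation of a jump-diffusion with log-normal jump sizes:
    a hypothetical stand-in for the exchange-rate process in the abstract."""
    dt = T / n
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))                        # Wiener increment
        n_jumps = rng.poisson(jump_rate * dt)                    # Poisson jump count
        jump_factor = np.exp(rng.normal(jump_mean, jump_std, n_jumps)).prod() if n_jumps else 1.0
        s[i + 1] = s[i] * np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dw) * jump_factor
    return s

rng = np.random.default_rng(3)
path = jump_diffusion_path(s0=1.0, mu=0.02, sigma=0.15, jump_rate=2.0,
                           jump_mean=-0.01, jump_std=0.05, T=1.0, n=252, rng=rng)
print(path[-1])
```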

  12. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor, and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files, along with a control statement file and a satellite identification and mass file, are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
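    The two stochastic models mentioned above are easy to sketch in discrete time: a first-order Gauss-Markov (exponentially correlated) process and a random walk. The code below is a generic illustration with hypothetical step sizes, correlation time, and noise levels, not the GEODYNII filter/smoother itself.

```python
import numpy as np

def gauss_markov(n, dt, tau, sigma, rng):
    """First-order Gauss-Markov process: exponentially correlated noise with
    steady-state standard deviation sigma and correlation time tau."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi ** 2)     # driving-noise std that keeps the variance constant
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + q * rng.standard_normal()
    return x

def random_walk(n, dt, q, rng):
    """Random-walk model, e.g. for a slowly drifting correction term."""
    return np.cumsum(q * np.sqrt(dt) * rng.standard_normal(n))

rng = np.random.default_rng(4)
srp_scale = gauss_markov(n=2880, dt=30.0, tau=3600.0, sigma=0.05, rng=rng)  # hypothetical values
tropo = random_walk(n=2880, dt=30.0, q=1e-4, rng=rng)
print(srp_scale[:3], tropo[:3])
```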

  13. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10^4-10^6 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  14. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks

    PubMed Central

    Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.

    2015-01-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406

  15. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.

    PubMed

    Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M

    2015-09-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.

  16. Stochastic flow shop scheduling of overlapping jobs on tandem machines in application to optimizing the US Army's deliberate nuclear, biological, and chemical decontamination process, (final report). Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, V.

    1991-05-01

    The U.S. Army's detailed equipment decontamination process is a stochastic flow shop which has N independent non-identical jobs (vehicles) which have overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling, the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system.... Decontamination, Stochastic flow shop, Scheduling, Stochastic scheduling, Minimization of the makespan, SEPT-LEPT Sequences, Flow shop line balancing, ANBACIS.
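    As a much-simplified illustration of comparing expected-processing-time orderings, the sketch below estimates the expected makespan of SEPT and LEPT sequences for a toy two-station flow shop with exponential processing times; the job data are hypothetical and the Gap Comparison Heuristic is not reproduced.

```python
import numpy as np

def flow_shop_makespan(order, means, rng):
    """Makespan of one realization of a two-station flow shop with exponentially
    distributed processing times (a much-simplified stand-in for the decon line)."""
    t1 = t2 = 0.0
    for j in order:
        p1 = rng.exponential(means[j, 0])
        p2 = rng.exponential(means[j, 1])
        t1 += p1                      # time the job leaves station 1
        t2 = max(t2, t1) + p2         # station 2 starts when both job and station are free
    return t2

rng = np.random.default_rng(5)
means = rng.uniform(1.0, 5.0, size=(8, 2))          # hypothetical expected processing times
sept = np.argsort(means.sum(axis=1))                # shortest expected processing time first
lept = sept[::-1]                                   # longest expected processing time first

for name, order in [("SEPT", sept), ("LEPT", lept)]:
    est = np.mean([flow_shop_makespan(order, means, rng) for _ in range(5000)])
    print(name, "expected makespan ~", round(est, 2))
```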

  17. Unified picture of strong-coupling stochastic thermodynamics and time reversals

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-04-01

    Strong-coupling statistical thermodynamics is formulated as the Hamiltonian dynamics of an observed system interacting with another unobserved system (a bath). It is shown that the entropy production functional of stochastic thermodynamics, defined as the log ratio of forward and backward system path probabilities, is in a one-to-one relation with the log ratios of the joint initial conditions of the system and the bath. A version of strong-coupling statistical thermodynamics where the system-bath interaction vanishes at the beginning and at the end of a process is, as is also weak-coupling stochastic thermodynamics, related to the bath initially in equilibrium by itself. The heat is then the change of bath energy over the process, and it is discussed when this heat is a functional of the system history alone. The version of strong-coupling statistical thermodynamics introduced by Seifert and Jarzynski is related to the bath initially in conditional equilibrium with respect to the system. This leads to heat as another functional of the system history which needs to be determined by thermodynamic integration. The log ratio of forward and backward system path probabilities in a stochastic process is finally related to log ratios of the initial conditions of a combined system and bath. It is shown that the entropy production formulas of stochastic processes under a general class of time reversals are given by the differences of bath energies in a larger underlying Hamiltonian system. The paper highlights the centrality of time reversal in stochastic thermodynamics, also in the case of strong coupling.
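    In the notation commonly used in stochastic thermodynamics (our notation, not necessarily the paper's), the entropy production functional referred to above is the log ratio of forward and backward path probabilities:

```latex
\[
  \Delta S_{\mathrm{tot}}[x(\cdot)]
  \;=\;
  \ln \frac{\mathcal{P}\left[x(\cdot)\right]}
           {\tilde{\mathcal{P}}\left[\tilde{x}(\cdot)\right]},
\]
% where $x(\cdot)$ is a system trajectory, $\tilde{x}(\cdot)$ its time reversal, and
% $\mathcal{P}$, $\tilde{\mathcal{P}}$ are the forward and backward path probabilities;
% the paper relates this ratio to the log ratio of the joint initial conditions of
% system and bath.
```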

  18. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner’s generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  19. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.

  20. Stochastic reaction-diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
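    The effect of crowding on diffusion can be illustrated with a much simpler discrete-time stand-in for the lattice-based algorithms discussed above: random walkers on a periodic cubic lattice with a fraction of sites blocked, tracking the mean squared displacement. All parameters below are hypothetical.

```python
import numpy as np

def crowded_walk_msd(n_mol, n_steps, grid, phi_obstacle, rng):
    """Random walks of tracer molecules on a periodic cubic lattice in which a
    fraction phi_obstacle of sites is blocked by immobile crowders; returns the
    mean squared displacement after each step."""
    blocked = rng.random((grid, grid, grid)) < phi_obstacle
    pos = rng.integers(0, grid, size=(n_mol, 3))
    pos = pos[~blocked[pos[:, 0], pos[:, 1], pos[:, 2]]]        # keep walkers on free sites
    displacement = np.zeros_like(pos)                           # unwrapped displacement
    steps = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])
    msd = np.empty(n_steps)
    for t in range(n_steps):
        trial = pos + steps[rng.integers(0, 6, size=len(pos))]  # propose a hop
        wrapped = trial % grid
        free = ~blocked[wrapped[:, 0], wrapped[:, 1], wrapped[:, 2]]
        displacement[free] += trial[free] - pos[free]           # accept only moves onto free sites
        pos[free] = wrapped[free]
        msd[t] = np.mean(np.sum(displacement ** 2, axis=1))
    return msd

rng = np.random.default_rng(6)
print(crowded_walk_msd(n_mol=500, n_steps=200, grid=32, phi_obstacle=0.3, rng=rng)[-1])
```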

  1. Improved Linear Algebra Methods for Redshift Computation from Limited Spectrum Data - II

    NASA Technical Reports Server (NTRS)

    Foster, Leslie; Waagen, Alex; Aijaz, Nabella; Hurley, Michael; Luis, Apolo; Rinsky, Joel; Satyavolu, Chandrika; Gazis, Paul; Srivastava, Ashok; Way, Michael

    2008-01-01

    Given photometric broadband measurements of a galaxy, Gaussian processes may be used with a training set to solve the regression problem of approximating the redshift of this galaxy. However, in practice solving the traditional Gaussian processes equation is too slow and requires too much memory. We employed several methods to avoid this difficulty using algebraic manipulation and low-rank approximation, and were able to quickly approximate the redshifts in our testing data within 17 percent of the known true values using limited computational resources. The accuracy of one method, the V Formulation, is comparable to the accuracy of the best methods currently used for this problem.
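    A generic low-rank Gaussian process regression along these lines is the subset-of-regressors (Nystrom-style) approximation sketched below; it is not the report's V Formulation, and the toy one-dimensional data stand in for the photometric inputs and redshifts.

```python
import numpy as np

def rbf(X1, X2, length=1.0):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def low_rank_gp_predict(Xtrain, y, Xtest, Xm, noise=0.1, length=1.0):
    """Subset-of-regressors low-rank GP regression.  Xm is a small set of m
    inducing inputs; the O(n m^2) cost replaces the O(n^3) full GP solve."""
    Kmn = rbf(Xm, Xtrain, length)
    Kmm = rbf(Xm, Xm, length)
    A = noise ** 2 * Kmm + Kmn @ Kmn.T
    w = np.linalg.solve(A, Kmn @ y)
    return rbf(Xtest, Xm, length) @ w

# Hypothetical 1-D toy data standing in for photometric inputs and redshifts.
rng = np.random.default_rng(7)
Xtrain = rng.uniform(0, 10, size=(2000, 1))
y = np.sin(Xtrain[:, 0]) + 0.1 * rng.standard_normal(2000)
Xm = Xtrain[rng.choice(2000, size=50, replace=False)]
Xtest = np.linspace(0, 10, 5)[:, None]
print(low_rank_gp_predict(Xtrain, y, Xtest, Xm))
```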

  2. Performance assessment in algebra learning process

    NASA Astrophysics Data System (ADS)

    Lestariani, Ida; Sujadi, Imam; Pramudya, Ikrar

    2017-12-01

    The purpose of this research is to describe the implementation of performance assessment in the algebra learning process. The subject of this research is a class X mathematics educator at SMAN 1 Ngawi. This is a descriptive qualitative study. Data were collected through observation, interviews, and documentation, and analyzed by data reduction, data presentation, and conclusion drawing. The results indicate that the steps taken by the educator in applying performance assessment are 1) preparing individual worksheets and group worksheets, 2) preparing assessment rubrics for the individual and group worksheets, and 3) applying the performance assessment rubric to learners' performance results on individual or group tasks.

  3. Structuring students’ analogical reasoning in solving algebra problem

    NASA Astrophysics Data System (ADS)

    Lailiyah, S.; Nusantara, T.; Sa'dijah, C.; Irawan, E. B.; Kusaeri; Asyhar, A. H.

    2018-01-01

    The average mathematics achievement of Indonesian students is ranked 38th out of 42 countries according to the Trends in International Mathematics and Science Study (TIMSS) and 64th out of 65 countries according to the Program for International Student Assessment (PISA) survey. The low mathematics skill of Indonesian students has become an important reason to research reasoning and algebra in mathematics more deeply. Analogical reasoning is a very important component in mathematics because it is the key to creativity and can make the learning process in the classroom effective. A major part of analogical reasoning is structuring, which includes the processes of inference and decision-making and involves a base domain and a target domain. Methodologically, the subjects of this research were 42 students from class XII. The data were derived from the results of think-aloud protocols, the transcribed interviews, and the videos taken while the subjects worked on the instruments and during the interviews. The collected data were analyzed using qualitative techniques. The result of this study describes the structuring characteristics of students' analogical reasoning in solving algebra problems across all the research subjects.

  4. Valuation of Capabilities and System Architecture Options to Meet Affordability Requirement

    DTIC Science & Technology

    2014-04-30

    is an extension of the historic volatility and trend of the stock using Brownian motion. In finance, the Black-Scholes equation is used to value...the underlying asset whose value is modeled as a stochastic process. In finance, the underlying asset is a tradeable stock and the stochastic process
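    The stochastic process referred to in this excerpt, geometric Brownian motion, is simple to simulate; the sketch below uses hypothetical parameters and a crude Monte Carlo estimate of a European call value in the Black-Scholes setting.

```python
import numpy as np

def gbm_paths(s0, mu, sigma, T, n_steps, n_paths, rng):
    """Geometric Brownian motion, S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t),
    the stochastic process underlying the Black-Scholes valuation mentioned above."""
    dt = T / n_steps
    increments = ((mu - 0.5 * sigma ** 2) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)))
    return s0 * np.exp(np.cumsum(increments, axis=1))

rng = np.random.default_rng(8)
paths = gbm_paths(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=10000, rng=rng)
# Monte Carlo estimate of a European call (strike 105), taking mu as the risk-free rate here.
payoff = np.maximum(paths[:, -1] - 105.0, 0.0)
print("call estimate:", np.exp(-0.05 * 1.0) * payoff.mean())
```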

  5. On a Result for Finite Markov Chains

    ERIC Educational Resources Information Center

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
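    One way to make the property concrete: if state j is accessible from state i at all in a chain with M states, it is reachable in at most M - 1 steps, which can be checked with Boolean powers of the one-step accessibility matrix. The sketch below uses a hypothetical four-state chain.

```python
import numpy as np

def reachable_within(P, max_steps):
    """R[i, j] is True if state j can be reached from state i in at most
    max_steps transitions of the Markov chain with transition matrix P."""
    A = P > 0                              # one-step accessibility
    R = np.eye(len(P), dtype=bool)         # zero steps: each state reaches itself
    for _ in range(max_steps):
        # extend every known path by one more step (Boolean matrix product)
        R = R | ((R.astype(int) @ A.astype(int)) > 0)
    return R

# Hypothetical 4-state chain: 0 -> 1 -> 2 -> 3, with state 3 absorbing.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
M = len(P)
# If j is accessible from i at all, it is reachable within M - 1 steps.
print(reachable_within(P, M - 1).astype(int))
```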

  6. Development of abstract mathematical reasoning: the case of algebra

    PubMed Central

    Susac, Ana; Bubic, Andreja; Vrbanc, Andrija; Planinic, Maja

    2014-01-01

    Algebra typically represents the students’ first encounter with abstract mathematical reasoning and it therefore causes significant difficulties for students who still reason concretely. The aim of the present study was to investigate the developmental trajectory of the students’ ability to solve simple algebraic equations. 311 participants between the ages of 13 and 17 were given a computerized test of equation rearrangement. Equations consisted of an unknown and two other elements (numbers or letters), and the operations of multiplication/division. The obtained results showed that younger participants are less accurate and slower in solving equations with letters (symbols) than those with numbers. This difference disappeared for older participants (16–17 years), suggesting that they had reached an abstract reasoning level, at least for this simple task. A corresponding conclusion arises from the analysis of their strategies which suggests that younger participants mostly used concrete strategies such as inserting numbers, while older participants typically used more abstract, rule-based strategies. These results indicate that the development of algebraic thinking is a process which unfolds over a long period of time. In agreement with previous research, we can conclude that, on average, children at the age of 15–16 transition from using concrete to abstract strategies while solving the algebra problems addressed within the present study. A better understanding of the timing and speed of students’ transition from concrete arithmetic reasoning to abstract algebraic reasoning might help in designing better curricula and teaching materials that would ease that transition. PMID:25228874

  7. Development of abstract mathematical reasoning: the case of algebra.

    PubMed

    Susac, Ana; Bubic, Andreja; Vrbanc, Andrija; Planinic, Maja

    2014-01-01

    Algebra typically represents the students' first encounter with abstract mathematical reasoning and it therefore causes significant difficulties for students who still reason concretely. The aim of the present study was to investigate the developmental trajectory of the students' ability to solve simple algebraic equations. 311 participants between the ages of 13 and 17 were given a computerized test of equation rearrangement. Equations consisted of an unknown and two other elements (numbers or letters), and the operations of multiplication/division. The obtained results showed that younger participants are less accurate and slower in solving equations with letters (symbols) than those with numbers. This difference disappeared for older participants (16-17 years), suggesting that they had reached an abstract reasoning level, at least for this simple task. A corresponding conclusion arises from the analysis of their strategies which suggests that younger participants mostly used concrete strategies such as inserting numbers, while older participants typically used more abstract, rule-based strategies. These results indicate that the development of algebraic thinking is a process which unfolds over a long period of time. In agreement with previous research, we can conclude that, on average, children at the age of 15-16 transition from using concrete to abstract strategies while solving the algebra problems addressed within the present study. A better understanding of the timing and speed of students' transition from concrete arithmetic reasoning to abstract algebraic reasoning might help in designing better curricula and teaching materials that would ease that transition.

  8. Stochastic resonance effects reveal the neural mechanisms of transcranial magnetic stimulation

    PubMed Central

    Schwarzkopf, Dietrich Samuel; Silvanto, Juha; Rees, Geraint

    2011-01-01

    Transcranial magnetic stimulation (TMS) is a popular method for studying causal relationships between neural activity and behavior. However, its mode of action remains controversial, and so far there is no framework to explain its wide range of facilitatory and inhibitory behavioral effects. While some theoretical accounts suggest that TMS suppresses neuronal processing, other competing accounts propose that the effects of TMS result from the addition of noise to neuronal processing. Here we exploited the stochastic resonance phenomenon to distinguish these theoretical accounts and determine how TMS affects neuronal processing. Specifically, we showed that online TMS can induce stochastic resonance in the human brain. At low intensity, TMS facilitated the detection of weak motion signals, but with higher TMS intensities and stronger motion signals we found only impairment in detection. These findings suggest that TMS acts by adding noise to neuronal processing, at least in an online TMS protocol. Importantly, such stochastic resonance effects may also explain why TMS parameters that under normal circumstances impair behavior can induce behavioral facilitation when the stimulated area is in an adapted or suppressed state. PMID:21368025
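    The stochastic resonance phenomenon exploited here can be demonstrated with a toy threshold detector: a subthreshold periodic signal is best recovered at an intermediate noise level. The sketch below is a generic demonstration with hypothetical signal, threshold, and noise values, not a model of the TMS experiments.

```python
import numpy as np

def detection_score(signal_amp, noise_sd, threshold=1.0, n=20000, rng=None):
    """Toy stochastic-resonance demo: a weak periodic signal plus Gaussian noise
    is passed through a hard threshold, and the binary output is correlated with
    the clean signal.  For a subthreshold signal the correlation peaks at an
    intermediate noise level."""
    if rng is None:
        rng = np.random.default_rng(9)
    t = np.arange(n)
    signal = signal_amp * np.sin(2 * np.pi * t / 100.0)   # subthreshold: amp < threshold
    output = (signal + noise_sd * rng.standard_normal(n)) > threshold
    return np.corrcoef(signal, output.astype(float))[0, 1]

for noise_sd in [0.2, 0.4, 0.8, 1.6, 3.2]:
    print(noise_sd, round(detection_score(signal_amp=0.5, noise_sd=noise_sd), 3))
```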

  9. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time-scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation, and as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  10. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time-scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation, and as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  11. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE PAGES

    Wang, Yong; Zhang, Guang J.

    2016-09-29

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to a significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to -47.16 W/m² in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  12. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yong; Zhang, Guang J.

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to a significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to -47.16 W/m² in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  13. Abstract numeric relations and the visual structure of algebra.

    PubMed

    Landy, David; Brookes, David; Smout, Ryan

    2014-09-01

    Formal algebras are among the most powerful and general mechanisms for expressing quantitative relational statements; yet, even university engineering students, who are relatively proficient with algebraic manipulation, struggle with and often fail to correctly deploy basic aspects of algebraic notation (Clement, 1982). In the cognitive tradition, it has often been assumed that skilled users of these formalisms treat situations in terms of semantic properties encoded in an abstract syntax that governs the use of notation without particular regard to the details of the physical structure of the equation itself (Anderson, 2005; Hegarty, Mayer, & Monk, 1995). We explore how the notational structure of verbal descriptions or algebraic equations (e.g., the spatial proximity of certain words or the visual alignment of numbers and symbols in an equation) plays a role in the process of interpreting or constructing symbolic equations. We propose in particular that construction processes involve an alignment of notational structures across representation systems, biasing reasoners toward the selection of formal notations that maintain the visuospatial structure of source representations. For example, in the statement "There are 5 elephants for every 3 rhinoceroses," the spatial proximity of 5 and elephants and 3 and rhinoceroses will bias reasoners to write the incorrect expression 5E = 3R, because that expression maintains the spatial relationships encoded in the source representation. In 3 experiments, participants constructed equations with given structure, based on story problems with a variety of phrasings. We demonstrate how the notational alignment approach accounts naturally for a variety of previously reported phenomena in equation construction and successfully predicts error patterns that are not accounted for by prior explanations, such as the left to right transcription heuristic.

  14. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time conditioning them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  15. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
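    A minimal multiplicative cascade in this spirit is sketched below; the binary splitting rule, weight distribution, and depth are illustrative assumptions rather than the paper's exact process.

```python
import numpy as np

def cascade_ranks(levels=12, rng=None):
    """Finite multiplicative cascade: total mass 1 is repeatedly split in two,
    each half receiving a random multiplicative weight; leaf masses are then
    ranked.  The finite depth gives a real-world-like finite-size cutoff."""
    if rng is None:
        rng = np.random.default_rng(10)
    masses = np.array([1.0])
    for _ in range(levels):
        w = rng.uniform(0.1, 0.9, size=masses.size)       # random split proportions
        masses = np.concatenate([masses * w, masses * (1.0 - w)])
    freq = np.sort(masses)[::-1]
    rank = np.arange(1, freq.size + 1)
    return rank, freq

rank, freq = cascade_ranks()
# On a log-log scale, freq vs. rank is typically concave, like many empirical laws.
print(np.log10(rank[[0, 10, 100, 1000]]), np.log10(freq[[0, 10, 100, 1000]]))
```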

  16. Resolving Phase Ambiguities in the Calibration of Redundant Interferometric Arrays: Implications for Array Design

    DTIC Science & Technology

    2016-03-04

    summary of the linear algebra involved. As we have seen, the RSC process begins with the interferometric phase measurement β, which due to wrapping will...mentary Divisors) in Section 2 and the following definition of the matrix determinant. This definition is given in many linear algebra texts (see...principle solve for a particular solution of this system by arbitrarily setting two object phases (whose spatial frequencies are not collinear) and one

  17. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium

    NASA Astrophysics Data System (ADS)

    Horowitz, Jordan M.

    2015-07-01

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
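    For a single birth-death reaction network, the chemical Langevin approximation mentioned above can be integrated with Euler-Maruyama as in the sketch below; the rate constants and step size are hypothetical.

```python
import numpy as np

def cle_birth_death(x0, k_birth, k_death, dt, n_steps, rng):
    """Chemical Langevin approximation of a birth-death process
    (production at rate k_birth, degradation at rate k_death * X):
    each reaction channel contributes drift a_j and noise sqrt(a_j) dW_j."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        a1 = k_birth                  # propensity of production
        a2 = k_death * max(x[i], 0)   # propensity of degradation (kept non-negative)
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x[i + 1] = x[i] + (a1 - a2) * dt + np.sqrt(a1) * dw1 - np.sqrt(a2) * dw2
    return x

rng = np.random.default_rng(11)
traj = cle_birth_death(x0=50.0, k_birth=100.0, k_death=1.0, dt=0.001, n_steps=20000, rng=rng)
print("time-averaged copy number:", traj[5000:].mean())   # close to k_birth / k_death = 100
```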

  18. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium.

    PubMed

    Horowitz, Jordan M

    2015-07-28

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.

  19. Asymmetric and Stochastic Behavior in Magnetic Vortices Studied by Soft X-ray Microscopy

    NASA Astrophysics Data System (ADS)

    Im, Mi-Young

    Asymmetry and stochasticity in spin processes are not only long-standing fundamental issues but also highly relevant to technological applications of nanomagnetic structures to memory and storage nanodevices. Those nontrivial phenomena have been studied by direct imaging of spin structures in magnetic vortices utilizing magnetic transmission soft x-ray microscopy (BL6.1.2 at ALS). Magnetic vortices have attracted enormous scientific interests due to their fascinating spin structures consisting of circularity rotating clockwise (c = + 1) or counter-clockwise (c = -1) and polarity pointing either up (p = + 1) or down (p = -1). We observed a symmetry breaking in the formation process of vortex structures in circular permalloy (Ni80Fe20) disks. The generation rates of two different vortex groups with the signature of cp = + 1 and cp =-1 are completely asymmetric. The asymmetric nature was interpreted to be triggered by ``intrinsic'' Dzyaloshinskii-Moriya interaction (DMI) arising from the spin-orbit coupling due to the lack of inversion symmetry near the disk surface and ``extrinsic'' factors such as roughness and defects. We also investigated the stochastic behavior of vortex creation in the arrays of asymmetric disks. The stochasticity was found to be very sensitive to the geometry of disk arrays, particularly interdisk distance. The experimentally observed phenomenon couldn't be explained by thermal fluctuation effect, which has been considered as a main reason for the stochastic behavior in spin processes. We demonstrated for the first time that the ultrafast dynamics at the early stage of vortex creation, which has a character of classical chaos significantly affects the stochastic nature observed at the steady state in asymmetric disks. This work provided the new perspective of dynamics as a critical factor contributing to the stochasticity in spin processes and also the possibility for the control of the intrinsic stochastic nature by optimizing the design of asymmetric disk arrays. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, by Leading Foreign Research Institute Recruitment Program through the NRF.

  20. Stochastic scheduling on a repairable manufacturing system

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Jinhua

    1995-08-01

    In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates. These results generalize some previous ones.

  1. Conference on Stochastic Processes and Their Applications (12th) held at Ithaca, New York on 11-15 Jul 83,

    DTIC Science & Technology

    1983-07-15

    AD-A136 626 CONFERENCE ON STOCHASTIC PROCESSES AND THEIR APPLICATIONS (12TH) JULY 11-15 1983 ITHACA NEW YORK(U) CORNELL UNIV ITHACA NY 15 JUL 83...oscillator phase instability" 2:53 - 3:15 p.m. M.N. GOPALAN, Indian Institute of Technology, Bombay "Cost benefit analysis of systems subject to inspection...p.m. W. KLIEMANN, Univ. Bremen, Fed. Rep. Germany "Controllability of stochastic systems" 8:00 - 10:00 p.m. RECEPTION Johnson Art Museum

  2. Variational processes and stochastic versions of mechanics

    NASA Astrophysics Data System (ADS)

    Zambrini, J. C.

    1986-09-01

    The dynamical structure of any reasonable stochastic version of classical mechanics is investigated, including the version created by Nelson [E. Nelson, Quantum Fluctuations (Princeton U.P., Princeton, NJ, 1985); Phys. Rev. 150, 1079 (1966)] for the description of quantum phenomena. Two different theories result from this common structure. One of them is the imaginary time version of Nelson's theory, whose existence was unknown, and yields a radically new probabilistic interpretation of the heat equation. The existence and uniqueness of all the involved stochastic processes is shown under conditions suggested by the variational approach of Yasue [K. Yasue, J. Math. Phys. 22, 1010 (1981)].

  3. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  4. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

    The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of the numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and numerical simulations showed a good agreement between the two methods over a wide range of log k variability with three different combinations of input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.

  5. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    PubMed

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of bacterial communities. Bacterial communities in five soil horizons (two originated from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw.

  6. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China

    PubMed Central

    Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C.; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major topics in ecology, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To better understand the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, indicative of stochastic processes. However, we found a stronger effect of deterministic processes, related to soil pH, conductivity, and organic carbon content, in structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw. PMID:26699734

  7. Simulations of Technology-Induced and Crisis-Led Stochastic and Chaotic Fluctuations in Higher Education Processes: A Model and a Case Study for Performance and Expected Employment

    ERIC Educational Resources Information Center

    Ahmet, Kara

    2015-01-01

    This paper presents a simple model of the provision of higher educational services that considers and exemplifies nonlinear, stochastic, and potentially chaotic processes. I use the methods of system dynamics to simulate these processes in the context of a particular sociologically interesting case, namely that of the Turkish higher education…

  8. General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems

    DTIC Science & Technology

    1988-01-01

    P. J. McLane, "Optimal Stochastic Control of Linear Systems with State- and Control-Dependent Disturbances," IEEE Trans. Auto. Contr., Vol. 16, No...Vol. 45, No. 1, pp. 359-362, 1987 [9] R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," IEEE Trans. Syst..." J. of Math. Anal. Appl., Vol. 47, pp. 156-161, 1974 [14] E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," IEEE Trans

  9. Effective stochastic generator with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Khamehchi, Masoumeh; Jafarpour, Farhad H.

    2017-11-01

    It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.

  10. Study on Stationarity of Random Load Spectrum Based on the Special Road

    NASA Astrophysics Data System (ADS)

    Yan, Huawen; Zhang, Weigong; Wang, Dong

    2017-09-01

    Among the methods for assessing the quality of special roads is one that uses a wheel force sensor; the essence of this method is to collect the load spectrum of the vehicle in order to reflect the quality of the road. According to the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and ranges of application of different random processes differ considerably, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for the follow-up modeling and feature extraction of the special road.

  11. Virasoro algebra in the KN algebra; Bosonic string with fermionic ghosts on Riemann surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koibuchi, H.

    1991-10-10

    In this paper the bosonic string model with fermionic ghosts is considered in the framework of the KN algebra. The authors focus on representations of the KN algebra and a Clifford algebra of the ghosts. They show that a Virasoro-like algebra is obtained from the KN algebra when the KN algebra has a certain antilinear anti-involution, and that it is isomorphic to the usual Virasoro algebra. They also show that there is an expected relation between the central charge of this Virasoro-like algebra and the anomaly of the combined system.

  12. A Stochastic Detection and Retrieval Model for the Study of Metacognition

    ERIC Educational Resources Information Center

    Jang, Yoonhee; Wallsten, Thomas S.; Huber, David E.

    2012-01-01

    We present a signal detection-like model termed the stochastic detection and retrieval model (SDRM) for use in studying metacognition. Focusing on paradigms that relate retrieval (e.g., recall or recognition) and confidence judgments, the SDRM measures (1) variance in the retrieval process, (2) variance in the confidence process, (3) the extent to…

  13. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  14. Stochastic Multiscale Analysis and Design of Engine Disks

    DTIC Science & Technology

    2010-07-28

    shown recently to fail when used with data-driven non-linear stochastic input models (KPCA, IsoMap, etc.). Need for scalable exascale computing algorithms. (Materials Process Design and Control Laboratory, Cornell University)

  15. Transcriptional dynamics with time-dependent reaction rates

    NASA Astrophysics Data System (ADS)

    Nandi, Shubhendu; Ghosh, Anandamohan

    2015-02-01

    Transcription is the first step in the process of gene regulation that controls the cell's response to varying environmental conditions. Transcription is a stochastic process, involving synthesis and degradation of mRNAs, that can be modeled as a birth-death process. We consider a generic stochastic model in which the fluctuating environment is encoded in the time-dependent reaction rates. We obtain an exact analytical expression for the mRNA probability distribution and are able to analyze the response for arbitrary time-dependent protocols. Our analytical results and stochastic simulations confirm that the transcriptional machinery primarily acts as a low-pass filter. We also show that, depending on the system parameters, the mRNA levels in a cell population can show synchronous or asynchronous fluctuations and can deviate from Poisson statistics.
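
    A short sketch of the low-pass-filter behavior mentioned above, assuming the standard birth-death description in which the mean mRNA copy number m(t) obeys dm/dt = k(t) - gamma*m for a time-dependent synthesis rate k(t); the rate values below are illustrative, not taken from the paper.

```python
import numpy as np

# Mean mRNA copy number for a birth-death process with time-dependent
# synthesis rate k(t) and constant degradation rate gamma obeys
#   dm/dt = k(t) - gamma * m
# so the response to an oscillating k(t) is attenuated at frequencies
# above gamma, i.e. the transcription step behaves as a low-pass filter.
gamma = 1.0
k0, dk = 10.0, 5.0                 # baseline and modulation amplitude (assumed)

def response_amplitude(omega, t_end=50.0, dt=1e-3):
    t = np.arange(0.0, t_end, dt)
    m = np.empty_like(t)
    m[0] = k0 / gamma
    for i in range(1, t.size):
        k_t = k0 + dk * np.sin(omega * t[i - 1])
        m[i] = m[i - 1] + dt * (k_t - gamma * m[i - 1])
    tail = m[t > t_end / 2]        # discard the transient
    return 0.5 * (tail.max() - tail.min())

for omega in (0.1, 1.0, 10.0):
    print(f"omega = {omega:5.1f}  amplitude = {response_amplitude(omega):.3f}")
# Expected: amplitude ~ dk / sqrt(gamma**2 + omega**2), shrinking as omega grows.
```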

  16. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis

    NASA Astrophysics Data System (ADS)

    Jia, Ningning; Y Lam, Edmund

    2010-04-01

    Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
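
    A toy sketch of the idea of training a mask by stochastic gradient descent over sampled focus conditions; the forward model here is just a Gaussian blur of random width standing in for defocus, so the pattern, blur parameters, and learning rate are illustrative assumptions rather than the paper's lithography model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Target pattern: a bright square on a dark field (toy example)
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0

mask = target.copy()            # initialize the mask at the target
sigma0, sigma_std = 2.0, 0.5    # nominal blur and focus-variation spread (assumed)
lr, n_iter = 0.5, 300

for _ in range(n_iter):
    # Sample a focus condition; the blur width plays the role of defocus.
    sigma = max(0.1, rng.normal(sigma0, sigma_std))
    image = gaussian_filter(mask, sigma)        # toy aerial-image model
    residual = image - target
    # For a symmetric blur kernel the operator is (up to boundary effects)
    # self-adjoint, so the gradient of 0.5*||blur(mask)-target||^2 is
    # approximately blur(residual).
    grad = gaussian_filter(residual, sigma)
    mask = np.clip(mask - lr * grad, 0.0, 1.0)  # keep transmission in [0, 1]

err = np.mean((gaussian_filter(mask, sigma0) - target) ** 2)
print("mean squared pattern error at nominal focus:", err)
```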

  17. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless filter such as the unscented Kalman filter or the divided difference filter, the filtering of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.

  18. Smooth function approximation using neural networks.

    PubMed

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
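
    A minimal sketch of the linear-system view of training described above: with the input-side weights fixed, the output weights enter the input-output matching equations linearly and can be obtained by a single linear least-squares solve. This is only the generic idea, not the paper's four algorithms; the data and network size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Batch of smooth training data: y = sin(x1) + 0.5*cos(x2)
X = rng.uniform(-2, 2, size=(40, 2))
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1])

# One hidden layer of sigmoids with fixed (here random) input-side weights;
# the output weights then appear linearly in the matching conditions.
H = 60                                          # hidden units (>= number of samples)
W_in = rng.standard_normal((2, H))
b_in = rng.standard_normal(H)
S = 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))    # hidden-layer outputs

w_out, *_ = np.linalg.lstsq(S, y, rcond=None)   # linear solve for output weights

y_hat = S @ w_out
print("max training error:", np.abs(y_hat - y).max())
```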

  19. Mathematical Modeling for Inherited Diseases.

    PubMed

    Anis, Saima; Khan, Madad; Khan, Saqib

    2017-01-01

    We introduced a new nonassociative algebra, namely, left almost algebra, and discussed some of its genetic properties. We discussed the relation of this algebra with flexible algebra, Jordan algebra, and generalized Jordan algebra.

  20. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less that the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621
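
    A minimal sketch of a multiplicative stochastic cascade of the general kind described above: a total mass is repeatedly split by random fractions and the resulting leaf masses are rank-ordered, which tends to produce a concave rank-frequency curve on a log-log scale. The number of levels and the split distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def cascade(levels=12, total=1.0e6):
    """Repeatedly split mass into two parts with a random fraction."""
    masses = np.array([total])
    for _ in range(levels):
        u = rng.uniform(0.1, 0.9, size=masses.size)   # random split fractions
        masses = np.concatenate([masses * u, masses * (1.0 - u)])
    return np.sort(masses)[::-1]                      # rank-ordered frequencies

freqs = cascade()
for r in (1, 10, 100, 1000):
    print(f"rank {r:5d}  frequency {freqs[r - 1]:12.2f}")
# On a log-log plot the local slope steepens with rank, i.e. the curve is concave.
```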

  1. Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes

    NASA Astrophysics Data System (ADS)

    González Arenas, Zochil; Barci, Daniel G.

    2012-12-01

    Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.
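
    The prescription dependence discussed above can be illustrated with a short numerical experiment (this is not the paper's Grassmann/supersymmetric formalism): the same multiplicative-noise SDE is integrated with an Ito scheme (Euler-Maruyama) and a Stratonovich scheme (Heun), and the ensemble means differ in the predicted way.

```python
import numpy as np

rng = np.random.default_rng(4)

# dX = -theta*X dt + sigma*X dW in the Ito and Stratonovich prescriptions.
# The ensemble mean decays like exp(-theta*t) for Ito but like
# exp((-theta + sigma**2/2)*t) for Stratonovich.
theta, sigma = 1.0, 0.8
T, dt, n_paths = 2.0, 1e-3, 20000
n_steps = int(T / dt)

x_ito = np.ones(n_paths)
x_str = np.ones(n_paths)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    # Ito: Euler-Maruyama
    x_ito = x_ito + (-theta * x_ito) * dt + sigma * x_ito * dW
    # Stratonovich: Heun predictor-corrector with the same dW
    pred = x_str + (-theta * x_str) * dt + sigma * x_str * dW
    x_str = (x_str
             + 0.5 * (-theta * x_str - theta * pred) * dt
             + 0.5 * sigma * (x_str + pred) * dW)

print("Ito mean          :", x_ito.mean(), " theory:", np.exp(-theta * T))
print("Stratonovich mean :", x_str.mean(), " theory:",
      np.exp((-theta + 0.5 * sigma ** 2) * T))
```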

  2. Mathematical Modeling for Inherited Diseases

    PubMed Central

    Khan, Saqib

    2017-01-01

    We introduced a new nonassociative algebra, namely, left almost algebra, and discussed some of its genetic properties. We discussed the relation of this algebra with flexible algebra, Jordan algebra, and generalized Jordan algebra. PMID:28781606

  3. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor-based flight control system is presented. The theory is developed using a linear process model for the airplane dynamics, and the information distribution process is modeled as a variable-time-increment process in which, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable-time-increment Markov information update process, where the control effectors know only the past information update intervals and the Markov transition mechanism, is almost identical to that obtained with a known and uniform information update interval.

  4. Resolving phase ambiguities in the calibration of redundant interferometric arrays: implications for array design

    DTIC Science & Technology

    2015-11-30

    matrix determinant. This definition is given in many linear algebra texts (see e.g. Bretscher (2001)). Definition 3.1: Suppose we have an n-by-n...Processing, 2, 767 Blanchard P., Greenaway A., Anderton R., Appleby R., 1996, J. Opt. Soc. Am. A, 13, 1593 Bretscher O., 2001, Linear Algebra with...frequencies are not co-linear) and one piston phase. This particular solution will then differ from the true solution by a phase ramp in the Fourier

  5. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  6. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.

  7. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While the focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme 'Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the 'error' in the parametrised tendency that SPPT seeks to represent. The high-resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce the initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high-resolution model, we can measure the 'error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process-based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low-resolution forecast model.
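
    A minimal sketch of an SPPT-style multiplicative perturbation, in which a parametrised tendency is multiplied by (1 + r) with r a zero-mean random pattern with AR(1) correlation in time; the amplitude, decorrelation time, and time step below are illustrative assumptions, and the spatial correlation used in the operational scheme is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

n_steps, dt = 500, 900.0          # number of model time steps and step length (s)
tau, sigma_r = 6 * 3600.0, 0.5    # decorrelation time and amplitude (assumed)
phi = np.exp(-dt / tau)           # AR(1) coefficient
noise_std = sigma_r * np.sqrt(1.0 - phi ** 2)

tendency = 1.0e-5                 # illustrative constant physics tendency
r = 0.0
multiplier = np.empty(n_steps)
for k in range(n_steps):
    r = phi * r + noise_std * rng.standard_normal()
    r = np.clip(r, -0.99, 0.99)   # keep the multiplier positive, as done in practice
    multiplier[k] = 1.0 + r
    perturbed_tendency = multiplier[k] * tendency   # tendency actually applied

print("mean multiplier :", multiplier.mean())
print("std of multiplier:", multiplier.std())
```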

  8. Granger causality for state-space models

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Seth, Anil K.

    2015-04-01

    Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
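
    A sketch of the state-space route described above, under the usual assumptions of uncorrelated state and observation noise: the steady-state innovations covariances of the full and reduced observation processes are obtained from discrete algebraic Riccati equations (via scipy.linalg.solve_discrete_are), and a time-domain Granger causality is formed as the log-ratio of the corresponding innovation variances. The model matrices are arbitrary illustrative values, and this sketch follows the general approach rather than reproducing the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def innovations_cov(A, C, Q, R):
    """Steady-state innovations covariance of y_k = C x_k + v_k,
    x_{k+1} = A x_k + w_k, from the Kalman-filter Riccati equation
    (state and observation noises assumed uncorrelated)."""
    P = solve_discrete_are(A.T, C.T, Q, R)
    return C @ P @ C.T + R

rng = np.random.default_rng(6)
A = 0.5 * np.diag([0.9, 0.7, 0.5]) + 0.1 * rng.standard_normal((3, 3))  # stable
C = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
Q = 0.2 * np.eye(3)
R = 0.1 * np.eye(2)

# Full model: innovations covariance of the joint observation (x, y).
S_full = innovations_cov(A, C, Q, R)

# Reduced model: the x-component alone keeps the same state dynamics but
# only the first observation row; a second Riccati solve gives its
# innovations variance.
S_red = innovations_cov(A, C[:1, :], Q, R[:1, :1])

gc_y_to_x = np.log(S_red[0, 0] / S_full[0, 0])
print("Granger causality (y -> x):", gc_y_to_x)
```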

  9. A simple theory of motor protein kinetics and energetics. II.

    PubMed

    Qian, H

    2000-01-10

    A three-state stochastic model of motor protein [Qian, Biophys. Chem. 67 (1997) pp. 263-267] is further developed to illustrate the relationship between the external load on an individual motor protein in aqueous solution, at various ATP concentrations, and its steady-state velocity. A wide variety of dynamic motor behaviors is obtained from this simple model. For the particular case of free-load translocation being the most unfavorable step within the hydrolysis cycle, the load-velocity curve is quasi-linear, V/Vmax = (c^(F/Fmax) - c)/(1 - c), in contrast to the hyperbolic relationship proposed by A.V. Hill for macroscopic muscle. Significant deviation from the linearity is expected when the velocity is less than 10% of its maximal (free-load) value, a situation under which the processivity of the motor diminishes and experimental observations are less certain. We then investigate the dependence of the load-velocity curve on ATP (ADP) concentration. It is shown that the free-load Vmax exhibits Michaelis-Menten-like behavior, and the isometric Fmax increases linearly with ln([ATP]/[ADP]). However, the quasi-linear region is independent of the ATP concentration, yielding an apparently ATP-independent maximal force below the true isometric force. Finally, the heat production as a function of ATP concentration and external load is calculated. In simple terms and solved with elementary algebra, the present model provides an integrated picture of the biochemical kinetics and mechanical energetics of motor proteins.

  10. Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.

    PubMed

    Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A

    2007-12-01

    By recourse to appropriate information-theory quantifiers (normalized Shannon entropy and the Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear gaps in the quantifiers are found in the transition between the continuous processes and their associated noises.
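
    A minimal sketch of the Bandt-Pompe ingredient mentioned above: ordinal patterns of a short embedding are counted and the normalized Shannon entropy of their distribution is computed (the intensive statistical complexity measure is omitted). The embedding order, delay, and test signals are illustrative assumptions.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1     # ordinal pattern of the window
    probs = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    return float(-np.sum(probs * np.log(probs)) / log(factorial(order)))

rng = np.random.default_rng(14)
white = rng.standard_normal(20000)
brownian = np.cumsum(white)            # a simple self-similar example (H = 0.5)
print("white noise   :", permutation_entropy(white))     # close to 1
print("Brownian path :", permutation_entropy(brownian))  # noticeably below 1
```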

  11. Investigation of formation of cut off layers and productivity of screw milling process

    NASA Astrophysics Data System (ADS)

    Ambrosimov, S. K.; Morozova, A. V.

    2018-03-01

    The article presents studies of a new method for the complex milling of surfaces with a screw feed motion. Using the apparatus of the algebra of logic, the formation of the cut metal layers and the productivity of the process are described.

  12. A Practical Approach to Implementing Real-Time Semantics

    NASA Technical Reports Server (NTRS)

    Luettgen, Gerald; Bhat, Girish; Cleaveland, Rance

    1999-01-01

    This paper investigates implementations of process algebras which are suitable for modeling concurrent real-time systems. It suggests an approach for efficiently implementing real-time semantics using dynamic priorities. For this purpose a process algebra with dynamic priority is defined, whose semantics corresponds one-to-one to traditional real-time semantics. The advantage of the dynamic-priority approach is that it drastically reduces the state-space sizes of the systems in question while preserving all properties of their functional and real-time behavior. The utility of the technique is demonstrated by a case study which deals with the formal modeling and verification of the SCSI-2 bus protocol. The case study is carried out in the Concurrency Workbench of North Carolina, an automated verification tool in which the process algebra with dynamic priority is implemented. It turns out that the state space of the bus-protocol model is about an order of magnitude smaller than the one resulting from real-time semantics. The accuracy of the model is proved by applying model checking to verify several mandatory properties of the bus protocol.

  13. Optical pattern recognition algorithms on neural-logic equivalent models and demonstration of their prospects and possible implementations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.

    2001-03-01

    Historical information regarding the development of the algebra-logical apparatus of 'equivalental algebra' for describing neural-network paradigms and algorithms is considered; this apparatus unifies the theory of neural networks (NN), linear algebra, and generalized neurobiology, extended to the matrix case. A survey is given of 'equivalental models' of neural networks and associative memory, and new, modified matrix-tensor neuro-logical equivalental models (MTNLEMs) with double adaptive-equivalental weighting (DAEW) are offered for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe the processes in NN both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type, and the computing process in NN under the offered MTNLEMs with DAEW reduces to two-step and multi-step algorithms and step-by-step matrix-tensor procedures (for SNIR) and to procedures for defining space-dependent equivalental functions from two images (for SIR).

  14. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kruetz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1994-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.

  15. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1996-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.

  16. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  17. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  18. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  19. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    NASA Astrophysics Data System (ADS)

    Zhu, Z. W.; Zhang, W. D.; Xu, J.

    2014-03-01

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to describe the hysteretic phenomena of the GMF, and a non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful for the engineering applications of GMF.

  20. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems.

    PubMed

    Araújo, Luciano V; Malkowski, Simon; Braghetto, Kelly R; Passos-Bueno, Maria R; Zatz, Mayana; Pu, Calton; Ferreira, João E

    2011-12-22

    Recent medical and biological technology advances have stimulated the development of new testing systems that have been providing huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduced the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related works is that we were able to join two important aspects: 1) process scalability achieved through relational database implementation, and 2) correctness of processes using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge about business process notation or process algebra. This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility of, and shown the usability benefits of, a rigorous approach that is able to specify, validate, and perform genetic testing using easy end user interfaces.

  1. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems

    PubMed Central

    2011-01-01

    Background Recent medical and biological technology advances have stimulated the development of new testing systems that have been providing huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduced the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related works is that we were able to join two important aspects: 1) process scalability achieved through relational database implementation, and 2) correctness of processes using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge about business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility of, and shown the usability benefits of, a rigorous approach that is able to specify, validate, and perform genetic testing using easy end user interfaces. PMID:22369688

  2. 1/f Noise from nonlinear stochastic differential equations.

    PubMed

    Ruseckas, J; Kaulakys, B

    2010-03-01

    We consider a class of nonlinear stochastic differential equations giving power-law behavior of the power spectral density in any desirably wide range of frequencies. Such equations were obtained starting from the point process models of 1/f^β noise. In this article the power-law behavior of the spectrum is derived directly from the stochastic differential equations, without using the point process models. The analysis reveals that the power spectrum may be represented as a sum of Lorentzian spectra. Such a derivation provides additional justification of the equations, expands the class of equations generating 1/f^β noise, and provides further insights into the origin of 1/f^β noise.
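
    An illustrative Euler-Maruyama integration of a nonlinear SDE with power-law drift and multiplicative noise, restricted to an interval by reflection, followed by a Welch estimate of the power spectral density; the exponents, coefficients, and boundaries are assumptions chosen for this sketch and are not the specific equations of the paper.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)

# dx = a * x**(2*eta - 1) dt + sigma * x**eta dW, reflected into [x_min, x_max].
eta, sigma = 2.0, 0.2
a = 0.5 * sigma ** 2                  # drift amplitude (assumed)
x_min, x_max = 1.0, 10.0
dt, n_steps = 1e-4, 500_000

x = np.empty(n_steps)
x[0] = 2.0
for k in range(1, n_steps):
    xp = x[k - 1]
    xn = (xp + a * xp ** (2 * eta - 1) * dt
             + sigma * xp ** eta * np.sqrt(dt) * rng.standard_normal())
    if xn < x_min:                    # reflect back into the allowed interval
        xn = 2 * x_min - xn
    elif xn > x_max:
        xn = 2 * x_max - xn
    x[k] = xn

f, Pxx = welch(x, fs=1.0 / dt, nperseg=1 << 16)
band = (f > 1.0) & (f < 100.0)
slope = np.polyfit(np.log(f[band]), np.log(Pxx[band]), 1)[0]
print("fitted spectral slope in the mid-frequency band:", slope)
```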

  3. Information transfer with rate-modulated Poisson processes: a simple model for nonstationary stochastic resonance.

    PubMed

    Goychuk, I

    2001-08-01

    Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with a signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with a periodic input signal) to arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between the two measures notwithstanding their apparent similarity in the limit of weak signals.
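
    A minimal sketch of a rate-modulated (inhomogeneous) Poisson spike train of finite duration, generated by Lewis-Shedler thinning; the baseline rate, modulation depth, and signal shape are illustrative assumptions, not the paper's model parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

r0, r1 = 20.0, 10.0                 # baseline rate and modulation depth (Hz, assumed)
T = 2.0                             # signal duration in seconds

def rate(t):
    # spiking rate modulated by a single half-sine "signal" of finite duration
    return r0 + r1 * np.sin(np.pi * t / T)

r_max = r0 + r1                     # upper bound on the rate, needed for thinning
t, spikes = 0.0, []
while True:
    t += rng.exponential(1.0 / r_max)     # candidate event from a rate-r_max process
    if t > T:
        break
    if rng.uniform() < rate(t) / r_max:   # accept with probability rate(t)/r_max
        spikes.append(t)

print("number of spikes :", len(spikes))
print("expected count   :", r0 * T + 2 * r1 * T / np.pi)   # integral of rate(t)
```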

  4. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
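
    A short simulation of the effect described above, assuming a discrete-time Bernoulli pulse stream and a counter that ignores a fixed number of clock slots after each recorded pulse; the renewal-theory estimate p/(1 + p*d) for the recorded rate is printed for comparison. The probability and dead time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

p, dead, n_slots = 0.3, 3, 1_000_000
pulses = rng.random(n_slots) < p        # Bernoulli pulse stream

recorded = 0
blocked_until = -1
for i in np.flatnonzero(pulses):
    if i > blocked_until:               # counter is live: record the pulse
        recorded += 1
        blocked_until = i + dead        # ignore the next `dead` clock slots

p_eff = recorded / n_slots
print("nominal probability    :", p)
print("recorded probability   :", p_eff)
print("equilibrium estimate   :", p / (1.0 + p * dead))
```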

  5. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects

    PubMed Central

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population size are considered, where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computation of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, demonstrating that this approach is particularly well suited for studying the impact of varying rates of immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993

  6. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects.

    PubMed

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population size are considered, where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computation of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, demonstrating that this approach is particularly well suited for studying the impact of varying rates of immigration, births, deaths, infection, recovery from infection, and loss of immunity.
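
    A brute-force sketch of the kind of computation described above: a small stochastic SIS epidemic with immigration and demography is written as a finite continuous-time Markov chain (simply truncated, rather than exploiting the level-dependent QBD structure and matrix-analytic methods of the paper), and its equilibrium distribution is obtained by solving pi Q = 0. All rates and the truncation level are illustrative assumptions.

```python
import numpy as np

alpha_s, alpha_i = 2.0, 0.2   # immigration of susceptibles / infectives (assumed)
mu    = 0.1                   # per-capita death rate
beta  = 0.02                  # infection rate (mass action, beta*S*I)
gamma = 0.5                   # recovery rate (I -> S)
K     = 30                    # truncation level for S and I

states = [(s, i) for s in range(K + 1) for i in range(K + 1)]
index = {st: n for n, st in enumerate(states)}
Q = np.zeros((len(states), len(states)))

def add(src, dst, rate):
    if rate > 0.0 and dst in index:      # transitions leaving the truncation are dropped
        Q[index[src], index[dst]] += rate

for (s, i) in states:
    add((s, i), (s + 1, i), alpha_s)            # susceptible immigration
    add((s, i), (s, i + 1), alpha_i)            # infective immigration
    add((s, i), (s - 1, i), mu * s)             # susceptible death
    add((s, i), (s, i - 1), mu * i)             # infective death
    add((s, i), (s - 1, i + 1), beta * s * i)   # infection
    add((s, i), (s + 1, i - 1), gamma * i)      # recovery (back to susceptible)

Q[np.arange(len(states)), np.arange(len(states))] = -Q.sum(axis=1)

# Equilibrium distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

mean_I = sum(pi[index[(s, i)]] * i for (s, i) in states)
print("equilibrium mean number of infectives:", mean_I)
```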

  7. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    PubMed

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  8. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    NASA Astrophysics Data System (ADS)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
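
    A minimal Gillespie simulation of the basic two-stage model referred to above (transcription, mRNA decay, translation, protein decay), sampling the protein copy number at a late time across runs; the rate constants are illustrative assumptions, and the empirical mean can be checked against k_m*k_p/(g_m*g_p).

```python
import numpy as np

rng = np.random.default_rng(10)

# Two-stage model:  DNA -> DNA + mRNA (k_m);  mRNA -> 0 (g_m per mRNA);
#                   mRNA -> mRNA + P  (k_p per mRNA);  P -> 0 (g_p per protein)
k_m, g_m, k_p, g_p = 2.0, 1.0, 5.0, 0.1
T_end, n_runs = 50.0, 500

def run_once():
    t, m, p = 0.0, 0, 0
    while t < T_end:
        rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.uniform() * total
        if r < rates[0]:
            m += 1                    # transcription
        elif r < rates[:2].sum():
            m -= 1                    # mRNA decay
        elif r < rates[:3].sum():
            p += 1                    # translation
        else:
            p -= 1                    # protein decay
    return p

proteins = np.array([run_once() for _ in range(n_runs)])
print("mean protein number          :", proteins.mean())
print("theory k_m*k_p/(g_m*g_p)     :", k_m * k_p / (g_m * g_p))
print("Fano factor (variance/mean)  :", proteins.var() / proteins.mean())
```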

  9. A Simplified Treatment of Brownian Motion and Stochastic Differential Equations Arising in Financial Mathematics

    ERIC Educational Resources Information Center

    Parlar, Mahmut

    2004-01-01

    Brownian motion is an important stochastic process used in modelling the random evolution of stock prices. In their 1973 seminal paper--which led to the awarding of the 1997 Nobel prize in Economic Sciences--Fischer Black and Myron Scholes assumed that the random stock price process is described (i.e., generated) by Brownian motion. Despite its…
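
    A minimal sketch of the process mentioned above: geometric Brownian motion sampled on a daily grid, with the simulated terminal mean compared against E[S_T] = S_0*exp(mu*T); all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Geometric Brownian motion: S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t)
S0, mu, sigma = 100.0, 0.08, 0.2
T, n_steps, n_paths = 1.0, 252, 10000
dt = T / n_steps

Z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((mu - 0.5 * sigma ** 2) * dt
                      + sigma * np.sqrt(dt) * Z, axis=1)
S_T = S0 * np.exp(log_paths[:, -1])

print("simulated mean S_T :", S_T.mean())
print("theoretical E[S_T] :", S0 * np.exp(mu * T))
```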

  10. Mathematics Education. Selected Papers from the Conference on Stochastic Processes and Their Applications. (15th, Nagoya, Japan, July 2-5, 1985).

    ERIC Educational Resources Information Center

    Hida, Takeyuki; Shimizu, Akinobu

    This volume contains the papers and comments from the Workshop on Mathematics Education, a special session of the 15th Conference on Stochastic Processes and Their Applications, held in Nagoya, Japan, July 2-5, 1985. Topics covered include: (1) probability; (2) statistics; (3) deviation; (4) Japanese mathematics curriculum; (5) statistical…

  11. Quantum stochastic walks on networks for decision-making.

    PubMed

    Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo

    2016-03-31

    Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how behavioral choice probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches, previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process turns out to also be a measure of the process's degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.

  12. Intrinsic Information Processing and Energy Dissipation in Stochastic Input-Output Dynamical Systems

    DTIC Science & Technology

    2015-07-09

    Crutchfield. Information Anatomy of Stochastic Equilibria, Entropy, (08 2014): 0. doi: 10.3390/e16094713 Virgil Griffith, Edwin Chong, Ryan James...Christopher Ellison, James Crutchfield. Intersection Information Based on Common Randomness, Entropy, (04 2014): 0. doi: 10.3390/e16041985 TOTAL: 5 Number...Learning Group Seminar, Complexity Sciences Center, UC Davis. Korana Burke and Greg Wimsatt (UCD), reviewed PRL “Measurement of Stochastic Entropy

  13. Stochastically gated local and occupation times of a Brownian particle

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-01-01

    We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.
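
    A Monte Carlo sketch of the gated occupation time, assuming a discretized Brownian particle and an independent two-state Markov gate per realization; a closed gate reflects attempted crossings of the origin, and the time spent on the positive half-line is accumulated. Parameters are illustrative assumptions, and this is a simulation cross-check rather than the paper's moments method.

```python
import numpy as np

rng = np.random.default_rng(12)

D = 1.0                         # diffusion coefficient
k_open, k_close = 1.0, 2.0      # gate switching rates (closed->open, open->closed)
T, dt, n_paths = 10.0, 1e-3, 5000
std = np.sqrt(2.0 * D * dt)
n_steps = int(T / dt)

x = np.ones(n_paths)            # all particles start at x = 1
gate_open = np.ones(n_paths, dtype=bool)
occ = np.zeros(n_paths)

for _ in range(n_steps):
    u = rng.random(n_paths)
    # first-order gate switching over one time step
    gate_open = np.where(gate_open, u >= k_close * dt, u < k_open * dt)
    x_new = x + std * rng.standard_normal(n_paths)
    # a closed gate reflects any attempted crossing of the origin
    crossed = (np.sign(x_new) != np.sign(x)) & ~gate_open
    x = np.where(crossed, -x_new, x_new)
    occ += dt * (x > 0.0)

print("mean gated occupation time over [0, T]:", occ.mean())
```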

  14. Approximation methods of European option pricing in multiscale stochastic volatility model

    NASA Astrophysics Data System (ADS)

    Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei

    2017-01-01

    In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions suggest that the constant-volatility assumption might not be realistic. General stochastic volatility models, e.g. the Heston, GARCH and SABR volatility models, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior in capturing these empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate, respectively. We consider the European option pricing problem under one type of multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximating analytical solution using an asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition, we propose a numerical approximating solution using Monte Carlo simulation. For completeness and for comparison we also implement the semi-analytical solution of Chiarella and Ziveyi [11] using the method of characteristics, Fourier and bivariate Laplace transforms.
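
    A minimal sketch of the Monte Carlo route mentioned in the abstract, assuming an illustrative two-factor model in which the squared volatility is the sum of a fast and a slow mean-reverting CIR-type factor, both independent of the asset's driving noise; this is not the specific model or calibration of Canhanga et al., and all parameter values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_european_call(S0=100.0, K=100.0, r=0.03, T=1.0,
                     kappa_f=5.0, theta_f=0.04, xi_f=0.3, v1_0=0.04,   # fast factor
                     kappa_s=0.5, theta_s=0.04, xi_s=0.2, v2_0=0.04,   # slow factor
                     n_steps=250, n_paths=50_000):
    """Euler (full-truncation) Monte Carlo for a two-factor model:
    dS = r S dt + sqrt(v1 + v2) S dW,
    dv_i = kappa_i (theta_i - v_i) dt + xi_i sqrt(v_i) dB_i,
    with W, B_1, B_2 independent Brownian motions."""
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v1 = np.full(n_paths, v1_0)
    v2 = np.full(n_paths, v2_0)
    for _ in range(n_steps):
        z = rng.standard_normal((3, n_paths))
        v1p, v2p = np.maximum(v1, 0.0), np.maximum(v2, 0.0)
        S *= np.exp((r - 0.5 * (v1p + v2p)) * dt
                    + np.sqrt((v1p + v2p) * dt) * z[0])
        v1 += kappa_f * (theta_f - v1p) * dt + xi_f * np.sqrt(v1p * dt) * z[1]
        v2 += kappa_s * (theta_s - v2p) * dt + xi_s * np.sqrt(v2p * dt) * z[2]
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print("Monte Carlo price of the European call:", mc_european_call())
```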

  15. A note on derivations of Murray-von Neumann algebras.

    PubMed

    Kadison, Richard V; Liu, Zhe

    2014-02-11

    A Murray-von Neumann algebra is the algebra of operators affiliated with a finite von Neumann algebra. In this article, we first present a brief introduction to the theory of derivations of operator algebras from both the physical and mathematical points of view. We then describe our recent work on derivations of Murray-von Neumann algebras. We show that the "extended derivations" of a Murray-von Neumann algebra, those that map the associated finite von Neumann algebra into itself, are inner. In particular, we prove that the only derivation that maps a Murray-von Neumann algebra associated with a factor of type II1 into that factor is 0. Those results are extensions of Singer's seminal result answering a question of Kaplansky, as applied to von Neumann algebras: The algebra may be noncommutative and may even contain unbounded elements.

  16. The response analysis of fractional-order stochastic system via generalized cell mapping method.

    PubMed

    Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei

    2018-01-01

    This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundary, saddle, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of probability density functions of the response. The fractional-order ϕ^6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.

  17. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    NASA Astrophysics Data System (ADS)

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models make it possible to capture real-world features of options better than the classical Black-Scholes treatment. Here we focus on pricing of European-style options under the Stein-Stein stochastic volatility model, where the option value depends on the time, on the price of the underlying asset and on the volatility, the latter given as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.

  18. A large deviations principle for stochastic flows of viscous fluids

    NASA Astrophysics Data System (ADS)

    Cipriano, Fernanda; Costa, Tiago

    2018-04-01

    We study the well-posedness of a stochastic differential equation on the two-dimensional torus T^2, driven by an infinite-dimensional Wiener process with drift in the Sobolev space L^2(0, T; H^1(T^2)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the Euler deterministic Lagrangian flow with an exponential rate function.

  19. The cardiorespiratory interaction: a nonlinear stochastic model and its synchronization properties

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Kenwright, D.; Stefanovska, A.; McClintock, P. V. E.

    2007-06-01

    We address the problem of interactions between the phases of the cardiac and respiratory oscillatory components. The coupling between these two quantities is investigated experimentally using the theory of stochastic Markovian processes. The so-called Markov analysis allows us to derive nonlinear stochastic equations for the reconstruction of the cardiorespiratory signals. The properties of these equations provide interesting new insights into the strength and direction of coupling, which enable us to divide the coupling into two parts: deterministic and stochastic. It is shown that the synchronization behavior of the reconstructed signals is statistically identical to that of the original ones.

  20. Banach Synaptic Algebras

    NASA Astrophysics Data System (ADS)

    Foulis, David J.; Pulmannová, Sylvia

    2018-04-01

    Using a representation theorem of Erik Alfsen, Frederic Schultz, and Erling Størmer for special JB-algebras, we prove that a synaptic algebra is norm complete (i.e., Banach) if and only if it is isomorphic to the self-adjoint part of a Rickart C∗-algebra. Also, we give conditions on a Banach synaptic algebra that are equivalent to the condition that it is isomorphic to the self-adjoint part of an AW∗-algebra. Moreover, we study some relationships between synaptic algebras and so-called generalized Hermitian algebras.

  1. Numerical stability in problems of linear algebra.

    NASA Technical Reports Server (NTRS)

    Babuska, I.

    1972-01-01

    Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how a priori information, such as knowledge of the row sums of the matrix, can be utilized. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
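
    For readers who want a concrete instance of the kind of recurrence whose roundoff propagation is analyzed above, here is a standard Thomas-algorithm sketch for tridiagonal systems; this is a generic textbook process, not one of the specific old or new processes studied in the paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (length n-1),
    diagonal b (length n), super-diagonal c (length n-1), right-hand side d."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    # forward elimination
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    # back substitution
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-type test matrix on 5 interior points
n = 5
a = -np.ones(n - 1)
c = -np.ones(n - 1)
b = 2.0 * np.ones(n)
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print("solution:", x)
print("max residual:", np.max(np.abs(A @ x - d)))
```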

  2. Quantum processes: A Whiteheadian interpretation of quantum field theory

    NASA Astrophysics Data System (ADS)

    Bain, Jonathan

    Quantum processes: A Whiteheadian interpretation of quantum field theory is an ambitious and thought-provoking exercise in physics and metaphysics, combining an erudite study of the very complex metaphysics of A.N. Whitehead with a well-informed discussion of contemporary issues in the philosophy of algebraic quantum field theory. Hättich's overall goal is to construct an interpretation of quantum field theory. He does this by translating key concepts in Whitehead's metaphysics into the language of algebraic quantum field theory. In brief, this Hättich-Whitehead (H-W, hereafter) interpretation takes "actual occasions" as the fundamental ontological entities of quantum field theory. An actual occasion is the result of two types of processes: a "transition process" in which a set of initial possibly-possessed properties for the occasion (in the form of "eternal objects") is localized to a space-time region; and a "concrescence process" in which a subset of these initial possibly-possessed properties is selected and actualized to produce the occasion. Essential to these processes is the "underlying activity", which conditions the way in which properties are initially selected and subsequently actualized. In short, under the H-W interpretation of quantum field theory, an initial set of possibly-possessed eternal objects is represented by a Boolean sublattice of the lattice of projection operators determined by a von Neumann algebra R (O) associated with a region O of Minkowski space-time, and the underlying activity is represented by a state on R (O) obtained by conditionalizing off of the vacuum state. The details associated with the H-W interpretation involve imposing constraints on these representations motivated by principles found in Whitehead's metaphysics. These details are spelled out in the three sections of the book. The first section is a summary and critique of Whitehead's metaphysics, the second section introduces the formalism of algebraic quantum field theory, and the third section consists of a translation between the first two sections. This review will concentrate on the first and third sections, with an eye on making explicit the essential characteristics of the H-W interpretation.

  3. Hopf algebras of rooted forests, cocycles, and free Rota-Baxter algebras

    NASA Astrophysics Data System (ADS)

    Zhang, Tianjie; Gao, Xing; Guo, Li

    2016-10-01

    The Hopf algebra and the Rota-Baxter algebra are the two algebraic structures underlying the algebraic approach of Connes and Kreimer to renormalization of perturbative quantum field theory. In particular, the Hopf algebra of rooted trees serves as the "baby model" of Feynman graphs in their approach and can be characterized by certain universal properties involving a Hochschild 1-cocycle. Decorated rooted trees have also been applied to study Feynman graphs. We will continue the study of universal properties of various spaces of decorated rooted trees with such a 1-cocycle, leading to the concept of a cocycle Hopf algebra. We further apply the universal properties to equip a free Rota-Baxter algebra with the structure of a cocycle Hopf algebra.

  4. Stochastic IMT (Insulator-Metal-Transition) Neurons: An Interplay of Thermal and Threshold Noise at Bifurcation

    PubMed Central

    Parihar, Abhinav; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2018-01-01

    Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of such observed stochasticity. The current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article, in which we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO2) based IMT neurons, which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations - which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. The moments of interspike intervals are calculated analytically by extending the first-passage-time (FPT) models for the Ornstein-Uhlenbeck (OU) process to include a fluctuating boundary. We find that the coefficient of variation of interspike intervals depends on the relative proportion of thermal and threshold noise, with threshold noise being the dominant source in the current experimental demonstrations. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms. PMID:29670508
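
    A minimal simulation sketch of the modeling idea summarized above: an Ornstein-Uhlenbeck membrane variable with a threshold that itself fluctuates (here modeled, for illustration, as a second slower OU process), spikes emitted on first passage and followed by a reset. The parameter values and the exact form of the threshold noise are assumptions for the example, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_isis(T=500.0, dt=1e-3, tau_v=1.0, mu=0.8, sigma_v=0.3,
                  theta0=1.0, tau_th=5.0, sigma_th=0.1, v_reset=0.0):
    """OU membrane variable v with a fluctuating threshold theta.
    A spike is emitted when v first crosses theta; v is then reset."""
    n = int(T / dt)
    v, theta = v_reset, theta0
    last_spike, isis = 0.0, []
    noise_v = sigma_v * np.sqrt(2.0 * dt / tau_v)     # gives stationary std sigma_v
    noise_th = sigma_th * np.sqrt(2.0 * dt / tau_th)  # gives stationary std sigma_th
    for k in range(n):
        t = k * dt
        v += (mu - v) * dt / tau_v + noise_v * rng.standard_normal()
        theta += (theta0 - theta) * dt / tau_th + noise_th * rng.standard_normal()
        if v >= theta:                                 # first-passage event -> spike
            isis.append(t - last_spike)
            last_spike = t
            v = v_reset
    return np.array(isis)

isis = simulate_isis()
cv = isis.std() / isis.mean()
print(f"{len(isis)} spikes, mean ISI = {isis.mean():.3f}, CV = {cv:.2f}")
```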

  5. Stochastic IMT (Insulator-Metal-Transition) Neurons: An Interplay of Thermal and Threshold Noise at Bifurcation.

    PubMed

    Parihar, Abhinav; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2018-01-01

    Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of such observed stochasticity. The current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article, in which we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO2) based IMT neurons, which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations - which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. The moments of interspike intervals are calculated analytically by extending the first-passage-time (FPT) models for the Ornstein-Uhlenbeck (OU) process to include a fluctuating boundary. We find that the coefficient of variation of interspike intervals depends on the relative proportion of thermal and threshold noise, with threshold noise being the dominant source in the current experimental demonstrations. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms.

  6. Robust algebraic image enhancement for intelligent control systems

    NASA Technical Reports Server (NTRS)

    Lerner, Bao-Ting; Morrelli, Michael

    1993-01-01

    Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.

  7. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field ln K_S is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, the head h is decomposed as a perturbation expansion series Σ h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables, whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on the h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique.
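
    A minimal sketch of the first ingredient of the KLME machinery described above: a truncated Karhunen-Loeve representation of a one-dimensional Gaussian log-conductivity field with exponential covariance, with the eigenpairs computed numerically from the discretized covariance matrix. Coupling the expansion to the flow equations (done with MODFLOW-2000 in the paper) is beyond this sketch, and the field parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def kl_log_conductivity(n_nodes=200, L=100.0, mean_lnK=0.0,
                        var_lnK=1.0, corr_len=10.0, n_terms=20):
    """Truncated Karhunen-Loeve expansion of a stationary Gaussian field
    ln K(x) with exponential covariance C(r) = var_lnK * exp(-r / corr_len)."""
    x = np.linspace(0.0, L, n_nodes)
    C = var_lnK * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    dx = L / (n_nodes - 1)
    # discrete KL: eigenpairs of the covariance matrix weighted by dx
    lam, vec = np.linalg.eigh(C * dx)
    idx = np.argsort(lam)[::-1][:n_terms]      # keep the largest eigenvalues
    lam, phi = lam[idx], vec[:, idx] / np.sqrt(dx)
    xi = rng.standard_normal(n_terms)          # independent N(0,1) coefficients
    lnK = mean_lnK + phi @ (np.sqrt(lam) * xi)
    return x, lnK, lam

x, lnK, lam = kl_log_conductivity()
# the trace of the covariance operator is roughly var_lnK * L = 100, so this
# is the fraction of the field variance captured by the 20 retained KL modes
print("captured variance fraction:", lam.sum() / 100.0)
print("sample field values at the first few nodes:", lnK[:5])
```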

  8. On convergence of the unscented Kalman-Bucy filter using contraction theory

    NASA Astrophysics Data System (ADS)

    Maree, J. P.; Imsland, L.; Jouffroy, J.

    2016-06-01

    Contraction theory provides a theoretical framework in which convergence of a nonlinear system can be analysed differentially in an appropriate contraction metric. This paper is concerned with utilising stochastic contraction theory to establish exponential convergence of the unscented Kalman-Bucy filter. The underlying process and measurement models of interest are Itô-type stochastic differential equations. In particular, statistical linearisation techniques are employed in a virtual-actual systems framework to establish deterministic contraction of the estimated expected mean of process values. Under mild conditions of bounded process noise, we extend the results on deterministic contraction to stochastic contraction of the estimated expected mean of the process state. It follows that for the regions of contraction, a convergence result, and thereby incremental stability, is established for the unscented Kalman-Bucy filter. The theoretical concepts are illustrated in two case studies.

  9. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.

  10. The Unitality of Quantum B-algebras

    NASA Astrophysics Data System (ADS)

    Han, Shengwei; Xu, Xiaoting; Qin, Feng

    2018-02-01

    Quantum B-algebras as a generalization of quantales were introduced by Rump and Yang, which cover the majority of implicational algebras and provide a unified semantic for a wide class of substructural logics. Unital quantum B-algebras play an important role in the classification of implicational algebras. The main purpose of this paper is to construct unital quantum B-algebras from non-unital quantum B-algebras.

  11. Explorations in fuzzy physics and non-commutative geometry

    NASA Astrophysics Data System (ADS)

    Kurkcuoglu, Seckin

    Fuzzy spaces arise as discrete approximations to continuum manifolds. They are usually obtained through quantizing coadjoint orbits of compact Lie groups and they can be described in terms of finite-dimensional matrix algebras, which for large matrix sizes approximate the algebra of functions of the limiting continuum manifold. Their ability to exactly preserve the symmetries of their parent manifolds is especially appealing for physical applications. Quantum field theories are built over them as finite-dimensional matrix models preserving almost all the symmetries of their respective continuum models. In this dissertation, we first focus our attention on the study of fuzzy supersymmetric spaces. In this regard, we obtain the fuzzy supersphere S^{2,2}_F through quantizing the supersphere, and demonstrate that it has exact supersymmetry. We derive a finite series formula for the *-product of functions over S^{2,2}_F and analyze the differential geometric information encoded in this formula. Subsequently, we show that quantum field theories on S^{2,2}_F are realized as finite-dimensional supermatrix models, and in particular we obtain the non-linear sigma model over the fuzzy supersphere by constructing the fuzzy supersymmetric extensions of a certain class of projectors. We show that this model, too, is realized as a finite-dimensional supermatrix model with exact supersymmetry. Next, we show that fuzzy spaces have a generalized Hopf algebra structure. By focusing on the fuzzy sphere, we establish that there is a *-homomorphism from the group algebra SU(2)* of SU(2) to the fuzzy sphere. Using this and the canonical Hopf algebra structure of SU(2)*, we show that both the fuzzy sphere and their direct sum are Hopf algebras. Using these results, we discuss processes in which a fuzzy sphere with angular momentum J splits into fuzzy spheres with angular momenta K and L. Finally, we study the formulation of Chern-Simons (CS) theory on an infinite strip of the non-commutative plane. We develop a finite-dimensional matrix model whose large-size limit approximates the CS theory on the infinite strip, and show that there are edge observables in this model obeying a finite-dimensional Lie algebra that resembles the Kac-Moody algebra.

  12. Random-order fractional bistable system and its stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia

    2017-01-01

    In this paper, the diffusion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled by a random-order fractional bistable equation, and the stochastic resonance phenomena in this system are investigated as a typical nonlinear dynamic behavior. First, the derivation of the random-order fractional bistable system is given. In particular, the random-power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied mainly by numerical simulation. In particular, frequency-shifting phenomena of the periodic output are observed in the stochastic resonance induced by the excitation of the random order. Finally, the stochastic resonance of the system under the double stochastic excitations of the random order and internal color noise is also investigated.

  13. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
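
    A rough sketch of the reformulation the abstract builds on: each reaction channel of a birth-death process is driven by its own independent random stream (mirroring the independent unit-rate Poisson processes of the random-time-change representation), here advanced with a simple tau-leap. Freezing one channel's stream while resampling the other gives a crude pick-freeze estimate of that channel's main-effect (Sobol) index; the reaction network, rate constants, and estimator details are illustrative and not the authors' algorithm.

```python
import numpy as np

def birth_death_path(seed_birth, seed_death, x0=10, k_birth=1.0, k_death=0.1,
                     T=10.0, dt=0.02):
    """Tau-leap simulation of a birth-death process: birth at rate k_birth,
    death at rate k_death * X. Each channel draws from its own independent
    RNG stream, mirroring the independent unit-rate Poisson processes of the
    random-time-change representation."""
    rng_b = np.random.default_rng(seed_birth)
    rng_d = np.random.default_rng(seed_death)
    x = x0
    for _ in range(int(T / dt)):
        births = rng_b.poisson(k_birth * dt)
        deaths = rng_d.poisson(k_death * max(x, 0) * dt)
        x = max(x + births - deaths, 0)
    return x

rng = np.random.default_rng(0)
n = 1000
seeds = rng.integers(0, 2**31, size=(n, 2))
x_all = np.array([birth_death_path(s[0], s[1]) for s in seeds])

# crude first-order (main-effect) index of the birth channel: keep the birth
# stream fixed, resample the death stream, and correlate the two outputs
new_death_seeds = rng.integers(0, 2**31, size=n)
x_frozen_birth = np.array([birth_death_path(s[0], r)
                           for s, r in zip(seeds, new_death_seeds)])
var_total = x_all.var()
main_effect_birth = np.cov(x_all, x_frozen_birth)[0, 1] / var_total
print("total variance:", var_total,
      " birth-channel main-effect index:", main_effect_birth)
```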

  14. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  15. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  16. Quantum incompatibility of channels with general outcome operator algebras

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui

    2018-04-01

    A pair of quantum channels is said to be incompatible if they cannot be realized as marginals of a single channel. This paper addresses the general structure of the incompatibility of completely positive channels with a fixed quantum input space and with general outcome operator algebras. We define a compatibility relation for such channels by identifying the composite outcome space as the maximal (projective) C*-tensor product of outcome algebras. We show theorems that characterize this compatibility relation in terms of the concatenation and conjugation of channels, generalizing the recent result for channels with quantum outcome spaces. These results are applied to the positive operator valued measures (POVMs) by identifying each of them with the corresponding quantum-classical (QC) channel. We also give a characterization of the maximality of a POVM with respect to the post-processing preorder in terms of the conjugate channel of the QC channel. We consider another definition of compatibility of normal channels by identifying the composite outcome space with the normal tensor product of the outcome von Neumann algebras. We prove that for a given normal channel, the class of normally compatible channels is upper bounded by a special class of channels called tensor conjugate channels. We show the inequivalence of the C*- and normal compatibility relations for QC channels, which originates from the possibility and impossibility of copying operations for commutative von Neumann algebras in C*- and normal compatibility relations, respectively.

  17. Generalizing the bms3 and 2D-conformal algebras by expanding the Virasoro algebra

    NASA Astrophysics Data System (ADS)

    Caroca, Ricardo; Concha, Patrick; Rodríguez, Evelyn; Salgado-Rebolledo, Patricio

    2018-03-01

    By means of the Lie algebra expansion method, the centrally extended conformal algebra in two dimensions and the bms3 algebra are obtained from the Virasoro algebra. We extend this result to construct new families of expanded Virasoro algebras that turn out to be infinite-dimensional lifts of the so-called Bk, Ck and Dk algebras recently introduced in the literature in the context of (super)gravity. We also show how some of these new infinite-dimensional symmetries can be obtained from expanded Kac-Moody algebras using modified Sugawara constructions. Applications in the context of three-dimensional gravity are briefly discussed.

  18. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  19. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  20. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Uhi Rinn, E-mail: uhrisu1@math.snu.ac.kr

    We introduce a classical BRST complex (See Definition 3.2.) and show that one can construct a classical affine W-algebra via the complex. This definition clarifies that classical affine W-algebras can be considered as quasi-classical limits of quantum affine W-algebras. We also give a definition of a classical affine fractional W-algebra as a Poisson vertex algebra. As in the classical affine case, a classical affine fractional W-algebra has two compatible λ-brackets and is isomorphic to an algebra of differential polynomials as a differential algebra. When a classical affine fractional W-algebra is associated to a minimal nilpotent, we describe explicit forms of free generators and compute λ-brackets between them. Provided some assumptions on a classical affine fractional W-algebra, we find an infinite sequence of integrable systems related to the algebra, using the generalized Drinfel’d and Sokolov reduction.

  2. A note on derivations of Murray–von Neumann algebras

    PubMed Central

    Kadison, Richard V.; Liu, Zhe

    2014-01-01

    A Murray–von Neumann algebra is the algebra of operators affiliated with a finite von Neumann algebra. In this article, we first present a brief introduction to the theory of derivations of operator algebras from both the physical and mathematical points of view. We then describe our recent work on derivations of Murray–von Neumann algebras. We show that the “extended derivations” of a Murray–von Neumann algebra, those that map the associated finite von Neumann algebra into itself, are inner. In particular, we prove that the only derivation that maps a Murray–von Neumann algebra associated with a factor of type II1 into that factor is 0. Those results are extensions of Singer’s seminal result answering a question of Kaplansky, as applied to von Neumann algebras: The algebra may be noncommutative and may even contain unbounded elements. PMID:24469831

  3. A double commutant theorem for Murray–von Neumann algebras

    PubMed Central

    Liu, Zhe

    2012-01-01

    Murray–von Neumann algebras are algebras of operators affiliated with finite von Neumann algebras. In this article, we study commutativity and affiliation of self-adjoint operators (possibly unbounded). We show that a maximal abelian self-adjoint subalgebra A of the Murray–von Neumann algebra Af(R) associated with a finite von Neumann algebra R is the Murray–von Neumann algebra Af(A0), where A0 is a maximal abelian self-adjoint subalgebra of R and, in addition, A0 is A ∩ R. We also prove that the Murray–von Neumann algebra Af(C), with C the center of R, is the center of the Murray–von Neumann algebra Af(R). Von Neumann’s celebrated double commutant theorem characterizes von Neumann algebras R as those for which R′′ = R, where R′, the commutant of R, is the set of bounded operators on the Hilbert space that commute with all operators in R. At the end of this article, we present a double commutant theorem for Murray–von Neumann algebras. PMID:22543165

  4. On the intersection of irreducible components of the space of finite-dimensional Lie algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorbatsevich, Vladimir V

    2012-07-31

    The irreducible components of the space of n-dimensional Lie algebras are investigated. The properties of Lie algebras belonging to the intersection of all the irreducible components of this kind are studied (these Lie algebras are said to be basic or founding Lie algebras). It is proved that all Lie algebras of this kind are nilpotent and each of these Lie algebras has an Abelian ideal of codimension one. Specific examples of founding Lie algebras of arbitrary dimension are described and, to describe the Lie algebras in general, we state a conjecture. The concept of spectrum of a Lie algebra is considered and some of the most elementary properties of the spectrum are studied. Bibliography: 6 titles.

  5. Quarks, Symmetries and Strings - a Symposium in Honor of Bunji Sakita's 60th Birthday

    NASA Astrophysics Data System (ADS)

    Kaku, M.; Jevicki, A.; Kikkawa, K.

    1991-04-01

    The Table of Contents for the full book PDF is as follows: * Preface * Evening Banquet Speech * I. Quarks and Phenomenology * From the SU(6) Model to Uniqueness in the Standard Model * A Model for Higgs Mechanism in the Standard Model * Quark Mass Generation in QCD * Neutrino Masses in the Standard Model * Solar Neutrino Puzzle, Horizontal Symmetry of Electroweak Interactions and Fermion Mass Hierarchies * State of Chiral Symmetry Breaking at High Temperatures * Approximate |ΔI| = 1/2 Rule from a Perspective of Light-Cone Frame Physics * Positronium (and Some Other Systems) in a Strong Magnetic Field * Bosonic Technicolor and the Flavor Problem * II. Strings * Supersymmetry in String Theory * Collective Field Theory and Schwinger-Dyson Equations in Matrix Models * Non-Perturbative String Theory * The Structure of Non-Perturbative Quantum Gravity in One and Two Dimensions * Noncritical Virasoro Algebra of d < 1 Matrix Model and Quantized String Field * Chaos in Matrix Models ? * On the Non-Commutative Symmetry of Quantum Gravity in Two Dimensions * Matrix Model Formulation of String Field Theory in One Dimension * Geometry of the N = 2 String Theory * Modular Invariance form Gauge Invariance in the Non-Polynomial String Field Theory * Stringy Symmetry and Off-Shell Ward Identities * q-Virasoro Algebra and q-Strings * Self-Tuning Fields and Resonant Correlations in 2d-Gravity * III. Field Theory Methods * Linear Momentum and Angular Momentum in Quaternionic Quantum Mechanics * Some Comments on Real Clifford Algebras * On the Quantum Group p-adics Connection * Gravitational Instantons Revisited * A Generalized BBGKY Hierarchy from the Classical Path-Integral * A Quantum Generated Symmetry: Group-Level Duality in Conformal and Topological Field Theory * Gauge Symmetries in Extended Objects * Hidden BRST Symmetry and Collective Coordinates * Towards Stochastically Quantizing Topological Actions * IV. Statistical Methods * A Brief Summary of the s-Channel Theory of Superconductivity * Neural Networks and Models for the Brain * Relativistic One-Body Equations for Planar Particles with Arbitrary Spin * Chiral Property of Quarks and Hadron Spectrum in Lattice QCD * Scalar Lattice QCD * Semi-Superconductivity of a Charged Anyon Gas * Two-Fermion Theory of Strongly Correlated Electrons and Charge-Spin Separation * Statistical Mechanics and Error-Correcting Codes * Quantum Statistics

  6. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of the soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, respectively, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe obtained with the three modeling approaches are the same, while the standard deviations differ. The distributions of the standard deviation differ greatly at different radial coordinate locations, and the larger standard deviations occur mainly in the phase-change area. The results computed with the random-variable and stochastic-process descriptions differ considerably from the measured data, while the results computed with the random-field description agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  7. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.

  8. Conference on Stochastic Processes and their Applications (16th) Held in Stanford, California on 16-21 August 1987.

    DTIC Science & Technology

    1987-08-21

    Only OCR fragments of the conference program survive, including abstract titles such as “α-CHAOS” by Ron C. Blei, University of Connecticut, Storrs, CT, and “On Criteria of Optimality in Estimation for Stochastic Processes” by C. C. Heyde.

  9. Mathematical Sciences Division 1992 Programs

    DTIC Science & Technology

    1992-10-01

    ... statistical theory that underlies modern signal analysis. There is a strong emphasis on stochastic processes and time series, particularly those which ... include optimal resource planning and real-time scheduling of stochastic shop-floor processes. Scheduling systems will be developed that can adapt to ... make forecasts for the length-of-service time series. Protocol analysis of these sessions will be used to identify relevant contextual features and to ...

  10. Discrete Dynamical Modeling.

    ERIC Educational Resources Information Center

    Sandefur, James T.

    1991-01-01

    Discussed is the process of translating situations involving changing quantities into mathematical relationships. This process, called dynamical modeling, allows students to learn new mathematics while sharpening their algebraic skills. A description of dynamical systems, problem-solving methods, a graphical analysis, and available classroom…

  11. Duncan F. Gregory, William Walton and the development of British algebra: 'algebraical geometry', 'geometrical algebra', abstraction.

    PubMed

    Verburgt, Lukas M

    2016-01-01

    This paper provides a detailed account of the period of the complex history of British algebra and geometry between the publication of George Peacock's Treatise on Algebra in 1830 and William Rowan Hamilton's paper on quaternions of 1843. During these years, Duncan Farquharson Gregory and William Walton published several contributions on 'algebraical geometry' and 'geometrical algebra' in the Cambridge Mathematical Journal. These contributions enabled them not only to generalize Peacock's symbolical algebra on the basis of geometrical considerations, but also to initiate the attempts to question the status of Euclidean space as the arbiter of valid geometrical interpretations. At the same time, Gregory and Walton were bound by the limits of symbolical algebra that they themselves made explicit; their work was not and could not be the 'abstract algebra' and 'abstract geometry' of figures such as Hamilton and Cayley. The central argument of the paper is that an understanding of the contributions to 'algebraical geometry' and 'geometrical algebra' of the second generation of 'scientific' symbolical algebraists is essential for a satisfactory explanation of the radical transition from symbolical to abstract algebra that took place in British mathematics in the 1830s-1840s.

  12. Modeling spiking behavior of neurons with time-dependent Poisson processes.

    PubMed

    Shinomoto, S; Tsubo, Y

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
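
    A minimal sketch that generates a sinusoidally rate-modulated Poisson spike train by thinning and computes the three interval statistics named above (coefficient of variation, skewness, and the lag-1 correlation of consecutive intervals). Rate parameters are illustrative and chosen so that the rate varies slowly compared with the mean interspike interval.

```python
import numpy as np

rng = np.random.default_rng(4)

def sinusoidal_poisson_spikes(T=500.0, r0=5.0, r1=3.0, f=0.2):
    """Thinning (Lewis-Shedler) for a Poisson process with
    rate r(t) = r0 + r1 * sin(2*pi*f*t)."""
    r_max = r0 + r1
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / r_max)
        if t >= T:
            break
        if rng.random() < (r0 + r1 * np.sin(2 * np.pi * f * t)) / r_max:
            spikes.append(t)
    return np.array(spikes)

isi = np.diff(sinusoidal_poisson_spikes())
mean = isi.mean()
cv = isi.std() / mean
skew = np.mean((isi - mean) ** 3) / isi.std() ** 3
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"CV = {cv:.2f}, skewness = {skew:.2f}, lag-1 correlation = {rho1:.3f}")
```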

  13. Power Laws in Stochastic Processes for Social Phenomena: An Introductory Review

    NASA Astrophysics Data System (ADS)

    Kumamoto, Shin-Ichiro; Kamihigashi, Takashi

    2018-03-01

    Many phenomena with power laws have been observed in various fields of the natural and social sciences, and these power laws are often interpreted as the macro behaviors of systems that consist of micro units. In this paper, we review some basic mathematical mechanisms that are known to generate power laws. In particular, we focus on stochastic processes including the Yule process and the Simon process as well as some recent models. The main purpose of this paper is to explain the mathematical details of their mechanisms in a self-contained manner.
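
    A small sketch of one of the mechanisms reviewed above, the Simon process: at each step a brand-new item appears with probability a, otherwise a past occurrence is copied, so existing items are reinforced in proportion to their counts and the count distribution develops a power-law tail (complementary-CDF exponent roughly 1/(1-a)). The parameter value and the crude tail fit are illustrative.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)

def simon_process(n_steps=200_000, a=0.1):
    """Simon urn scheme: with probability a a brand-new item is added;
    otherwise a past occurrence is chosen uniformly and its item repeated,
    i.e. items are reinforced proportionally to their current counts."""
    occurrences = [0]          # history of item labels, one per step
    next_label = 1
    for _ in range(n_steps):
        if rng.random() < a:
            occurrences.append(next_label)
            next_label += 1
        else:
            occurrences.append(occurrences[rng.integers(len(occurrences))])
    return Counter(occurrences)

counts = np.array(sorted(simon_process().values()))
# empirical complementary CDF of the item counts and a rough tail fit
ccdf = 1.0 - np.arange(len(counts)) / len(counts)
mask = counts >= 10
slope = np.polyfit(np.log(counts[mask]), np.log(ccdf[mask]), 1)[0]
print("estimated CCDF tail exponent:", -slope,
      " (Simon model predicts about", 1.0 / (1.0 - 0.1), ")")
```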

  14. From simplicial Lie algebras and hypercrossed complexes to differential graded Lie algebras via 1-jets

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    2012-12-01

    Let g be a simplicial Lie algebra with Moore complex Ng of length k. Let G be the simplicial Lie group integrating g, such that each Gn is simply connected. We use the 1-jet of the classifying space W̄G to construct, starting from g, a Lie k-algebra L. The Lie k-algebra L so constructed is actually a differential graded Lie algebra. The differential and the brackets are explicitly described in terms of (a part of) the corresponding k-hypercrossed complex structure of Ng. The result can be seen as a geometric interpretation of Quillen's (purely algebraic) construction of the adjunction between simplicial Lie algebras and dg-Lie algebras.

  15. Algebra: A Challenge at the Crossroads of Policy and Practice

    ERIC Educational Resources Information Center

    Stein, Mary Kay; Kaufman, Julia Heath; Sherman, Milan; Hillen, Amy F.

    2011-01-01

    The authors review what is known about early and universal algebra, including who is getting access to algebra and student outcomes associated with algebra course taking in general and specifically with universal algebra policies. The findings indicate that increasing numbers of students, some of whom are underprepared, are taking algebra earlier.…

  16. Making Algebra Work: Instructional Strategies that Deepen Student Understanding, within and between Algebraic Representations

    ERIC Educational Resources Information Center

    Star, Jon R.; Rittle-Johnson, Bethany

    2009-01-01

    Competence in algebra is increasingly recognized as a critical milestone in students' middle and high school years. The transition from arithmetic to algebra is a notoriously difficult one, and improvements in algebra instruction are greatly needed (National Research Council, 2001). Algebra historically has represented students' first sustained…

  17. Codifference as a practical tool to measure interdependence

    NASA Astrophysics Data System (ADS)

    Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.

    2015-03-01

    Correlation and spectral analysis represent the standard tools for studying interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions such that the variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss the codifference as a convenient measure of statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes the codifference is equivalent to the covariance. For processes with finite variance these two measures behave similarly in time. For processes with infinite variance the covariance does not exist; however, the codifference remains relevant. We demonstrate the practical importance of the codifference by extracting this function from simulated as well as real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool for studying interdependence for stochastic processes with both infinite and finite variances.
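
    A minimal sketch of an empirical codifference estimator built from sample characteristic functions, using the convention tau(k) = ln E e^{i(X_{t+k} - X_t)} - ln E e^{i X_{t+k}} - ln E e^{-i X_t}; for Gaussian data this quantity reduces to the autocovariance, which the Gaussian AR(1) check below exploits. This is an illustrative implementation, not the estimator applied to the plasma and financial data in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def codifference(x, lag, theta=1.0):
    """Empirical codifference of a stationary series at a given lag,
    estimated by replacing expectations with sample averages."""
    a, b = x[lag:], x[:len(x) - lag]
    phi_diff = np.mean(np.exp(1j * theta * (a - b)))
    phi_a = np.mean(np.exp(1j * theta * a))
    phi_b = np.mean(np.exp(-1j * theta * b))
    return (np.log(phi_diff) - np.log(phi_a) - np.log(phi_b)).real

# Gaussian AR(1) check: the codifference should track the autocovariance
n, phi = 100_000, 0.7
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

for k in (1, 2, 5):
    acov = np.cov(x[k:], x[:-k])[0, 1]
    print(f"lag {k}: codifference = {codifference(x, k):.3f}, "
          f"autocovariance = {acov:.3f}")
```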

  18. Algebraic K-theory, K-regularity, and T-duality of O∞-stable C∗-algebras

    NASA Astrophysics Data System (ADS)

    Mahanta, Snigdhayan

    2015-12-01

    We develop an algebraic formalism for topological T-duality. More precisely, we show that topological T-duality actually induces an isomorphism between noncommutative motives that in turn implements the well-known isomorphism between twisted K-theories (up to a shift). In order to establish this result we model topological K-theory by algebraic K-theory. We also construct an E∞-operad starting from any strongly self-absorbing C∗-algebra D. Then we show that there is a functorial topological K-theory symmetric spectrum construction on the category of separable C∗-algebras, such that the spectrum of D is an algebra over this operad; moreover, the spectrum of any D-stable C∗-algebra is a module over this algebra. Along the way we obtain a new symmetric spectra valued functorial model for the (connective) topological K-theory of C∗-algebras. We also show that O∞-stable C∗-algebras are K-regular, providing evidence for a conjecture of Rosenberg. We conclude with an explicit description of the algebraic K-theory of ax+b-semigroup C∗-algebras coming from number theory and that of O∞-stabilized noncommutative tori.

  19. Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations

    PubMed Central

    2013-01-01

    In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit, in uniform convergence on compacts in probability, of a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328

  20. On time-dependent diffusion coefficients arising from stochastic processes with memory

    NASA Astrophysics Data System (ADS)

    Carpio-Bernido, M. Victoria; Barredo, Wilson I.; Bernido, Christopher C.

    2017-08-01

    Time-dependent diffusion coefficients arise from anomalous diffusion encountered in many physical systems such as protein transport in cells. We compare these coefficients with those arising from analysis of stochastic processes with memory that go beyond fractional Brownian motion. Facilitated by the Hida white noise functional integral approach, diffusion propagators or probability density functions (pdf) are obtained and shown to be solutions of modified diffusion equations with time-dependent diffusion coefficients. This should be useful in the study of complex transport processes.
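
    A standard textbook illustration of how memory produces a time-dependent diffusion coefficient (not the paper's white-noise functional integral approach): for fractional Brownian motion with Hurst index H, the mean squared displacement grows as t^{2H}, so the apparent coefficient D(t) = (1/2) d<x^2>/dt scales as t^{2H-1}. The sketch below, with an assumed H and time grid, checks this scaling numerically.

    ```python
    import numpy as np

    def fbm_paths(hurst, times, n_paths, rng):
        """Sample fractional Brownian motion via the Cholesky factor of its covariance."""
        t = np.asarray(times)
        cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                     - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))
        return rng.standard_normal((n_paths, len(t))) @ L.T

    rng = np.random.default_rng(1)
    H = 0.3                                     # subdiffusive example (illustrative)
    times = np.linspace(0.01, 1.0, 100)
    paths = fbm_paths(H, times, n_paths=2000, rng=rng)
    msd = np.mean(paths ** 2, axis=0)           # mean squared displacement ~ t^{2H}
    D_t = 0.5 * np.gradient(msd, times)         # apparent diffusion coefficient ~ t^{2H-1}
    print(np.polyfit(np.log(times[10:]), np.log(D_t[10:]), 1)[0])  # slope close to 2H - 1
    ```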

  1. Stochastic Analysis and Applied Probability(3.3.1): Topics in the Theory and Applications of Stochastic Analysis

    DTIC Science & Technology

    2015-08-13

    is due to Reiman [36], who considered the case where the arrivals and services are mutually independent renewal processes with square integrable summands ... to a reflected diffusion process with drift and diffusion coefficients that depend on the state of the process. In models considered in works of Reiman ... the infinity Laplacian. Jour. AMS, to appear. [36] M. I. Reiman. Open queueing networks in heavy traffic. Mathematics of Operations Research, 9(3): 441

  2. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    Anaerobic digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, so appropriate and efficient models are needed for the simulation of anaerobic digestion systems. Although several models have been developed, most suffer from a lack of knowledge of constants, from complexity, and from weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentrations. The stochastic behaviour of the physicochemical processes can be modelled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals, so that the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of the time step τ, the computational time required to reach the steady state is longer since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on the optimum selection of the tau value.
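
    For orientation, a minimal explicit tau-leaping sketch for a toy two-reaction chain (glucose → acids → methane, with first-order steps) is given below; the stoichiometry, rate constants, and time step are illustrative assumptions and do not correspond to the ADM1 reaction network used in the paper.

    ```python
    import numpy as np

    def tau_leap(x0, stoich, rates, propensity, t_end, tau, rng):
        """Generic explicit tau-leaping: fire Poisson numbers of each reaction per step."""
        x, t, traj = np.array(x0, dtype=float), 0.0, []
        while t < t_end:
            a = propensity(x, rates)                  # propensities of each reaction
            k = rng.poisson(a * tau)                  # reaction counts fired in (t, t + tau]
            x = np.maximum(x + stoich.T @ k, 0.0)     # update state, clip at zero
            t += tau
            traj.append((t, x.copy()))
        return traj

    # Toy chain: glucose -> acids -> methane (first-order steps, hypothetical constants).
    stoich = np.array([[-1, 1, 0],    # R1: glucose consumed, acids produced
                       [0, -1, 1]])   # R2: acids consumed, methane produced
    rates = np.array([0.02, 0.01])    # per-minute rate constants (illustrative)
    propensity = lambda x, c: c * x[:2]   # first order in glucose and acids
    traj = tau_leap([1000, 0, 0], stoich, rates, propensity, t_end=600.0, tau=1.0,
                    rng=np.random.default_rng(2))
    print(traj[-1])
    ```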

  3. Stochastic model for fatigue crack size and cost effective design decisions. [for aerospace structures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.

  4. Asymptotic behavior of distributions of mRNA and protein levels in a model of stochastic gene expression

    NASA Astrophysics Data System (ADS)

    Bobrowski, Adam; Lipniacki, Tomasz; Pichór, Katarzyna; Rudnicki, Ryszard

    2007-09-01

    The paper is devoted to a stochastic process introduced in the recent paper by Lipniacki et al. [T. Lipniacki, P. Paszek, A. Marciniak-Czochra, A. R. Brasier, M. Kimmel, Transcriptional stochasticity in gene expression, J. Theor. Biol. 238 (2006) 348-367] for modelling gene expression in eukaryotes. Starting from the full generator of the process we show that its distributions satisfy a (Fokker-Planck-type) system of partial differential equations. Then, we construct a C0 Markov semigroup in an L1 space corresponding to this system. The main result of the paper is the asymptotic stability of the involved semigroup in the set of densities.

  5. Constraints on Fluctuations in Sparsely Characterized Biological Systems.

    PubMed

    Hilfinger, Andreas; Norman, Thomas M; Vinnicombe, Glenn; Paulsson, Johan

    2016-02-05

    Biochemical processes are inherently stochastic, creating molecular fluctuations in otherwise identical cells. Such "noise" is widespread but has proven difficult to analyze because most systems are sparsely characterized at the single cell level and because nonlinear stochastic models are analytically intractable. Here, we exactly relate average abundances, lifetimes, step sizes, and covariances for any pair of components in complex stochastic reaction systems even when the dynamics of other components are left unspecified. Using basic mathematical inequalities, we then establish bounds for whole classes of systems. These bounds highlight fundamental trade-offs that show how efficient assembly processes must invariably exhibit large fluctuations in subunit levels and how eliminating fluctuations in one cellular component requires creating heterogeneity in another.

  6. Stochastic phase segregation on surfaces

    PubMed Central

    Gera, Prerna

    2017-01-01

    Phase separation and coarsening is a phenomenon commonly seen in binary physical and chemical systems that occur in nature. Often, thermal fluctuations, modelled as stochastic noise, are present in the system and the phase segregation process occurs on a surface. In this work, the segregation process is modelled via the Cahn–Hilliard–Cook model, which is a fourth-order parabolic stochastic system. Coarsening is analysed on two sample surfaces: a unit sphere and a dumbbell. On both surfaces, a statistical analysis of the growth rate is performed, and the influence of noise level and mobility is also investigated. For the spherical interface, it is also shown that a lognormal distribution fits the growth rate well. PMID:28878994

  7. Constraints on Fluctuations in Sparsely Characterized Biological Systems

    NASA Astrophysics Data System (ADS)

    Hilfinger, Andreas; Norman, Thomas M.; Vinnicombe, Glenn; Paulsson, Johan

    2016-02-01

    Biochemical processes are inherently stochastic, creating molecular fluctuations in otherwise identical cells. Such "noise" is widespread but has proven difficult to analyze because most systems are sparsely characterized at the single cell level and because nonlinear stochastic models are analytically intractable. Here, we exactly relate average abundances, lifetimes, step sizes, and covariances for any pair of components in complex stochastic reaction systems even when the dynamics of other components are left unspecified. Using basic mathematical inequalities, we then establish bounds for whole classes of systems. These bounds highlight fundamental trade-offs that show how efficient assembly processes must invariably exhibit large fluctuations in subunit levels and how eliminating fluctuations in one cellular component requires creating heterogeneity in another.

  8. Stochastic sensitivity analysis of the variability of dynamics and transition to chaos in the business cycles model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana

    2018-01-01

    A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait and the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.

  9. 3D aquifer characterization using stochastic streamline calibration

    NASA Astrophysics Data System (ADS)

    Jang, Minchul

    2007-03-01

    In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Following the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by multiplication with a factor chosen to match the flow and transport properties of that streamline. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work of Agarwal et al. because of the large modifications made along streamlines to match production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model. We therefore expect the proposed approach to be applicable to the construction of aquifer models and the forecasting of the aquifer performances of interest.

  10. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com

    2014-03-15

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and the noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful in the engineering applications of GMF.

  11. Dual-scale topology optoelectronic processor.

    PubMed

    Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H

    1991-12-15

    The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
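
    The generalization of multiply-and-accumulate mentioned above can be pictured in software as a matrix-vector product parameterized by user-supplied "multiply" and "add" operations (a semiring-style abstraction). The sketch below is only a conceptual illustration of that idea, not a model of the D-STOP hardware.

    ```python
    from functools import reduce

    def generalized_matvec(matrix, vector, combine, accumulate, identity):
        """Matrix-'vector' product with user-supplied 'multiply' and 'add' operations."""
        return [reduce(accumulate,
                       (combine(a_ij, x_j) for a_ij, x_j in zip(row, vector)),
                       identity)
                for row in matrix]

    A = [[1, 5, 3],
         [7, 2, 9]]
    x = [4, 8, 6]

    # Ordinary linear algebra: multiply + sum.
    print(generalized_matvec(A, x, lambda a, b: a * b, lambda a, b: a + b, 0))
    # (min, +) "tropical" variant, e.g. for shortest-path style relaxations.
    print(generalized_matvec(A, x, lambda a, b: a + b, min, float("inf")))
    ```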

  12. Monotonically improving approximate answers to relational algebra queries

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth P.; Liu, J. W. S.

    1989-01-01

    We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations, together with a partial order for comparing them, is developed. Relational operators which operate on and return approximate relations are defined.
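
    A toy sketch of the certain/possible-tuples idea is given below: an approximate relation is bounded by a set of tuples known to be in the answer and a superset of tuples that might be, selection refines both bounds, and the partial order compares approximations. The representation and names are illustrative assumptions, not the paper's formal model.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ApproxRelation:
        """An approximate relation: tuples certainly in the answer and tuples possibly in it."""
        certain: set = field(default_factory=set)
        possible: set = field(default_factory=set)   # superset of the exact answer

        def select(self, pred):
            # Selection applies to both bounds; the result is again an approximation.
            return ApproxRelation({t for t in self.certain if pred(t)},
                                  {t for t in self.possible if pred(t)})

        def refines(self, other):
            # Partial order: more certain tuples and fewer possible tuples = better answer.
            return self.certain >= other.certain and self.possible <= other.possible

    r0 = ApproxRelation(certain={("alice", 30)},
                        possible={("alice", 30), ("bob", 41), ("carol", 27)})
    r1 = ApproxRelation(certain={("alice", 30), ("bob", 41)},
                        possible={("alice", 30), ("bob", 41)})
    print(r1.refines(r0))                          # True: r1 is a monotone improvement
    print(r0.select(lambda t: t[1] > 28).certain)  # selection on the approximation
    ```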

  13. Vortex algebra by multiply cascaded four-wave mixing of femtosecond optical beams.

    PubMed

    Hansinger, Peter; Maleshkov, Georgi; Garanovich, Ivan L; Skryabin, Dmitry V; Neshev, Dragomir N; Dreischuh, Alexander; Paulus, Gerhard G

    2014-05-05

    Experiments performed with different vortex pump beams show for the first time the algebra of the vortex topological charge cascade that evolves in the process of nonlinear wave mixing of optical vortex beams in Kerr media, due to the competition of four-wave mixing with self- and cross-phase modulation. This leads to the coherent generation of complex singular beams within a spectral bandwidth larger than 200 nm. Our experimental results are in good agreement with frequency-domain numerical calculations that describe the newly generated spectral satellites.

  14. Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories

    NASA Astrophysics Data System (ADS)

    Cheng, Tao; Huang, Hua-Lin; Yang, Yuping

    2016-01-01

    By viewing Clifford algebras as algebras in some suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well known results about Clifford algebras and to generalize them. Along the same line, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and extend the results of Albuquerque, Majid and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a conceptual and simpler manner.

  15. A Functional Central Limit Theorem for the Becker-Döring Model

    NASA Astrophysics Data System (ADS)

    Sun, Wen

    2018-04-01

    We investigate the fluctuations of the stochastic Becker-Döring model of polymerization when the initial size of the system converges to infinity. A functional central limit theorem is proved for the vector of the numbers of polymers of a given size. It is shown that the stochastic process associated with the fluctuations converges to the strong solution of an infinite-dimensional stochastic differential equation (SDE) in a Hilbert space. We also prove that, at equilibrium, the solution of this SDE is a Gaussian process. The proofs are based on a specific representation of the evolution equations, the introduction of a convenient Hilbert space, and several technical estimates to control the fluctuations, especially of the first coordinate, which interacts with all components of the infinite-dimensional vector representing the state of the process.

  16. Predicting the process of extinction in experimental microcosms and accounting for interspecific interactions in single-species time series

    PubMed Central

    Ferguson, Jake M; Ponciano, José M

    2014-01-01

    Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. PMID:24304946

  17. Entropy production in mesoscopic stochastic thermodynamics: nonequilibrium kinetic cycles driven by chemical potentials, temperatures, and mechanical forces

    NASA Astrophysics Data System (ADS)

    Qian, Hong; Kjelstrup, Signe; Kolomeisky, Anatoly B.; Bedeaux, Dick

    2016-04-01

    Nonequilibrium thermodynamics (NET) investigates processes in systems out of global equilibrium. On a mesoscopic level, it provides a statistical dynamic description of various complex phenomena such as chemical reactions, ion transport, diffusion, thermochemical, thermomechanical and mechanochemical fluxes. In the present review, we introduce a mesoscopic stochastic formulation of NET by analyzing entropy production in several simple examples. The fundamental role of nonequilibrium steady-state cycle kinetics is emphasized. The statistical mechanics of Onsager’s reciprocal relations in this context is elucidated. Chemomechanical, thermomechanical, and enzyme-catalyzed thermochemical energy transduction processes are discussed. It is argued that mesoscopic stochastic NET in phase space provides a rigorous mathematical basis of fundamental concepts needed for understanding complex processes in chemistry, physics and biology. This theory is also relevant for nanoscale technological advances.

  18. Investigation of the stochastic nature of temperature and humidity for energy management

    NASA Astrophysics Data System (ADS)

    Hadjimitsis, Evanthis; Demetriou, Evangelos; Sakellari, Katerina; Tyralis, Hristos; Iliopoulou, Theano; Koutsoyiannis, Demetris

    2017-04-01

    Atmospheric temperature and dew point, in addition to their role in atmospheric processes, influence the management of energy systems since they strongly affect energy demand and production. Both temperature and humidity depend on the climate conditions and the geographical location. In this context, we analyze numerous observations from around the globe and investigate the long-term behaviour and periodicities of the temperature and humidity processes. We also present a parsimonious stochastic double-cyclostationary model for these processes, apply it to an island in the Aegean Sea, and investigate its link to energy management. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  19. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times.

    PubMed

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering both the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages over HPSO and PSO+SA in solving the proposed model. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with less investment when the new approach is applied to existing systems.
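
    At the core of NSGA-II style methods is the Pareto dominance comparison between objective vectors, here (makespan, total energy consumption). The sketch below shows that comparison on a few hypothetical schedule evaluations; it is not the paper's robust counterpart formulation.

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization of both objectives)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Return the non-dominated subset of (makespan, energy) pairs."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (makespan [min], energy [kWh]) evaluations of candidate schedules.
    candidates = [(120, 95), (110, 130), (150, 80), (110, 90), (140, 85)]
    print(pareto_front(candidates))   # -> [(150, 80), (110, 90), (140, 85)]
    ```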

  20. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times

    PubMed Central

    Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering both the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages over HPSO and PSO+SA in solving the proposed model. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with less investment when the new approach is applied to existing systems. PMID:27907163

  1. Chemical event chain model of coupled genetic oscillators.

    PubMed

    Jörg, David J; Morelli, Luis G; Jülicher, Frank

    2018-03-01

    We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.

  2. Stochastic growth logistic model with aftereffect for batch fermentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah

    2014-06-19

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
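
    As a reading aid, the sketch below applies the Milstein scheme to a plain stochastic logistic SDE, dX = rX(1 - X/K)dt + σX dW, without the aftereffect (delay) term used in the paper; the parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def milstein_logistic(x0, r, K, sigma, dt, n_steps, rng):
        """Milstein scheme for dX = r X (1 - X/K) dt + sigma X dW."""
        x = np.empty(n_steps + 1)
        x[0] = x0
        for n in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt))
            drift = r * x[n] * (1.0 - x[n] / K)
            diff = sigma * x[n]
            # Milstein correction: 0.5 * b * b' * (dW^2 - dt) with b(x) = sigma * x.
            x[n + 1] = x[n] + drift * dt + diff * dw \
                       + 0.5 * sigma * diff * (dw ** 2 - dt)
        return x

    path = milstein_logistic(x0=0.05, r=0.5, K=1.0, sigma=0.1, dt=0.01,
                             n_steps=2000, rng=np.random.default_rng(3))
    print(path[::500])  # growth toward the carrying capacity with stochastic fluctuations
    ```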

  3. Chemical event chain model of coupled genetic oscillators

    NASA Astrophysics Data System (ADS)

    Jörg, David J.; Morelli, Luis G.; Jülicher, Frank

    2018-03-01

    We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.

  4. Stochastic growth logistic model with aftereffect for batch fermentation process

    NASA Astrophysics Data System (ADS)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md

    2014-06-01

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.

  5. Analytical pricing formulas for hybrid variance swaps with regime-switching

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun

    2017-11-01

    The problem of pricing discretely sampled variance swaps under stochastic volatility, stochastic interest rates and regime-switching is considered in this paper. The Heston stochastic volatility model structure is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to undergo transitions following a continuous, observable Markov chain process. This hybrid model can be used to illustrate certain macroeconomic conditions, for example the changing phases of business cycles. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.

  6. Quantum Bio-Informatics IV

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori

    2011-01-01

    The QP-DYN algorithms / L. Accardi, M. Regoli and M. Ohya -- Study of transcriptional regulatory network based on Cis module database / S. Akasaka ... [et al.] -- On Lie group-Lie algebra correspondences of unitary groups in finite von Neumann algebras / H. Ando, I. Ojima and Y. Matsuzawa -- On a general form of time operators of a Hamiltonian with purely discrete spectrum / A. Arai -- Quantum uncertainty and decision-making in game theory / M. Asano ... [et al.] -- New types of quantum entropies and additive information capacities / V. P. Belavkin -- Non-Markovian dynamics of quantum systems / D. Chruscinski and A. Kossakowski -- Self-collapses of quantum systems and brain activities / K.-H. Fichtner ... [et al.] -- Statistical analysis of random number generators / L. Accardi and M. Gabler -- Entangled effects of two consecutive pairs in residues and its use in alignment / T. Ham, K. Sato and M. Ohya -- The passage from digital to analogue in white noise analysis and applications / T. Hida -- Remarks on the degree of entanglement / D. Chruscinski ... [et al.] -- A completely discrete particle model derived from a stochastic partial differential equation by point systems / K.-H. Fichtner, K. Inoue and M. Ohya -- On quantum algorithm for exptime problem / S. Iriyama and M. Ohya -- On sufficient algebraic conditions for identification of quantum states / A. Jamiolkowski -- Concurrence and its estimations by entanglement witnesses / J. Jurkowski -- Classical wave model of quantum-like processing in brain / A. Khrennikov -- Entanglement mapping vs. quantum conditional probability operator / D. Chruscinski ... [et al.] -- Constructing multipartite entanglement witnesses / M. Michalski -- On Kadison-Schwarz property of quantum quadratic operators on M[symbol](C) / F. Mukhamedov and A. Abduganiev -- On phase transitions in quantum Markov chains on Cayley Tree / L. Accardi, F. Mukhamedov and M. Saburov -- Space(-time) emergence as symmetry breaking effect / I. Ojima -- Use of cryptographic ideas to interpret biological phenomena (and vice versa) / M. Regoli -- Discrete approximation to operators in white noise analysis / Si Si -- Bogoliubov type equations via infinite-dimensional equations for measures / V. V. Kozlov and O. G. Smolyanov -- Analysis of several categorical data using measure of proportional reduction in variation / K. Yamamoto ... [et al.] -- The electron reservoir hypothesis for two-dimensional electron systems / K. Yamada ... [et al.] -- On the correspondence between Newtonian and functional mechanics / E. V. Piskovskiy and I. V. Volovich -- Quantile-quantile plots: An approach for the inter-species comparison of promoter architecture in eukaryotes / K. Feldmeier ... [et al.] -- Entropy type complexities in quantum dynamical processes / N. Watanabe -- A fair sampling test for Ekert protocol / G. Adenier, A. Yu. Khrennikov and N. Watanabe -- Brownian dynamics simulation of macromolecule diffusion in a protocell / T. Ando and J. Skolnick -- Signaling network of environmental sensing and adaptation in plants: Key roles of calcium ion / K. Kuchitsu and T. Kurusu -- NetzCope: A tool for displaying and analyzing complex networks / M. J. Barber, L. Streit and O. Strogan -- Study of HIV-1 evolution by coding theory and entropic chaos degree / K. Sato -- The prediction of botulinum toxin structure based on in silico and in vitro analysis / T. Suzuki and S. Miyazaki -- On the mechanism of D-wave high T[symbol] superconductivity by the interplay of Jahn-Teller physics and Mott physics / H. Ushio, S. Matsuno and H. Kamimura.

  7. Slow-fast stochastic diffusion dynamics and quasi-stationarity for diploid populations with varying size.

    PubMed

    Coron, Camille

    2016-01-01

    We are interested in the long-time behavior of a diploid population with sexual reproduction and randomly varying population size, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a 3-dimensional birth-and-death process with competition, weak cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter K that goes to infinity, following a large population assumption. When the individual birth and natural death rates are of order K, the sequence of stochastic processes indexed by K converges toward a new slow-fast dynamics with variable population size. We indeed prove the convergence toward 0 of a fast variable giving the deviation of the population from quasi Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a 2-dimensional diffusion process that reaches (0,0) almost surely in finite time. The population size and the proportion of a given allele converge toward a Wright-Fisher diffusion with stochastically varying population size and diploid selection. We emphasize differences between haploid and diploid populations due to the stochastic variability of the population size. Using a non-trivial change of variables, we study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular we prove that this diffusion, starting from any non-trivial state and conditioned on not hitting (0,0), admits a unique quasi-stationary distribution. We give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches.

  8. Introduction to Focus Issue: nonlinear and stochastic physics in biology.

    PubMed

    Bahar, Sonya; Neiman, Alexander B; Jung, Peter; Kurths, Jürgen; Schimansky-Geier, Lutz; Showalter, Kenneth

    2011-12-01

    Frank Moss was a leading figure in the study of nonlinear and stochastic processes in biological systems. His work, particularly in the area of stochastic resonance, has been highly influential to the interdisciplinary scientific community. This Focus Issue pays tribute to Moss with articles that describe the most recent advances in the field he helped to create. In this Introduction, we review Moss's seminal scientific contributions and introduce the articles that make up this Focus Issue.

  9. Oscillatory regulation of Hes1: Discrete stochastic delay modelling and simulation.

    PubMed

    Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai

    2006-09-08

    Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
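
    A minimal sketch of a delayed stochastic simulation step is given below: reactions are selected as in Gillespie's direct method, but the product of the delayed channel is released only after a fixed delay via a pending-event queue. The two-channel birth/decay model and all constants are illustrative assumptions, not the Hes1 network of the paper.

    ```python
    import heapq, math, random

    def delayed_ssa(t_end, delay, rng):
        """Gillespie direct method with one delayed production channel."""
        x = 0                      # copy number of the product species
        t, pending = 0.0, []       # pending: min-heap of release times for delayed products
        k_prod, k_deg = 2.0, 0.1   # production and degradation rates (illustrative)
        while t < t_end:
            a = [k_prod, k_deg * x]                 # propensities
            a0 = sum(a)
            dt = math.inf if a0 == 0 else -math.log(rng.random()) / a0
            # Release any delayed product scheduled before the next reaction fires.
            if pending and pending[0] <= t + dt:
                t = heapq.heappop(pending)
                x += 1
                continue
            t += dt
            if rng.random() * a0 < a[0]:
                heapq.heappush(pending, t + delay)  # delayed production (e.g. transcription)
            else:
                x -= 1                              # instantaneous degradation
        return x

    print(delayed_ssa(t_end=500.0, delay=20.0, rng=random.Random(4)))
    ```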

  10. CERENA: ChEmical REaction Network Analyzer--A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics.

    PubMed

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed; however, tools to automatically generate the descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduce CERENA, a toolbox for the analysis of stochastic chemical kinetics using approximations of the Chemical Master Equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass-action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/.

  11. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

    The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model rules out conventional sampling-plan strategies such as Latin hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed, in which the most space-filling subset of the acquired data with ? sample points that best describes the dynamic behaviour of the system under study is selected as the training data.
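
    The idea of selecting the most space-filling subset of previously acquired samples can be sketched as a greedy maximin selection, as below; the synthetic two-dimensional samples and the subset size are illustrative assumptions, and this is not necessarily the authors' exact criterion.

    ```python
    import numpy as np

    def greedy_maximin_subset(points, k, rng):
        """Greedily pick k points that maximise the minimum distance to already-chosen points."""
        points = np.asarray(points, dtype=float)
        chosen = [rng.integers(len(points))]                 # random seed point
        for _ in range(k - 1):
            d = np.min(np.linalg.norm(points[:, None, :] - points[chosen][None, :, :],
                                      axis=-1), axis=1)      # distance to nearest chosen point
            chosen.append(int(np.argmax(d)))                 # farthest point joins the subset
        return points[chosen]

    rng = np.random.default_rng(5)
    # Pretend these are (relative displacement, relative velocity) samples from a prior run.
    samples = rng.normal(size=(5000, 2))
    training = greedy_maximin_subset(samples, k=50, rng=rng)
    print(training.shape)   # (50, 2) space-filling training set for the surrogate model
    ```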

  12. CERENA: ChEmical REaction Network Analyzer—A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics

    PubMed Central

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J.; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed, however, tools to automatically generate the descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduced CERENA, a toolbox for the analysis of stochastic chemical kinetics using Approximations of the Chemical Master Equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/. PMID:26807911

  13. Conference on Stochastic Processes and their Applications (16th) Held in Stanford, California on August 17-21, 1987.

    DTIC Science & Technology

    1987-08-01

    ESTIMATION FOR STOCHASTIC PROCESSES, by C. C. Heyde, Australian National University, Canberra, Australia. ABSTRACT: Optimality is a widely and loosely used...

  14. Simplified model of statistically stationary spacecraft rotation and associated induced gravity environments

    NASA Technical Reports Server (NTRS)

    Fichtl, G. H.; Holland, R. L.

    1978-01-01

    A stochastic model of spacecraft motion was developed based on the assumption that the net torque vector due to crew activity and rocket thruster firings is a statistically stationary Gaussian vector process. The process had zero ensemble mean value, and the components of the torque vector were mutually stochastically independent. The linearized rigid-body equations of motion were used to derive the autospectral density functions of the components of the spacecraft rotation vector. The cross-spectral density functions of the components of the rotation vector vanish for all frequencies so that the components of rotation were mutually stochastically independent. The autospectral and cross-spectral density functions of the induced gravity environment imparted to scientific apparatus rigidly attached to the spacecraft were calculated from the rotation rate spectral density functions via linearized inertial frame to body-fixed principal axis frame transformation formulae. The induced gravity process was a Gaussian one with zero mean value. Transformation formulae were used to rotate the principal axis body-fixed frame to which the rotation rate and induced gravity vector were referred to a body-fixed frame in which the components of the induced gravity vector were stochastically independent. Rice's theory of exceedances was used to calculate expected exceedance rates of the components of the rotation and induced gravity vector processes.
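
    For reference, Rice's formula for a stationary zero-mean Gaussian process gives the mean rate of upcrossings of a level a as ν(a) = (σ̇ / 2πσ) exp(-a² / 2σ²), where σ and σ̇ are the standard deviations of the process and its derivative. The sketch below checks the formula against a synthetic random-phase signal; the spectrum, level, and duration are illustrative assumptions, not the spacecraft model.

    ```python
    import numpy as np

    def rice_rate(level, sigma, sigma_dot):
        """Rice's formula: mean upcrossing rate of a level for a stationary Gaussian process."""
        return sigma_dot / (2.0 * np.pi * sigma) * np.exp(-level**2 / (2.0 * sigma**2))

    # Synthetic stationary signal: many random-phase sinusoids (approximately Gaussian by CLT).
    rng = np.random.default_rng(6)
    omegas = rng.uniform(0.5, 5.0, size=50)          # rad/s (illustrative spectrum)
    amps = np.full(50, 0.2)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=50)
    t = np.arange(0.0, 2000.0, 0.01)
    x = np.zeros_like(t)
    for w, A, ph in zip(omegas, amps, phases):
        x += A * np.cos(w * t + ph)

    sigma = np.sqrt(np.sum(amps**2) / 2.0)                 # process standard deviation
    sigma_dot = np.sqrt(np.sum((omegas * amps)**2) / 2.0)  # derivative standard deviation
    level = 1.5
    empirical = np.sum((x[:-1] < level) & (x[1:] >= level)) / t[-1]
    print(empirical, rice_rate(level, sigma, sigma_dot))   # the two rates should be close
    ```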

  15. A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.

    PubMed

    Mostafa, Hesham; Cauwenberghs, Gert

    2018-06-01

    Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.

  16. Optimization under variability and uncertainty: a case study for NOx emissions control for a gasification system.

    PubMed

    Chen, Jianjun; Frey, H Christopher

    2004-12-15

    Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
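
    The "value of collecting additional information" quoted above can be illustrated by the expected value of perfect information: the gap between choosing the best design after the uncertainty is resolved and committing to a single design up front. The two designs, the cost model, and the distribution below are purely hypothetical, not the IGCC case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Uncertain cost-driving parameter (e.g. a removal-efficiency factor); purely illustrative.
    theta = rng.lognormal(mean=0.0, sigma=0.4, size=100_000)

    # Annualized cost ($/yr) of two hypothetical designs as a function of theta.
    cost = {"design_A": 1.0e6 + 2.0e5 * theta,
            "design_B": 0.8e6 + 4.5e5 * theta}

    # Commit-now view (stochastic programming): pick the design with the best expected cost.
    best_committed = min(c.mean() for c in cost.values())

    # Perfect-information view: pick the cheaper design in each scenario, then average.
    best_per_scenario = np.minimum(cost["design_A"], cost["design_B"]).mean()

    evpi = best_committed - best_per_scenario   # expected value of resolving the uncertainty
    print(f"EVPI ~ ${evpi:,.0f} per year")
    ```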

  17. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 1: theoretical development

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and its large number of simulations act to its disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.

  18. Dynamical systems defined on infinite dimensional lie algebras of the ''current algebra'' or ''Kac-Moody'' type

    NASA Astrophysics Data System (ADS)

    Hermann, Robert

    1982-07-01

    Recent work by Morrison, Marsden, and Weinstein has drawn attention to the possibility of utilizing the cosymplectic structure of the dual of the Lie algebra of certain infinite dimensional Lie groups to study hydrodynamical and plasma systems. This paper treats certain models arising in elementary particle physics, considered by Lee, Weinberg, and Zumino; Sugawara; Bardacki, Halpern, and Frishman; Hermann; and Dolan. The Lie algebras involved are associated with the ''current algebras'' of Gell-Mann. This class of Lie algebras contains certain of the algebras that are called ''Kac-Moody algebras'' in the recent mathematics and mathematical physics literature.

  19. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces this stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.

  20. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, both in clean and noisy environments.
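
    A highly simplified sketch of the discrete-reconstruction idea behind DART-type methods is given below: Kaczmarz (ART) sweeps on the projection system Ax = p are interleaved with snapping pixels that lie close to an admissible grey value, so that later iterations effectively solve for fewer unknowns. The tiny 2x2 test problem and the thresholds are illustrative assumptions, not the ADART algorithm itself.

    ```python
    import numpy as np

    def art_sweep(A, p, x, relax=0.5):
        """One relaxed Kaczmarz (ART) sweep over all projection equations A x = p."""
        for a_i, p_i in zip(A, p):
            x += relax * (p_i - a_i @ x) / (a_i @ a_i) * a_i
        return x

    def discrete_art(A, p, grey_values, n_outer=20, tol=0.1):
        """ART interleaved with snapping 'confident' pixels to admissible grey values."""
        x = np.zeros(A.shape[1])
        for _ in range(n_outer):
            for _ in range(5):
                x = art_sweep(A, p, x)
            nearest = grey_values[np.argmin(np.abs(x[:, None] - grey_values[None, :]), axis=1)]
            confident = np.abs(x - nearest) < tol      # pixels close to a grey level
            x[confident] = nearest[confident]          # snap them; others stay free (DART-like)
        return x

    # Tiny toy problem: a 2x2 binary "image" observed through row, column and diagonal sums.
    truth = np.array([1.0, 0.0, 0.0, 1.0])
    A = np.array([[1, 1, 0, 0],      # row sums
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],      # column sums
                  [0, 1, 0, 1],
                  [1, 0, 0, 1]],     # main diagonal
                 dtype=float)
    p = A @ truth
    print(discrete_art(A, p, grey_values=np.array([0.0, 1.0])))
    ```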
