Sample records for simple stochastic process

  1. Simple and Hierarchical Models for Stochastic Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1993-01-01

    Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
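
    As a quick illustration of the quantities such a simple Poisson model yields (expected count, inter-occurrence times, waiting time for the nth misgrading), the sketch below simulates a homogeneous Poisson misgrading process. It is not taken from the paper; the rate lam, horizon T, and index n are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        lam, T, n = 0.2, 100.0, 5        # assumed misgrading rate, grading horizon, index of interest

        # Simulate many realizations of a homogeneous Poisson process on [0, T].
        reps = 10_000
        counts = rng.poisson(lam * T, size=reps)                  # number of misgradings
        gaps = rng.exponential(1.0 / lam, size=reps)              # inter-occurrence times
        wait_n = rng.gamma(shape=n, scale=1.0 / lam, size=reps)   # waiting time for the n-th misgrading

        print("E[count]  sim=%.2f  theory=%.2f" % (counts.mean(), lam * T))
        print("E[gap]    sim=%.2f  theory=%.2f" % (gaps.mean(), 1.0 / lam))
        print("E[wait_n] sim=%.2f  theory=%.2f" % (wait_n.mean(), n / lam))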

  2. Information transfer with rate-modulated Poisson processes: a simple model for nonstationary stochastic resonance.

    PubMed

    Goychuk, I

    2001-08-01

    Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
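
    A rate-modulated (inhomogeneous) Poisson process of the kind used above can be simulated by Lewis-Shedler thinning. The sketch below is illustrative only; the baseline rate, signal shape, and duration are assumptions rather than values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def rate(t):
            # Assumed signal-modulated spiking rate (Hz): baseline plus a weak transient signal.
            return 20.0 + 5.0 * np.exp(-((t - 0.5) / 0.1) ** 2)

        T, r_max = 1.0, 25.0                  # duration (s) and an upper bound on the rate

        # Thinning: draw candidate spikes at rate r_max, keep each with probability rate(t)/r_max.
        n_cand = rng.poisson(r_max * T)
        cand = np.sort(rng.uniform(0.0, T, n_cand))
        spikes = cand[rng.uniform(size=n_cand) < rate(cand) / r_max]

        ts = np.linspace(0.0, T, 1001)
        print(f"{spikes.size} spikes generated; expected ≈ {rate(ts).mean() * T:.1f}")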

  3. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  4. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
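
    For intuition, the following sketch runs a generic finite multiplicative cascade and reports its rank-frequency values. It is a stand-in for, not a reproduction of, the authors' process; the number of levels, branching factor, and total mass are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def cascade(levels=12, branches=2, total=1_000_000.0):
            # Generic finite multiplicative cascade: repeatedly split mass with random weights.
            masses = np.array([total])
            for _ in range(levels):
                w = rng.dirichlet(np.ones(branches), size=masses.size)  # random split proportions
                masses = (masses[:, None] * w).ravel()
            return np.sort(masses)[::-1]                                # frequencies ordered by rank

        freq = cascade()
        # On a log-log scale the rank-frequency curve is typically concave, Zipf-like at the top.
        for r in (1, 10, 100, 1000):
            print(f"rank {r:5d}  frequency {freq[r - 1]:12.2f}")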

  5. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621

  6. Simulations of Technology-Induced and Crisis-Led Stochastic and Chaotic Fluctuations in Higher Education Processes: A Model and a Case Study for Performance and Expected Employment

    ERIC Educational Resources Information Center

    Ahmet, Kara

    2015-01-01

    This paper presents a simple model of the provision of higher educational services that considers and exemplifies nonlinear, stochastic, and potentially chaotic processes. I use the methods of system dynamics to simulate these processes in the context of a particular sociologically interesting case, namely that of the Turkish higher education…

  7. Data-driven monitoring for stochastic systems and its application on batch process

    NASA Astrophysics Data System (ADS)

    Yin, Shen; Ding, Steven X.; Haghani Abandan Sari, Adel; Hao, Haiyang

    2013-07-01

    Batch processes are characterised by a prescribed processing of raw materials into final products for a finite duration and play an important role in many industrial sectors due to their low-volume, high-value products. Process dynamics and stochastic disturbances are inherent characteristics of batch processes, which make monitoring of batch processes a challenging problem in practice. To solve this problem, a subspace-aided data-driven approach is presented in this article for batch process monitoring. The advantages of the proposed approach lie in its simple form and its ability to deal with stochastic disturbances and process dynamics existing in the process. Kernel density estimation, which serves as a non-parametric way of estimating the probability density function, is utilised for threshold calculation. An industrial benchmark of fed-batch penicillin production is finally utilised to verify the effectiveness of the proposed approach.
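
    The kernel-density threshold step described above can be sketched as follows. The monitoring statistic here is synthetic chi-square data standing in for the subspace-aided residual statistic, and the 99% control limit is an assumed confidence level; the subspace identification itself is not reproduced.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)

        # Stand-in for a monitoring statistic computed on fault-free training batches.
        stat_train = rng.chisquare(df=5, size=2000)

        kde = gaussian_kde(stat_train)                       # non-parametric density estimate
        grid = np.linspace(0.0, stat_train.max() * 1.5, 4000)
        cdf = np.cumsum(kde(grid))
        cdf /= cdf[-1]
        threshold = grid[np.searchsorted(cdf, 0.99)]         # 99% control limit

        stat_new = 18.0                                      # statistic from a new batch (assumed)
        print(f"threshold={threshold:.2f}  alarm={stat_new > threshold}")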

  8. Analytical approximations for spatial stochastic gene expression in single cells and tissues

    PubMed Central

    Smith, Stephen; Cianci, Claudia; Grima, Ramon

    2016-01-01

    Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686

  9. Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods

    NASA Astrophysics Data System (ADS)

    Park, Brian T.; Petrosian, Vahe

    1996-03-01

    Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and the Itô stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference method.
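
    The stochastic simulation method mentioned above rests on the correspondence between a Fokker-Planck equation and an Itô SDE. The sketch below integrates such an SDE by Euler-Maruyama for assumed drift and diffusion coefficients (an Ornstein-Uhlenbeck caricature), not the coefficients used by the authors.

        import numpy as np

        rng = np.random.default_rng(4)

        # For a Fokker-Planck equation  df/dt = -d(A f)/dE + d^2(D f)/dE^2  the matching Ito SDE is
        #   dE = A(E) dt + sqrt(2 D(E)) dW.
        # Illustrative (assumed) coefficients:
        A = lambda E: -E           # systematic drift
        D = lambda E: 0.5          # constant diffusion coefficient

        def euler_maruyama(E0=1.0, dt=1e-3, steps=5_000, paths=5_000):
            E = np.full(paths, E0)
            for _ in range(steps):
                dW = rng.normal(0.0, np.sqrt(dt), size=paths)
                E = E + A(E) * dt + np.sqrt(2.0 * D(E)) * dW
            return E

        samples = euler_maruyama()
        print(f"stationary mean ≈ {samples.mean():.3f}, variance ≈ {samples.var():.3f} (theory: 0, 0.5)")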

  10. Peer pressure and Generalised Lotka Volterra models

    NASA Astrophysics Data System (ADS)

    Richmond, Peter; Sabatelli, Lorenzo

    2004-12-01

    We develop a novel approach to peer pressure and Generalised Lotka-Volterra (GLV) models that builds on the development of a simple Langevin equation that characterises stochastic processes. We generalise the approach to stochastic equations that model interacting agents. The agent models recently advocated by Marsili and Solomon are motivated. Using a simple change of variable, we show that the peer pressure model (similar to the one introduced by Marsili) and the wealth dynamics model of Solomon may be (almost) mapped one into the other. This may help shed light on the (apparently) different wealth dynamics described by GLV and the Marsili-like peer pressure models.

  11. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part II: Stochastic Models. Part II, Chapter 4.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…

  12. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
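
    The core idea of stochastic computing, representing values as pulse occurrence probabilities so that an AND gate multiplies them, can be sketched as follows. The weight, input, and stream length are illustrative assumptions; the DMNN architecture itself is not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)

        def to_stream(p, n_bits):
            # Encode a value p in [0, 1] as a pseudorandom pulse (Bernoulli) sequence.
            return rng.random(n_bits) < p

        w, x, n_bits = 0.7, 0.4, 4096         # assumed synaptic weight, input, stream length
        product_stream = to_stream(w, n_bits) & to_stream(x, n_bits)   # AND gate multiplies rates

        estimate = product_stream.mean()      # average pulse occurrence rate
        print(f"stochastic product ≈ {estimate:.3f}  (exact {w * x:.3f})")
        # The estimate is a random variable; its spread shrinks roughly like 1/sqrt(n_bits),
        # the kind of accuracy/variance trade-off the paper's statistical model quantifies.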

  13. Dynamic system classifier.

    PubMed

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A

    2016-07-01

    Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.

  14. Entropy production in mesoscopic stochastic thermodynamics: nonequilibrium kinetic cycles driven by chemical potentials, temperatures, and mechanical forces

    NASA Astrophysics Data System (ADS)

    Qian, Hong; Kjelstrup, Signe; Kolomeisky, Anatoly B.; Bedeaux, Dick

    2016-04-01

    Nonequilibrium thermodynamics (NET) investigates processes in systems out of global equilibrium. On a mesoscopic level, it provides a statistical dynamic description of various complex phenomena such as chemical reactions, ion transport, diffusion, thermochemical, thermomechanical and mechanochemical fluxes. In the present review, we introduce a mesoscopic stochastic formulation of NET by analyzing entropy production in several simple examples. The fundamental role of nonequilibrium steady-state cycle kinetics is emphasized. The statistical mechanics of Onsager’s reciprocal relations in this context is elucidated. Chemomechanical, thermomechanical, and enzyme-catalyzed thermochemical energy transduction processes are discussed. It is argued that mesoscopic stochastic NET in phase space provides a rigorous mathematical basis of fundamental concepts needed for understanding complex processes in chemistry, physics and biology. This theory is also relevant for nanoscale technological advances.

  15. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
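
    A minimal way to see the overall scaling exponent discussed above is to fit the growth of the ensemble width with time. The sketch below does this for an ordinary random walk and for a walk with nonstationary (Moses-type) increment scaling; it uses a simplified width measure and assumed increment laws, not the paper's full set of estimators.

        import numpy as np

        rng = np.random.default_rng(6)

        def hurst_from_width(increments):
            # Estimate the overall scaling exponent H from the growth of the ensemble width with t.
            paths = np.cumsum(increments, axis=1)          # shape (realizations, time)
            t = np.arange(1, paths.shape[1] + 1)
            width = paths.std(axis=0)
            slope, _ = np.polyfit(np.log(t[10:]), np.log(width[10:]), 1)
            return slope

        ordinary = rng.normal(size=(2000, 1000))                              # i.i.d. increments -> H = 1/2
        moses = rng.normal(size=(2000, 1000)) * (np.arange(1, 1001) ** 0.25)  # increment width grows with t

        print(f"H for an ordinary random walk: {hurst_from_width(ordinary):.2f}")
        print(f"H with Moses-type increment scaling: {hurst_from_width(moses):.2f}  (expected ≈ 0.75)")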

  16. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  17. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  18. Unidirectional random growth with resetting

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Néda, Z.

    2018-06-01

    We review stochastic processes without detailed balance condition and derive their H-theorem. We obtain stationary distributions and investigate their stability in terms of generalized entropic distances beyond the Kullback-Leibler formula. A simple stochastic model with local growth rates and direct resetting to the ground state is investigated and applied to various networks, scientific citations and Facebook popularity, hadronic yields in high energy particle reactions, income and wealth distributions, biodiversity and settlement size distributions.

  19. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
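
    A sketch of the kind of discrete-time, discrete-state model with process and observation error described above follows. The SIR-type structure, rates, and reporting probability are illustrative assumptions, not the model fitted in the paper, and no MCMC fitting is shown.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate(beta=0.3, gamma=0.1, rho=0.5, N=10_000, I0=10, steps=100):
            # Discrete-time, discrete-state SIR with process error (binomial transitions)
            # and observation error (binomial reporting). Parameter values are illustrative.
            S, I = N - I0, I0
            reported = []
            for _ in range(steps):
                p_inf = 1.0 - np.exp(-beta * I / N)           # per-susceptible infection probability
                new_inf = rng.binomial(S, p_inf)              # process error
                new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
                S, I = S - new_inf, I + new_inf - new_rec
                reported.append(rng.binomial(new_inf, rho))   # observation error: incomplete reporting
            return np.array(reported)

        cases = simulate()
        print("peak reported incidence:", cases.max(), "at step", int(cases.argmax()))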

  20. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere more prevalent is this notion than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  1. Stochastic multi-scale models of competition within heterogeneous cellular populations: Simulation methods and mean-field analysis.

    PubMed

    Cruz, Roberto de la; Guerrero, Pilar; Spill, Fabian; Alarcón, Tomás

    2016-10-21

    We propose a modelling framework to analyse the stochastic behaviour of heterogeneous, multi-scale cellular populations. We illustrate our methodology with a particular example in which we study a population with an oxygen-regulated proliferation rate. Our formulation is based on an age-dependent stochastic process. Cells within the population are characterised by their age (i.e. time elapsed since they were born). The age-dependent (oxygen-regulated) birth rate is given by a stochastic model of oxygen-dependent cell cycle progression. Once the birth rate is determined, we formulate an age-dependent birth-and-death process, which dictates the time evolution of the cell population. The population is under a feedback loop which controls its steady state size (carrying capacity): cells consume oxygen which in turn fuels cell proliferation. We show that our stochastic model of cell cycle progression allows for heterogeneity within the cell population induced by stochastic effects. Such heterogeneous behaviour is reflected in variations in the proliferation rate. Within this set-up, we have established three main results. First, we have shown that the age to the G1/S transition, which essentially determines the birth rate, exhibits a remarkably simple scaling behaviour. Besides the fact that this simple behaviour emerges from a rather complex model, this allows for a huge simplification of our numerical methodology. A further result is the observation that heterogeneous populations undergo an internal process of quasi-neutral competition. Finally, we investigated the effects of cell-cycle-phase dependent therapies (such as radiation therapy) on heterogeneous populations. In particular, we have studied the case in which the population contains a quiescent sub-population. Our mean-field analysis and numerical simulations confirm that, if the survival fraction of the therapy is too high, rescue of the quiescent population occurs. This gives rise to emergence of resistance to therapy since the rescued population is less sensitive to therapy. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Analytical Assessment for Transient Stability Under Stochastic Continuous Disturbances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, Ping; Li, Hongyu; Gan, Chun

    Here, with the growing integration of renewable power generation, plug-in electric vehicles, and other sources of uncertainty, increasing stochastic continuous disturbances are brought to power systems. The impact of stochastic continuous disturbances on power system transient stability attracts significant attention. To address this problem, this paper proposes an analytical assessment method for transient stability of multi-machine power systems under stochastic continuous disturbances. In the proposed method, a probability measure of transient stability is presented and analytically solved by stochastic averaging. Compared with the conventional method (Monte Carlo simulation), the proposed method is many orders of magnitude faster, which makes it very attractive in practice when many plans for transient stability must be compared or when transient stability must be analyzed quickly. Also, it is found that the evolution of system energy over time is almost a simple diffusion process by the proposed method, which explains the impact mechanism of stochastic continuous disturbances on transient stability in theory.

  3. Degree Distribution of Position-Dependent Ball-Passing Networks in Football Games

    NASA Astrophysics Data System (ADS)

    Narizuka, Takuma; Yamamoto, Ken; Yamazaki, Yoshihiro

    2015-08-01

    We propose a simple stochastic model describing the position-dependent ball-passing network in football (soccer) games. In this network, a player in a certain area in a divided field is a node, and a pass between two nodes corresponds to an edge. Our stochastic process model is characterized by the consecutive choice of a node depending on its intrinsic fitness. We derive an explicit expression for the degree distribution and find that the derived distribution reproduces that for actual data reasonably well.

  4. A Nondeterministic Resource Planning Model in Education

    ERIC Educational Resources Information Center

    Yoda, Koji

    1977-01-01

    Discusses a simple technique for stochastic resource planning that, when computerized, can assist educational managers in the process of quantifying future uncertainty, thereby helping them make better decisions. The example used is a school lunch program. (Author/IRT)

  5. Stochastic theory of log-periodic patterns

    NASA Astrophysics Data System (ADS)

    Canessa, Enrique

    2000-12-01

    We introduce an analytical model based on birth-death clustering processes to help in understanding the empirical log-periodic corrections to power law scaling and the finite-time singularity as reported in several domains including rupture, earthquakes, world population and financial systems. In our stochastic theory log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of co-operative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t0 is derived in terms of birth-death clustering coefficients.

  6. Simple stochastic model for El Niño with westerly wind bursts

    PubMed Central

    Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.

    2016-01-01

    Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
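
    A caricature of a two-state Markov switching-diffusion is sketched below: an activity variable follows a damped diffusion whose noise amplitude depends on a hidden two-state Markov chain. The switching rates, damping, and noise levels are assumptions, and the coupled ocean-atmosphere dynamics of the paper are not included.

        import numpy as np

        rng = np.random.default_rng(8)

        dt, steps = 0.01, 50_000
        switch_rate = {0: 0.05, 1: 0.2}      # leaving rates of states 0 (quiet) and 1 (active), assumed
        sigma = {0: 0.1, 1: 1.0}             # wind burst noise amplitude per state, assumed

        state, a = 0, 0.0
        a_path = np.empty(steps)
        for k in range(steps):
            if rng.random() < switch_rate[state] * dt:                        # Markov switching
                state = 1 - state
            a += -0.5 * a * dt + sigma[state] * np.sqrt(dt) * rng.normal()    # diffusion part
            a_path[k] = a

        print(f"overall activity std: {a_path.std():.2f} (intermittent bursts occur while the hidden state is active)")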

  7. Modified parton branching model for multi-particle production in hadronic collisions: Application to SUSY particle branching

    NASA Astrophysics Data System (ADS)

    Yuanyuan, Zhang

    The stochastic branching model of multi-particle production in high energy collisions has a theoretical basis in perturbative QCD and also successfully describes the experimental data over a wide energy range. However, over the years, little attention has been paid to branching models for supersymmetric (SUSY) particles. In this thesis, a stochastic branching model has been built to describe the evolution of pure supersymmetric particle jets. This model is a modified two-phase stochastic branching process, or more precisely a two-phase simple birth process plus a Poisson process. The general case in which the jets contain both ordinary particle jets and supersymmetric particle jets has also been investigated. We obtain the multiplicity distribution for the general case, whose expression contains a hypergeometric function. We apply this new multiplicity distribution to current experimental data for pp collisions at center-of-mass energies √s = 0.9, 2.36, and 7 TeV. The fit shows that supersymmetric particles have not participated in branching at current collision energies.

  8. Double simple-harmonic-oscillator formulation of the thermal equilibrium of a fluid interacting with a coherent source of phonons

    NASA Technical Reports Server (NTRS)

    Defacio, B.; Vannevel, Alan; Brander, O.

    1993-01-01

    A formulation is given for a collection of phonons (sound) in a fluid at non-zero temperature which uses the simple harmonic oscillator twice: once to give a stochastic thermal 'noise' process and once to generate a coherent Glauber state of phonons. Simple thermodynamic observables are calculated and the acoustic two-point function, the 'contrast', is presented. The role of 'coherence' in an equilibrium system is clarified by these results, and the simple harmonic oscillator is a key structure in both the formulation and the calculations.

  9. A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.

    PubMed

    Mostafa, Hesham; Cauwenberghs, Gert

    2018-06-01

    Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.

  10. Algorithm refinement for stochastic partial differential equations: II. Correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2005-08-10

    We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.

  11. Macroscopic Fluctuation Theory for Stationary Non-Equilibrium States

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2002-05-01

    We formulate a dynamical fluctuation theory for stationary non-equilibrium states (SNS) which is tested explicitly in stochastic models of interacting particles. In our theory a crucial role is played by the time reversed dynamics. Within this theory we derive the following results: the modification of the Onsager-Machlup theory in the SNS; a general Hamilton-Jacobi equation for the macroscopic entropy; a non-equilibrium, nonlinear fluctuation dissipation relation valid for a wide class of systems; an H theorem for the entropy. We discuss in detail two models of stochastic boundary driven lattice gases: the zero range and the simple exclusion processes. In the first model the invariant measure is explicitly known and we verify the predictions of the general theory. For the one dimensional simple exclusion process, as recently shown by Derrida, Lebowitz, and Speer, it is possible to express the macroscopic entropy in terms of the solution of a nonlinear ordinary differential equation; by using the Hamilton-Jacobi equation, we obtain a logically independent derivation of this result.

  12. Beamlets from stochastic acceleration

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Carbone, Vincenzo

    2008-09-01

    We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.

  13. An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments

    NASA Astrophysics Data System (ADS)

    Kang, Yuncheol; Prabhu, Vittal

    The science of preventing youth problems has significantly advanced in developing evidence-based prevention programs (EBPs) by using randomized clinical trials. Effective EBPs can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately, the outcomes of EBPs implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose a Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.

  14. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.

  15. Research on Process Models of Basic Arithmetic Skills, Technical Report No. 303. Psychology and Education Series - Final Report.

    ERIC Educational Resources Information Center

    Suppes, Patrick; And Others

    This report presents a theory of eye movement that accounts for main features of the stochastic behavior of eye-fixation durations and direction of movement of saccades in the process of solving arithmetic exercises of addition and subtraction. The best-fitting distribution of fixation durations with a relatively simple theoretical justification…

  16. Harmony Theory: A Mathematical Framework for Stochastic Parallel Processing.

    ERIC Educational Resources Information Center

    Smolensky, Paul

    This paper presents preliminary results of research founded on the hypothesis that in real environments there exist regularities that can be idealized as mathematical structures that are simple enough to be analyzed. The author considered three steps in analyzing the encoding of modularity of the environment. First, a general information…

  17. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    NASA Astrophysics Data System (ADS)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  18. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    PubMed

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.

  19. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusely-reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation to the input stochastic process (the extinction function of the medium) is applied. This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). Then the transport equation is solved deterministically to get a closed form for the solution as a function of x and L. So, the solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get the complete analytical averages for some interesting physical quantities, namely, reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal source of radiation are obtained and represented graphically.

  20. A simple stochastic weather generator for ecological modeling

    Treesearch

    A.G. Birt; M.R. Valdez-Vivas; R.M. Feldman; C.W. Lafon; D. Cairns; R.N. Coulson; M. Tchakerian; W. Xi; Jim Guldin

    2010-01-01

    Stochastic weather generators are useful tools for exploring the relationship between organisms and their environment. This paper describes a simple weather generator that can be used in ecological modeling projects. We provide a detailed description of methodology, and links to full C++ source code (http://weathergen.sourceforge.net) required to implement or modify...
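
    A typical chain-dependent weather generator of this kind can be sketched in a few lines. The Python sketch below (the package linked in the record is C++) uses a two-state wet/dry Markov chain, exponential rain amounts, and AR(1) temperature; all parameter values are assumed for illustration and may differ from the package's parameterization.

        import numpy as np

        rng = np.random.default_rng(9)

        def generate_weather(days=365, p_wd=0.3, p_ww=0.6, rain_mean=8.0,
                             t_mean=15.0, t_ar=0.8, t_sd=3.0):
            # Two-state Markov chain for wet/dry days, exponential rain on wet days, AR(1) temperature.
            wet, temp = False, t_mean
            out = []
            for _ in range(days):
                wet = rng.random() < (p_ww if wet else p_wd)
                rain = rng.exponential(rain_mean) if wet else 0.0
                temp = t_mean + t_ar * (temp - t_mean) + rng.normal(0.0, t_sd)
                out.append((rain, temp))
            return np.array(out)

        sim = generate_weather()
        print(f"wet-day fraction {np.mean(sim[:, 0] > 0):.2f}, mean temp {sim[:, 1].mean():.1f} °C")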

  1. Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm

    DTIC Science & Technology

    1978-09-01

    deterministic equivalent form of HIQ's problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in...1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research

  2. Toward the Darwinian transition: Switching between distributed and speciated states in a simple model of early life.

    PubMed

    Arnoldt, Hinrich; Strogatz, Steven H; Timme, Marc

    2015-01-01

    It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for stochastic processes potentially contributing to this hypothesized "Darwinian transition." The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and essentially contributed to the emergence of vertical descent and the first well-defined species in early evolution. A systematic nonlinear analysis of the stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable direction out of early collective evolution, potentially enabling the start of individuality and vertical Darwinian evolution.

  3. Computing diffusivities from particle models out of equilibrium

    NASA Astrophysics Data System (ADS)

    Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia

    2018-04-01

    A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.

  4. Stochastic noncooperative and cooperative evolutionary game strategies of a population of biological networks under natural selection.

    PubMed

    Chen, Bor-Sen; Yeh, Chin-Hsun

    2017-12-01

    We review current static and dynamic evolutionary game strategies of biological networks and discuss the lack of random genetic variations and stochastic environmental disturbances in these models. To include these factors, a population of evolving biological networks is modeled as a nonlinear stochastic biological system with Poisson-driven genetic variations and random environmental fluctuations (stimuli). To gain insight into the evolutionary game theory of stochastic biological networks under natural selection, the phenotypic robustness and network evolvability of noncooperative and cooperative evolutionary game strategies are discussed from a stochastic Nash game perspective. The noncooperative strategy can be transformed into an equivalent multi-objective optimization problem and is shown to display significantly improved network robustness to tolerate genetic variations and buffer environmental disturbances, maintaining phenotypic traits for longer than the cooperative strategy. However, the noncooperative case requires greater effort and more compromises between partly conflicting players. Global linearization is used to simplify the problem of solving nonlinear stochastic evolutionary games. Finally, a simple stochastic evolutionary model of a metabolic pathway is simulated to illustrate the procedure of solving for two evolutionary game strategies and to confirm and compare their respective characteristics in the evolutionary process. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.

    PubMed

    Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C

    2006-02-28

    We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.

  6. Community assembly of the worm gut microbiome

    NASA Astrophysics Data System (ADS)

    Gore, Jeff

    It has become increasingly clear that human health is strongly influenced by the bacteria that live within the gut, known collectively as the gut microbiome. This complex community varies tremendously between individuals, but understanding the sources that lead to this heterogeneity is challenging. To address this challenge, we are using a bottom-up approach to develop a predictive understanding of how the microbiome assembles and functions within a simple and experimentally tractable gut, the gut of the worm C. elegans. We have found that stochastic community assembly in the C. elegans intestine is sufficient to produce strong inter-worm heterogeneity in community composition. When worms are fed with two neutrally-competing fluorescently labeled bacterial strains, we observe stochastically-driven bimodality in community composition, where approximately half of the worms are dominated by each bacterial strain. A simple model incorporating stochastic colonization suggests that heterogeneity between worms is driven by the low rate at which bacteria successfully establish new intestinal colonies. We can increase this rate experimentally by feeding worms at high bacterial density; in these conditions the bimodality disappears. We have also characterized all pairwise interspecies competitions among a set of eleven bacterial species, illuminating the rules governing interspecies community assembly. These results demonstrate the potential importance of stochastic processes in bacterial community formation and suggest a role for C. elegans as a model system for ecology of host-associated communities.
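
    The role of a low colonization rate in producing bimodal compositions can be illustrated with a toy calculation. The founding-event distribution, the two-strain symmetry, and the assumption that neutral growth preserves founder proportions are simplifications introduced here, not the experimental model.

        import numpy as np

        rng = np.random.default_rng(10)

        def worm_compositions(n_worms=1000, mean_colonizers=2.0):
            # Each worm gut is founded by a small, Poisson-distributed number of colonization events,
            # each event picking one of two neutral strains at random; neutral growth then preserves
            # founder proportions. Rates are assumed for illustration.
            founders = rng.poisson(mean_colonizers, n_worms)
            founders = np.maximum(founders, 1)                # condition on colonized worms
            strain_a = rng.binomial(founders, 0.5)
            return strain_a / founders                        # fraction of strain A per worm

        frac = worm_compositions()
        print("worms dominated (>90%) by one strain:", float(np.mean((frac > 0.9) | (frac < 0.1))))
        # Raising mean_colonizers (analogous to feeding at high density) removes the bimodality.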

  7. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009).

    PubMed

    Nishiura, Hiroshi

    2011-02-16

    Real-time forecasting of epidemics, especially forecasting based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance.
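
    A Monte Carlo stand-in for the branching-process forecast described above is sketched below. Poisson offspring, the reproduction number R, and the last observed weekly count are illustrative assumptions, and the simulation quantiles replace the paper's analytical chains of conditional offspring distributions.

        import numpy as np

        rng = np.random.default_rng(11)

        def forecast(last_weekly_cases, R=1.4, weeks_ahead=4, sims=10_000):
            # Each case generates Poisson(R) cases in the next reporting interval;
            # uncertainty bounds come from Monte Carlo quantiles.
            cases = np.full(sims, last_weekly_cases)
            path = []
            for _ in range(weeks_ahead):
                cases = rng.poisson(R * cases)
                path.append(cases)
            return np.array(path)                 # shape (weeks_ahead, sims)

        pred = forecast(last_weekly_cases=120)
        for w, row in enumerate(pred, start=1):
            lo, med, hi = np.percentile(row, [2.5, 50, 97.5])
            print(f"week +{w}: median {med:.0f}, 95% interval [{lo:.0f}, {hi:.0f}]")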

  8. A model of gene expression based on random dynamical systems reveals modularity properties of gene regulatory networks.

    PubMed

    Antoneli, Fernando; Ferreira, Renata C; Briones, Marcelo R S

    2016-06-01

    Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network, provided that the dynamics of each node is modeled by an RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well-developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis, that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations from the corresponding stochastic version by averaging the dynamic random variables (small noise limit). It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, especially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a coupled regulatory network of random dynamical systems. Finally, the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model. We illustrate our framework with some simple examples of single-gene systems and network motifs. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    Anaerobic digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, and appropriate, efficient models are needed for simulating such systems. Although several models have been developed, most suffer from poorly known constants, high complexity and weak generalization. The basis of the deterministic approach for modelling the physico-chemical and biochemical reactions occurring in an AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions behind the deterministic models, however, do not hold for reactions involving chemical species at low concentration. The stochastic behaviour of the physicochemical processes can instead be modelled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals; in this way the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of the time step τ, the computational time required to reach the steady state is larger, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results approach those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes; the accuracy of the results depends on the optimal selection of the tau value.
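
    The tau-leap idea itself is compact enough to sketch. The two-step glucose → acids → methane scheme and all rate constants below are invented for illustration (they are not the ADM1 processes, and the original study used MATLAB rather than Python): in each leap of length τ, every reaction fires a Poisson number of times with mean equal to its propensity times τ.

```python
import numpy as np

rng = np.random.default_rng(3)

# State vector: [glucose, acids, methane] molecule counts (toy values).
x = np.array([10000, 0, 0], dtype=np.int64)

# Stoichiometry of two illustrative reactions:
#   R1: glucose -> 2 acids,   R2: acids -> methane
V = np.array([[-1,  2,  0],
              [ 0, -1,  1]])

k1, k2 = 0.05, 0.02                      # assumed first-order rate constants

def propensities(state):
    return np.array([k1 * state[0], k2 * state[1]])

def tau_leap(state, tau, t_end):
    t, history = 0.0, [(0.0, state.copy())]
    while t < t_end:
        firings = rng.poisson(propensities(state) * tau)  # reaction counts in this leap
        state = np.maximum(state + firings @ V, 0)        # crude guard against negative counts
        t += tau
        history.append((t, state.copy()))
    return history

for t, s in tau_leap(x, tau=0.5, t_end=100.0)[::40]:
    print(f"t={t:6.1f}  glucose={s[0]:6d}  acids={s[1]:6d}  methane={s[2]:6d}")
```

    Larger values of τ give fewer, coarser leaps, while smaller values approach the exact stochastic simulation, which is the accuracy trade-off discussed above.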

  10. A stochastic process approach of the drake equation parameters

    NASA Astrophysics Data System (ADS)

    Glade, Nicolas; Ballet, Pascal; Bastien, Olivier

    2012-04-01

    The number N of detectable (i.e. communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step to quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is rather a simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate for the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that will be able to provide both a temporal structure to the Drake equation (i.e. introduce time in the Drake formula in order to obtain something like N(t)) and a first standard error measure.
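
    As a hedged illustration of what an error estimate for N could look like (not the stochastic process constructed by the authors), the sketch below simply propagates uncertainty in each Drake factor by Monte Carlo; every distribution used is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100000

# Placeholder distributions for the Drake factors (illustrative only).
R_star = rng.uniform(1.0, 3.0, n)        # star formation rate (stars/yr)
f_p    = rng.uniform(0.8, 1.0, n)        # fraction of stars with planets
n_e    = rng.uniform(0.5, 2.0, n)        # habitable planets per such star
f_l    = rng.uniform(0.0, 1.0, n)        # fraction developing life
f_i    = rng.uniform(0.0, 0.5, n)        # fraction developing intelligence
f_c    = rng.uniform(0.0, 0.5, n)        # fraction that communicate
L      = rng.lognormal(np.log(1000.0), 1.0, n)   # civilization lifetime (yr)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N: mean = {N.mean():.1f}, standard error of the mean = {N.std(ddof=1)/np.sqrt(n):.2f}")
print(f"   68% interval = [{np.percentile(N, 16):.1f}, {np.percentile(N, 84):.1f}]")
```

    A time-dependent version would replace these static draws with processes evolving over galactic history, which is the direction the paper takes.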

  11. On the Radio-emitting Particles of the Crab Nebula: Stochastic Acceleration Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, Shuta J.; Asano, Katsuaki, E-mail: sjtanaka@center.konan-u.ac.jp

    The broadband emission of pulsar wind nebulae (PWNe) is well described by non-thermal emissions from accelerated electrons and positrons. However, the standard shock acceleration model of PWNe does not account for the hard spectrum in radio wavelengths. The origin of the radio-emitting particles is also important to determine the pair production efficiency in the pulsar magnetosphere. Here, we propose a possible resolution for the particle energy distribution in PWNe; the radio-emitting particles are not accelerated at the pulsar wind termination shock but are stochastically accelerated by turbulence inside PWNe. We upgrade our past one-zone spectral evolution model to include the energy diffusion, i.e., the stochastic acceleration, and apply the model to the Crab Nebula. A fairly simple form of the energy diffusion coefficient is assumed for this demonstrative study. For a particle injection to the stochastic acceleration process, we consider the continuous injection from the supernova ejecta or the impulsive injection associated with supernova explosion. The observed broadband spectrum and the decay of the radio flux are reproduced by tuning the amount of the particle injected to the stochastic acceleration process. The acceleration timescale and the duration of the acceleration are required to be a few decades and a few hundred years, respectively. Our results imply that some unveiled mechanisms, such as back reaction to the turbulence, are required to make the energies of stochastically and shock-accelerated particles comparable.

  12. STOCHASTIC DUELS--II

    DTIC Science & Technology

    of his time to fire a single round. The solution of the simple duel in the case where each protagonist’s time-to-kill is distributed as a gamma-variate... general simple duel. An expansion of the moment-generating function of the marksman’s time-to-kill in powers of his kill probability is next derived and... found to provide a good approximation to the solution of the simple duel; various properties of the expansion are also considered. A stochastic battle

  13. Introducing Stochastic Simulation of Chemical Reactions Using the Gillespie Algorithm and MATLAB: Revisited and Augmented

    ERIC Educational Resources Information Center

    Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.

    2008-01-01

    The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…
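
    Because the reaction in question is a reversible first-order isomerization A ⇌ B, the exact Gillespie algorithm fits in a few lines. The sketch below is in Python with assumed rate constants (the article itself works in MATLAB).

```python
import numpy as np

rng = np.random.default_rng(5)

def gillespie_reversible(nA, nB, kf, kr, t_end):
    """Exact stochastic simulation of A <-> B (forward rate kf, reverse rate kr)."""
    t, traj = 0.0, [(0.0, nA)]
    while t < t_end:
        a_f, a_r = kf * nA, kr * nB          # reaction propensities
        a_total = a_f + a_r
        if a_total == 0.0:
            break
        t += rng.exponential(1.0 / a_total)  # waiting time to the next reaction
        if rng.random() < a_f / a_total:     # choose which reaction fires
            nA, nB = nA - 1, nB + 1
        else:
            nA, nB = nA + 1, nB - 1
        traj.append((t, nA))
    return traj

traj = gillespie_reversible(nA=100, nB=0, kf=1.0, kr=0.5, t_end=10.0)
print(f"final A count: {traj[-1][1]} (deterministic equilibrium ~ {100 * 0.5 / 1.5:.0f})")
```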

  14. Simple stochastic birth and death models of genome evolution: was there enough time for us to evolve?

    PubMed

    Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V

    2003-10-12

    The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of the protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that the introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much more compatible with the current estimates of the rates of individual duplication/loss events.
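
    A minimal sketch of a linear birth, death and innovation process of the kind described above is given below; the rates and simulation horizon are illustrative, and the continuous-time update scheme is a generic Gillespie-style choice rather than the exact BDIM formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def bdim(lam=1.0, delta=1.0, nu=5.0, t_end=100.0):
    """Linear birth-death-innovation dynamics of domain family sizes."""
    families = [1]                    # start with a single one-member family
    t = 0.0
    while t < t_end:
        n_domains = sum(families)
        total_rate = (lam + delta) * n_domains + nu
        t += rng.exponential(1.0 / total_rate)
        u = rng.random() * total_rate
        if u < nu:                                        # innovation: new family of size 1
            families.append(1)
        else:
            i = rng.choice(len(families), p=np.array(families) / n_domains)
            if u < nu + lam * n_domains:                  # duplication (birth)
                families[i] += 1
            else:                                         # deletion (death)
                families[i] -= 1
                if families[i] == 0:
                    families.pop(i)
    return np.array(families)

sizes = bdim()
print(f"{len(sizes)} families, largest family has {sizes.max()} members")
for s in (1, 2, 5, 10):
    print(f"families with at least {s:2d} members: {(sizes >= s).sum()}")
```

    With duplication and deletion rates balanced, a single run already shows the characteristic excess of a few large families over many small ones.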

  15. Evolutionary fields can explain patterns of high-dimensional complexity in ecology

    NASA Astrophysics Data System (ADS)

    Wilsenach, James; Landi, Pietro; Hui, Cang

    2017-04-01

    One of the properties that make ecological systems so unique is the range of complex behavioral patterns that can be exhibited by even the simplest communities with only a few species. Much of this complexity is commonly attributed to stochastic factors that have very high degrees of freedom. Orthodox study of the evolution of these simple networks has generally been limited in its ability to explain complexity, since it restricts evolutionary adaptation to an inertia-free process with few degrees of freedom in which only gradual, moderately complex behaviors are possible. We propose a model inspired by particle-mediated field phenomena in classical physics in combination with fundamental concepts in adaptation, which suggests that small but high-dimensional chaotic dynamics near the adaptive trait optimum could help explain complex properties shared by most ecological datasets, such as aperiodicity and pink, fractal noise spectra. By examining a simple predator-prey model and appealing to real ecological data, we show that this type of complexity could easily be confused with or confounded by stochasticity, especially when spurred on or amplified by stochastic factors that share variational and spectral properties with the underlying dynamics.

  16. Insight into nuclear body formation of phytochromes through stochastic modelling and experiment.

    PubMed

    Grima, Ramon; Sonntag, Sebastian; Venezia, Filippo; Kircher, Stefan; Smith, Robert W; Fleck, Christian

    2018-05-01

    Spatial relocalization of proteins is crucial for the correct functioning of living cells. An interesting example of spatial ordering is the light-induced clustering of plant photoreceptor proteins. Upon irradiation by white or red light, the red light-active phytochrome, phytochrome B, enters the nucleus and accumulates in large nuclear bodies. The underlying physical process of nuclear body formation remains unclear, but phytochrome B is thought to coagulate via a simple protein-protein binding process. We measure, for the first time, the distribution of the number of phytochrome B-containing nuclear bodies as well as their volume distribution. We show that the experimental data cannot be explained by a stochastic model of nuclear body formation via simple protein-protein binding processes using physically meaningful parameter values. Rather, modelling suggests that the data are consistent with a two-step process: a fast nucleation step leading to macroparticles, followed by a subsequent slow step in which the macroparticles bind to form the nuclear body. An alternative explanation for the observed nuclear body distribution is that the phytochromes bind to a so far unknown molecular structure. We believe it is likely that this result holds more generally for other nuclear body-forming plant photoreceptors and proteins. Creative Commons Attribution license.

  17. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    PubMed Central

    2018-01-01

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site. PMID:29386401
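
    The PDMP construction is easiest to see for a single gene. The sketch below implements a two-state promoter whose switching is stochastic while the protein level follows a deterministic ODE between switches; all parameter values are illustrative, and this is far simpler than the titration-based oscillators analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def pdmp_telegraph(k_on=0.2, k_off=0.3, beta=10.0, gamma=1.0, t_end=2000.0):
    """PDMP for a two-state promoter: state s in {0, 1} switches stochastically,
    and between switches the protein obeys dx/dt = beta*s - gamma*x (solved exactly)."""
    t, s, x, area = 0.0, 0, 0.0, 0.0
    while t < t_end:
        rate = k_on if s == 0 else k_off
        dt = min(rng.exponential(1.0 / rate), t_end - t)   # time to the next switch
        x_inf = beta * s / gamma                           # fixed point of the current flow
        area += x_inf * dt + (x - x_inf) * (1.0 - np.exp(-gamma * dt)) / gamma
        x = x_inf + (x - x_inf) * np.exp(-gamma * dt)      # exact ODE solution over dt
        t += dt
        s = 1 - s                                          # promoter switches state
    return area / t_end

mean_x = pdmp_telegraph()
print(f"time-averaged protein level ≈ {mean_x:.2f} "
      f"(expected beta*k_on/(gamma*(k_on+k_off)) = 4.00)")
```

    Replacing the constant switching rates with rates that depend on the protein concentrations of other genes leads toward the titration-based oscillators treated in the paper.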

  18. Models of Individual Trajectories in Computer-Assisted Instruction for Deaf Students. Technical Report No. 214.

    ERIC Educational Resources Information Center

    Suppes, P.; And Others

    From some simple and schematic assumptions about information processing, a stochastic differential equation is derived for the motion of a student through a computer-assisted elementary mathematics curriculum. The mathematics strands curriculum of the Institute for Mathematical Studies in the Social Sciences is used to test: (1) the theory and (2)…

  19. Dimension reduction for stochastic dynamical systems forced onto a manifold by large drift: a constructive approach with examples from theoretical biology

    NASA Astrophysics Data System (ADS)

    Parsons, Todd L.; Rogers, Tim

    2017-10-01

    Systems composed of large numbers of interacting agents often admit an effective coarse-grained description in terms of a multidimensional stochastic dynamical system, driven by small-amplitude intrinsic noise. In applications to biological, ecological, chemical and social dynamics it is common for these models to possess quantities that are approximately conserved on short timescales, in which case system trajectories are observed to remain close to some lower-dimensional subspace. Here, we derive explicit and general formulae for a reduced-dimension description of such processes that is exact in the limit of small noise and well-separated slow and fast dynamics. The Michaelis-Menten law of enzyme-catalysed reactions and the link between the Lotka-Volterra and Wright-Fisher processes are explored as simple worked examples. Extensions of the method are presented for infinite dimensional systems and processes coupled to non-Gaussian noise sources.

  20. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009)

    PubMed Central

    2011-01-01

    Background Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of disease surveillance. PMID:21324153

  1. When push comes to shove: Exclusion processes with nonlocal consequences

    NASA Astrophysics Data System (ADS)

    Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.

    2015-11-01

    Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
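
    For contrast with the shoving rule, a minimal sketch of the standard simple exclusion mechanism on a one-dimensional lattice is shown below (lattice size, agent number and step count are arbitrary illustrative choices): an agent picks a neighbouring site at random and the move is aborted if that site is occupied.

```python
import numpy as np

rng = np.random.default_rng(8)

def simple_exclusion(n_sites=200, n_agents=60, n_steps=20000):
    """Agent-based random walk with exclusion on a 1D lattice with closed ends."""
    occupied = np.zeros(n_sites, dtype=bool)
    positions = rng.choice(n_sites // 2, size=n_agents, replace=False)  # start in the left half
    occupied[positions] = True
    for _ in range(n_steps):
        i = rng.integers(n_agents)                       # pick an agent at random
        target = positions[i] + (1 if rng.random() < 0.5 else -1)
        if 0 <= target < n_sites and not occupied[target]:
            occupied[positions[i]] = False               # exclusion: move only if the site is empty
            occupied[target] = True
            positions[i] = target
    return positions

final = simple_exclusion()
print(f"mean agent position after spreading: {final.mean():.1f} (agents started in sites 0-99)")
```

    Averaging many such realizations gives occupancy profiles that can be compared with the corresponding mean-field continuum description, which is the comparison carried out in the paper for both the exclusion and shoving mechanisms.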

  2. Stochastic effects in EUV lithography: random, local CD variability, and printing failures

    NASA Astrophysics Data System (ADS)

    De Bisschop, Peter

    2017-10-01

    Stochastic effects in lithography are usually quantified through local CD variability metrics, such as line-width roughness or local CD uniformity (LCDU), and these quantities have been measured and studied intensively, both in EUV and optical lithography. Next to the CD-variability, stochastic effects can also give rise to local, random printing failures, such as missing contacts or microbridges in spaces. When these occur, there often is no (reliable) CD to be measured locally, and then such failures cannot be quantified with the usual CD-measuring techniques. We have developed algorithms to detect such stochastic printing failures in regular line/space (L/S) or contact- or dot-arrays from SEM images, leading to a stochastic failure metric that we call NOK (not OK), which we consider a complementary metric to the CD-variability metrics. This paper will show how both types of metrics can be used to experimentally quantify dependencies of stochastic effects to, e.g., CD, pitch, resist, exposure dose, etc. As it is also important to be able to predict upfront (in the OPC verification stage of a production-mask tape-out) whether certain structures in the layout are likely to have a high sensitivity to stochastic effects, we look into the feasibility of constructing simple predictors, for both stochastic CD-variability and printing failure, that can be calibrated for the process and exposure conditions used and integrated into the standard OPC verification flow. Finally, we briefly discuss the options to reduce stochastic variability and failure, considering the entire patterning ecosystem.

  3. The Heterogeneous Investment Horizon and Dynamic Strategies for Asset Allocation

    NASA Astrophysics Data System (ADS)

    Xiong, Heping; Xu, Yiheng; Xiao, Yi

    This paper discusses the influence of the portfolio rebalancing strategy on the efficiency of long-term investment portfolios under the assumption of independent stationary distribution of returns. By comparing the efficient sets of the stochastic rebalancing strategy, the simple rebalancing strategy and the buy-and-hold strategy with specific data examples, we find that the stochastic rebalancing strategy is optimal, while the simple rebalancing strategy is of the lowest efficiency. In addition, the simple rebalancing strategy lowers the efficiency of the portfolio instead of improving it.

  4. Area of Stochastic Scrape-Off Layer for a Single-Null Divertor Tokamak Using Simple Map

    NASA Astrophysics Data System (ADS)

    Fisher, Tiffany; Verma, Arun; Punjabi, Alkesh

    1996-11-01

    The magnetic topology of a single-null divertor tokamak is represented by the Simple Map (Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992) and J Plasma Phys 52, 91 (1994)). The Simple Map is characterized by a single parameter k representing the toroidal asymmetry. The width of the stochastic scrape-off layer and its area vary with the map parameter k. We calculate the area of the stochastic scrape-off layer for different k's and obtain a parametric expression for the area in terms of k and y_LastGoodSurface(k). This work is supported by US DOE OFES. Tiffany Fisher is a HU CFRT Summer Fusion High School Workshop Scholar from New Bern High School in North Carolina. She is supported by the NASA SHARP Plus Program.

  5. Simple dynamical models capturing the key features of the Central Pacific El Niño.

    PubMed

    Chen, Nan; Majda, Andrew J

    2016-10-18

    The Central Pacific El Niño (CP El Niño) has been frequently observed in recent decades. The phenomenon is characterized by an anomalous warm sea surface temperature (SST) confined to the central Pacific and has different teleconnections from the traditional El Niño. Here, simple models are developed and shown to capture the key mechanisms of the CP El Niño. The starting model involves coupled atmosphere-ocean processes that are deterministic, linear, and stable. Then, systematic strategies are developed for incorporating several major mechanisms of the CP El Niño into the coupled system. First, simple nonlinear zonal advection with no ad hoc parameterization of the background SST gradient is introduced that creates coupled nonlinear advective modes of the SST. Secondly, due to the recent multidecadal strengthening of the easterly trade wind, a stochastic parameterization of the wind bursts including a mean easterly trade wind anomaly is coupled to the simple atmosphere-ocean processes. Effective stochastic noise in the wind burst model facilitates the intermittent occurrence of the CP El Niño with realistic amplitude and duration. In addition to the anomalous warm SST in the central Pacific, other major features of the CP El Niño such as the rising branch of the anomalous Walker circulation being shifted to the central Pacific and the eastern Pacific cooling with a shallow thermocline are all captured by this simple coupled model. Importantly, the coupled model succeeds in simulating a series of CP El Niño that lasts for 5 y, which resembles the two CP El Niño episodes during 1990-1995 and 2002-2006.
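
    The wind-burst component can be caricatured by a single mean-reverting stochastic process with a nonzero (easterly) mean, integrated by Euler-Maruyama. The parameter values below are placeholders and the process is not coupled back to an ocean-atmosphere model, so this is only a sketch of the stochastic parameterization idea.

```python
import numpy as np

rng = np.random.default_rng(9)

def wind_bursts(mean_anomaly=-0.4, damping=1.0 / 30.0, sigma=0.3, dt=1.0, n_days=3650):
    """Mean-reverting wind-burst amplitude u(t) (arbitrary units):
    du = -damping * (u - mean_anomaly) dt + sigma dW, Euler-Maruyama with steps of dt days."""
    u = np.empty(n_days)
    u[0] = mean_anomaly
    for k in range(1, n_days):
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        u[k] = u[k - 1] - damping * (u[k - 1] - mean_anomaly) * dt + noise
    return u

u = wind_bursts()
print(f"sample mean {u.mean():+.2f} (easterly anomaly), sample std {u.std():.2f}, "
      f"stationary std sigma/sqrt(2*damping) = {0.3 / np.sqrt(2.0 / 30.0):.2f}")
```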

  6. FERN - a Java framework for stochastic simulation and evaluation of reaction networks.

    PubMed

    Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf

    2008-08-29

    Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications or c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.

  7. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yen Ting; Buchler, Nicolas E.

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  8. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE PAGES

    Lin, Yen Ting; Buchler, Nicolas E.

    2018-01-31

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  9. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  10. Simple Map with Low MN Perturbation for a Single-Null Divertor Tokamak with Constant Width of Stochastic Layer

    NASA Astrophysics Data System (ADS)

    Verma, Arun; Smith, Terry; Punjabi, Alkesh; Boozer, Allen

    1996-11-01

    In this work, we investigate the effects of low MN perturbations in a single-null divertor tokamak with a stochastic scrape-off layer. The unperturbed magnetic topology of a single-null divertor tokamak is represented by the Simple Map (Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992) and J Plasma Phys 52, 91 (1994)). We choose the combinations of the map parameter k and the strength of the low MN perturbation such that the width of the stochastic layer remains unchanged. We give detailed results on the effects of low MN perturbation on the magnetic topology of the stochastic layer and on the footprint of field lines on the divertor plate, given the constraint of constant width of the stochastic layer. The low MN perturbations occur naturally and therefore their effects are of considerable importance in tokamak divertor physics. This work is supported by US DOE OFES. Use of the CRAY at HU and at NERSC is gratefully acknowledged.

  11. Random walk, diffusion and mixing in simulations of scalar transport in fluid flows

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2008-12-01

    Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
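
    The equivalence invoked in the first sentence above can be demonstrated in a few lines: an ensemble of randomly walking particles reproduces the variance growth of the continuous diffusion equation. The diffusivity, step size and particle number below are arbitrary illustrative values, and the mixing scheme for scalar properties discussed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(10)

D = 0.5                        # diffusivity (arbitrary units)
dt = 0.01                      # time step
n_particles, n_steps = 50000, 500

x = np.zeros(n_particles)
for _ in range(n_steps):
    # Gaussian random-walk increments with variance 2*D*dt per step
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
print(f"ensemble variance {x.var():.3f} vs diffusion-equation prediction 2*D*t = {2.0 * D * t:.3f}")
```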

  12. Interplay of Determinism and Randomness: From Irreversibility to Chaos, Fractals, and Stochasticity

    NASA Astrophysics Data System (ADS)

    Tsonis, A.

    2017-12-01

    We will start our discussion of randomness by looking exclusively at our formal mathematical system to show that even in this pure and strictly logical system one cannot do away with randomness. By employing simple mathematical models, we will identify the three possible sources of randomness: randomness due to the inability to find the rules (irreversibility), randomness due to the inability to have infinite power (chaos), and randomness due to stochastic processes. Subsequently we will move from the mathematical system to our physical world to show that randomness, through the quantum mechanical character of small scales, through chaos, and because of the second law of thermodynamics, is an intrinsic property of nature as well. We will subsequently argue that the randomness in the physical world is consistent with the three sources of randomness suggested by the study of simple mathematical systems. Many examples ranging from purely mathematical to natural processes will be presented, which clearly demonstrate how the combination of rules and randomness produces the world we live in. Finally, the principle of least effort or the principle of minimum energy consumption will be suggested as the underlying principle behind this symbiosis between determinism and randomness.

  13. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  14. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems.

    PubMed

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.

  15. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems

    NASA Astrophysics Data System (ADS)

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.
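
    For reference, the quoted density can be checked numerically; the sketch below (with an arbitrary exponent α > 1, which is assumed here so that the density is normalizable) verifies that it integrates to a finite constant and that its tail follows the stated power law.

```python
import numpy as np

alpha = 1.8                                   # illustrative exponent (alpha > 1 assumed)
x = np.logspace(-3, 4, 20000)                 # grid spanning the small-x cutoff to the far tail
p = x**(-1.0 - alpha) * np.exp((1.0 - alpha) / x)

Z = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(x))   # trapezoidal normalization constant
p /= Z

# In the large-x tail the exponential factor tends to 1, so the log-log slope
# of the density should approach -(1 + alpha).
i1, i2 = np.searchsorted(x, [50.0, 500.0])
slope = (np.log(p[i2]) - np.log(p[i1])) / (np.log(x[i2]) - np.log(x[i1]))
print(f"normalization constant Z ≈ {Z:.4f}, tail slope ≈ {slope:.3f} (expected {-(1.0 + alpha):.1f})")
```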

  16. Quantum Optics Models of EIT Noise and Power Broadening

    NASA Astrophysics Data System (ADS)

    Snider, Chad; Crescimanno, Michael; O'Leary, Shannon

    2011-04-01

    When two coherent beams of light interact with an atom they tend to drive the atom to a non-absorbing state through a process called Electromagnetically Induced Transparency (EIT). If the light's frequency dithers, the atom's state stochastically moves in and out of this non-absorbing state. We describe a simple quantum optics model of this process that captures the essential experimentally observed statistical features of this EIT noise, with a particular emphasis on understanding power broadening.

  17. A statistical approach to quasi-extinction forecasting.

    PubMed

    Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric

    2007-12-01

    Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters. Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
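
    A sketch of this kind of calculation is given below, using the standard diffusion-approximation (Brownian motion with drift) formula for the probability of crossing a quasi-extinction threshold within a time horizon; the census numbers are invented, and this generic textbook estimator stands in for, rather than reproduces, the exact method of the paper.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def quasi_extinction_prob(counts, threshold, horizon):
    """P(log abundance falls by d = log(N_now / threshold) within `horizon` years),
    for Brownian motion with drift fitted to the log-count increments."""
    logs = np.log(np.asarray(counts, dtype=float))
    diffs = np.diff(logs)
    mu, sigma2 = diffs.mean(), diffs.var(ddof=1)   # drift and process variance per year
    d = logs[-1] - log(threshold)                  # log distance to the threshold
    s = sqrt(sigma2 * horizon)
    return (norm_cdf((-d - mu * horizon) / s)
            + exp(-2.0 * mu * d / sigma2) * norm_cdf((-d + mu * horizon) / s))

# Hypothetical 25-year census of a declining population (illustrative numbers).
counts = [1000, 940, 980, 870, 910, 820, 850, 790, 760, 800, 730, 700, 720,
          660, 640, 670, 610, 590, 600, 550, 540, 560, 510, 490, 480]
print(f"P(decline below 100 within 50 years) ≈ {quasi_extinction_prob(counts, 100, 50):.2f}")
```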

  18. Physical Models of Cognition

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1994-01-01

    This paper presents and discusses physical models for simulating some aspects of neural intelligence, and, in particular, the process of cognition. The main departure from the classical approach here is in utilization of a terminal version of classical dynamics introduced by the author earlier. Based upon violations of the Lipschitz condition at equilibrium points, terminal dynamics attains two new fundamental properties: it is spontaneous and nondeterministic. Special attention is focused on terminal neurodynamics as a particular architecture of terminal dynamics which is suitable for modeling of information flows. Terminal neurodynamics possesses a well-organized probabilistic structure which can be analytically predicted, prescribed, and controlled, and therefore which presents a powerful tool for modeling real-life uncertainties. Two basic phenomena associated with random behavior of neurodynamic solutions are exploited. The first one is a stochastic attractor ; a stable stationary stochastic process to which random solutions of a closed system converge. As a model of the cognition process, a stochastic attractor can be viewed as a universal tool for generalization and formation of classes of patterns. The concept of stochastic attractor is applied to model a collective brain paradigm explaining coordination between simple units of intelligence which perform a collective task without direct exchange of information. The second fundamental phenomenon discussed is terminal chaos which occurs in open systems. Applications of terminal chaos to information fusion as well as to explanation and modeling of coordination among neurons in biological systems are discussed. It should be emphasized that all the models of terminal neurodynamics are implementable in analog devices, which means that all the cognition processes discussed in the paper are reducible to the laws of Newtonian mechanics.

  19. The Black-Scholes option pricing problem in mathematical finance: generalization and extensions for a large class of stochastic processes

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Sornette, Didier

    1994-06-01

    The ability to price risks and devise optimal investment strategies in the presence of an uncertain "random" market is the cornerstone of modern finance theory. We first consider the simplest such problem of a so-called "European call option", initially solved by Black and Scholes using Ito stochastic calculus for markets modelled by a log-Brownian stochastic process. A simple and powerful formalism is presented which allows us to generalize the analysis to a large class of stochastic processes, such as ARCH, jump or Lévy processes. We also address the case of correlated Gaussian processes, which is shown to be a good description of three different market indices (MATIF, CAC40, FTSE100). Our main result is the introduction of the concept of an optimal strategy in the sense of (functional) minimization of the risk with respect to the portfolio. If the risk may be made to vanish for particular continuous uncorrelated 'quasi-Gaussian' stochastic processes (including the Black and Scholes model), this is no longer the case for more general stochastic processes. The value of the residual risk is obtained and suggests the concept of risk-corrected option prices. In the presence of very large deviations such as in Lévy processes, new criteria for rational fixing of the option prices are discussed. We also apply our method to other types of options, 'Asian', 'American', and discuss new possibilities ('double-decker'...). The inclusion of transaction costs leads to the appearance of a natural characteristic trading time scale.
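
    For reference, the Black-Scholes benchmark that the paper generalizes can be written down in a few lines; this is the standard textbook pricing formula for a European call, not the risk-minimization formalism developed by the authors, and the input values are arbitrary.

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a log-Brownian underlying."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: spot 100, strike 105, 6 months to expiry, 3% rate, 20% volatility.
print(f"call price = {black_scholes_call(100.0, 105.0, 0.5, 0.03, 0.20):.2f}")
```

    For the heavier-tailed processes discussed in the paper, the risk no longer vanishes and a closed-form price of this kind is no longer available, which is precisely what motivates the residual-risk analysis above.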

  20. Stochastic modeling of consumer preferences for health care institutions.

    PubMed

    Malhotra, N K

    1983-01-01

    This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.
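
    The choice probabilities underlying such a LOGIT model reduce to a softmax over estimated utilities; the sketch below uses invented utilities for three hypothetical hospitals, so the numbers are purely illustrative.

```python
import numpy as np

# Hypothetical estimated utilities for three hospitals (illustrative only),
# e.g. linear combinations of attributes such as distance, quality and cost.
utilities = np.array([1.2, 0.4, -0.3])

def logit_probabilities(v):
    """Multinomial logit choice probabilities (softmax of the utilities)."""
    e = np.exp(v - v.max())          # subtract the max for numerical stability
    return e / e.sum()

for name, p in zip(["Hospital A", "Hospital B", "Hospital C"], logit_probabilities(utilities)):
    print(f"{name}: predicted choice probability {p:.2f}")
```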

  1. Calculation of stochastic broadening due to low mn magnetic perturbation in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op. cit.]. The simple map in action-angle coordinates is applied to calculate the stochastic broadening due to the low mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low mn perturbation, the toroidal flux loss scales as the 1.16 power of δ, and the poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  2. Fractional noise destroys or induces a stochastic bifurcation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qigui, E-mail: qgyang@scut.edu.cn; Zeng, Caibin, E-mail: zeng.cb@mail.scut.edu.cn; School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640

    2013-12-15

    Little seems to be known about the stochastic bifurcation phenomena of non-Markovian systems. Our intention in this paper is to understand such complex dynamics by a simple system, namely, the Black-Scholes model driven by a mixed fractional Brownian motion. The most interesting finding is that the multiplicative fractional noise not only destroys but also induces a stochastic bifurcation under some suitable conditions. So it opens a possible way to explore the theory of stochastic bifurcation in the non-Markovian framework.

  3. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346

  4. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

  5. How input fluctuations reshape the dynamics of a biological switching system

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Kessler, David A.; Rappel, Wouter-Jan; Levine, Herbert

    2012-12-01

    An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a non-negative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.

  6. Isolating intrinsic noise sources in a stochastic genetic switch.

    PubMed

    Newby, Jay M

    2012-01-01

    The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promotor states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promotor of the other gene. When the repressor is bound to a promotor, the corresponding gene is not transcribed and no protein is produced. Either one of the promotors can be repressed at any given time or both can be unrepressed, leaving three possible promotor states. This model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promotor state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.

  7. Acting Irrationally to Improve Performance in Stochastic Worlds

    NASA Astrophysics Data System (ADS)

    Belavkin, Roman V.

    Despite many theories and algorithms for decision-making, after estimating the utility function the choice is usually made by maximising its expected value (the max EU principle). This traditional and 'rational' conclusion of the decision-making process is compared in this paper with several 'irrational' techniques that make the choice in Monte-Carlo fashion. The comparison is made by evaluating the performance of simple decision-theoretic agents in stochastic environments. It is shown that not only can the random choice strategies achieve performance comparable to the max EU method, but under certain conditions the Monte-Carlo choice methods perform almost twice as well as max EU. The paper concludes by citing evidence from recent cognitive modelling work as well as the famous decision-making paradoxes.
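
    A schematic toy comparison in this spirit, with all details assumed for illustration: two Bernoulli payoff options, a greedy max-EU agent that always exploits its empirical estimates, and a Monte-Carlo agent that samples a plausible utility for each option from a Beta distribution over its experience (a Thompson-sampling-style rule used here as a stand-in for the paper's specific techniques). The sampling agent keeps exploring and typically avoids the lock-in that hurts the naive max-EU agent.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = np.array([0.2, 0.8])      # hidden Bernoulli payoff probabilities (illustrative)
n_steps, n_runs = 500, 400

def mean_reward(monte_carlo):
    total = 0.0
    for _ in range(n_runs):
        succ = (rng.random(2) < p_true).astype(float)   # one initial observation per option
        fail = 1.0 - succ
        for _ in range(n_steps):
            if monte_carlo:
                # 'irrational' Monte-Carlo choice: sample a plausible utility for each option
                choice = int(np.argmax(rng.beta(succ + 1.0, fail + 1.0)))
            else:
                # 'rational' max EU choice: maximise the empirical expected utility
                choice = int(np.argmax(succ / (succ + fail)))
            reward = float(rng.random() < p_true[choice])
            succ[choice] += reward
            fail[choice] += 1.0 - reward
            total += reward
    return total / (n_runs * n_steps)

print("mean reward per step, max EU choice     :", round(mean_reward(False), 3))
print("mean reward per step, Monte-Carlo choice:", round(mean_reward(True), 3))
```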

  8. A stochastic model for the normal tissue complication probability (NTCP) and applications.

    PubMed

    Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin

    2017-12-11

    The normal tissue complication probability (NTCP) is a measure of the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. The approach is based on simple modelling assumptions and prepares the ground for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increased urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
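
    A hedged numerical sketch of the modelling idea (rates, carrying capacity and initial cell number are invented, and the specific density-dependent death rate below is an assumption, not the paper's exact parametrization): a Gillespie realization of a stochastic logistic birth-death process is compared with the logistic differential equation that the asymptotic simplification refers to.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (not taken from the paper)
b, d, K = 1.0, 0.2, 200.0       # per-capita birth rate, baseline death rate, carrying capacity
n0, t_end = 20, 15.0

def gillespie_logistic(n):
    """One realization of the logistic birth-death process: birth rate b*n, death rate n*(d + (b-d)*n/K)."""
    t = 0.0
    while n > 0:
        birth, death = b * n, n * (d + (b - d) * n / K)
        t += rng.exponential(1.0 / (birth + death))   # waiting time to the next event
        if t > t_end:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

# Deterministic limit: dn/dt = (b - d) * n * (1 - n/K), solved in closed form at t_end
r = b - d
n_det = K * n0 * np.exp(r * t_end) / (K + n0 * (np.exp(r * t_end) - 1.0))

final_sizes = [gillespie_logistic(n0) for _ in range(50)]
print("mean stochastic cell count at t_end  :", np.mean(final_sizes))
print("deterministic logistic count at t_end:", round(n_det, 1))
```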

  9. On a stochastic control method for weakly coupled linear systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.

    1972-01-01

    The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller makes measurements only of its own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers are to independently generate suitable control policies which minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model is proposed in which the influence of one system on the other is replaced by a white noise process. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.

  10. Patterns of Stochastic Behavior in Dynamically Unstable High-Dimensional Biochemical Networks

    PubMed Central

    Rosenfeld, Simon

    2009-01-01

    The question of the dynamical stability and stochastic behavior of large biochemical networks is discussed. It is argued that stringent conditions of asymptotic stability have very little chance to materialize in a multidimensional system described by the differential equations of chemical kinetics. The reason is that the criteria of asymptotic stability (Routh-Hurwitz, Lyapunov criteria, Feinberg’s Deficiency Zero theorem) would impose limitations of very high algebraic order on the kinetic rates and stoichiometric coefficients, and there are no natural laws that would guarantee their unconditional validity. Highly nonlinear, dynamically unstable systems, however, are not necessarily doomed to collapse, as a simple Jacobian analysis would suggest. It is possible that their dynamics may assume the form of pseudo-random fluctuations quite similar to shot noise, and, therefore, their behavior may be described in terms of Langevin and Fokker-Planck equations. We have shown by simulation that the resulting pseudo-stochastic processes obey the heavy-tailed Generalized Pareto Distribution, with the temporal sequence of pulses forming a set of constituent-specific Poisson processes. Applied to intracellular dynamics, these properties are naturally associated with burstiness, a well-documented phenomenon in the biology of gene expression. PMID:19838330

  11. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.

  12. Calculation of stochastic broadening due to noise and field errors in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2008-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were calculated analytically, and the simple map was constructed in action-angle coordinates [2]. The action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate the stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used. The mode numbers are (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10^-5 to 2.0×10^-5. For this noise and field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.

  13. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

    As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as autoregressive (short-memory) and autoregressive fractionally integrated moving average (long-memory) processes do not have an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. While short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
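
    A small illustration of the short-memory part of this comparison, under simplifying assumptions (a Gaussian AR(1) process with coefficient phi = 0.7 and unit marginal variance, and an arbitrary deviation threshold): the probability that the time average exceeds the threshold is estimated by Monte Carlo for iid and AR(1) data. Both decay roughly exponentially with the window length, but the AR(1) probabilities decay at the slower rate expected from the reduced effective sample size n(1-phi)/(1+phi); the sub-exponential long-memory case is not simulated here.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, eps = 0.7, 0.3          # AR(1) coefficient and deviation threshold (illustrative)
n_rep = 20000

def tail_prob(n, correlated):
    """Monte Carlo estimate of P(|time average| > eps) for window length n."""
    if correlated:
        # AR(1) with unit marginal variance: x_t = phi*x_{t-1} + sqrt(1-phi^2)*eps_t
        x = np.zeros((n_rep, n))
        x[:, 0] = rng.standard_normal(n_rep)
        innov = np.sqrt(1.0 - phi**2) * rng.standard_normal((n_rep, n))
        for t in range(1, n):
            x[:, t] = phi * x[:, t - 1] + innov[:, t]
    else:
        x = rng.standard_normal((n_rep, n))     # iid with unit variance
    return np.mean(np.abs(x.mean(axis=1)) > eps)

for n in (25, 50, 100, 200):
    print(f"n={n:4d}  iid: {tail_prob(n, False):.4f}   AR(1): {tail_prob(n, True):.4f}")
```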

  14. Feynman-Kac equation for anomalous processes with space- and time-dependent forces

    NASA Astrophysics Data System (ADS)

    Cairoli, Andrea; Baule, Adrian

    2017-04-01

    Functionals of a stochastic process Y(t) model many physical time-extensive observables, for instance particle positions, local and occupation times or accumulated mechanical work. When Y(t) is a normal diffusive process, their statistics are obtained as the solution of the celebrated Feynman-Kac equation. This equation provides the crucial link between the expected values of diffusion processes and the solutions of deterministic second-order partial differential equations. When Y(t) is non-Brownian, e.g. an anomalous diffusive process, generalizations of the Feynman-Kac equation that incorporate power-law or more general waiting time distributions of the underlying random walk have recently been derived. A general representation of such waiting times is provided in terms of a Lévy process whose Laplace exponent is directly related to the memory kernel appearing in the generalized Feynman-Kac equation. The corresponding anomalous processes have been shown to capture nonlinear mean square displacements exhibiting crossovers between different scaling regimes, which have been observed in numerous experiments on biological systems like migrating cells or diffusing macromolecules in intracellular environments. However, the case where both space- and time-dependent forces drive the dynamics of the generalized anomalous process has not yet been solved. Here, we present the missing derivation of the Feynman-Kac equation in this general case by using the subordination technique. Furthermore, we discuss its extension to functionals explicitly depending on time, which are of particular relevance for the stochastic thermodynamics of anomalous diffusive systems. Exact results on the work fluctuations of a simple non-equilibrium model are obtained. An additional aim of this paper is to provide a pedagogical introduction to Lévy processes, semimartingales and their associated stochastic calculus, which underlie the mathematical formulation of anomalous diffusion as a subordinated process.

  15. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool for investigating the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as the long-term trend and the synchronization among nearby coupled faults.

  16. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.

  17. On Nash Equilibria in Stochastic Games

    DTIC Science & Technology

    2003-10-01

    Traditionally, automata theory and verification have considered zero-sum or strictly competitive versions of stochastic games. In these games there are two players...zero-sum discrete-time stochastic dynamic games. SIAM J. Control and Optimization, 19(5):617-634, 1981. 18. R.J. Lipton, E. Markakis, and A. Mehta...Playing large games using simple strategies. In EC 03: Electronic Commerce, pages 36-41. ACM Press, 2003. 19. A. Maitra and W. Sudderth. Finitely

  18. Discrete stochastic simulation methods for chemically reacting systems.

    PubMed

    Cao, Yang; Samuels, David C

    2009-01-01

    Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemical dynamics in a single cell, an increasing number of studies have used this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
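
    For concreteness, a minimal sketch of Gillespie's direct-method SSA for a toy birth-death gene expression model (constant production, first-order degradation; rate values are arbitrary). The tau-leaping and hybrid strategies reviewed in the chapter replace this one-reaction-per-step loop with Poisson "leaps" over many reactions when propensities are large.

```python
import numpy as np

rng = np.random.default_rng(4)

k, gamma = 10.0, 0.5            # production and per-molecule degradation rates (arbitrary)
x, t, t_end = 0, 0.0, 200.0     # molecule count, current time, stop time

counts = []
while t < t_end:
    a_prod, a_deg = k, gamma * x          # reaction propensities
    a_total = a_prod + a_deg
    t += rng.exponential(1.0 / a_total)   # exponentially distributed time to the next reaction
    x += 1 if rng.random() * a_total < a_prod else -1
    counts.append(x)

# At stationarity the copy number is Poisson-distributed with mean k/gamma
print("event-averaged copy number (second half of run):", round(np.mean(counts[len(counts) // 2:]), 1))
print("theoretical stationary mean k/gamma            :", k / gamma)
```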

  19. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of the reigns of Roman emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  20. Complete description of all self-similar models driven by Lévy stable noise

    NASA Astrophysics Data System (ADS)

    Weron, Aleksander; Burnecki, Krzysztof; Mercik, Szymon; Weron, Karina

    2005-01-01

    A canonical decomposition of H-self-similar Lévy symmetric α-stable processes is presented. The resulting components, completely described by both deterministic kernels and the corresponding stochastic integral with respect to the Lévy symmetric α-stable motion, are shown to be related to the dissipative and conservative parts of the dynamics. This result provides stochastic analysis tools for studying anomalous diffusion phenomena in the Langevin equation framework. For example, a simple computer test of the origins of self-similarity is implemented for four real empirical time series recorded from different physical systems: an ionic current flow through a single channel in a biological membrane, the energy of solar flares, a seismic electric signal recorded during seismic Earth activity, and foreign exchange rate daily returns.

  1. Virtual volatility

    NASA Astrophysics Data System (ADS)

    Silva, A. Christian; Prange, Richard E.

    2007-03-01

    We introduce the concept of virtual volatility. This simple but new measure shows how to quantify the uncertainty in the forecast of the drift component of a random walk. The virtual volatility is also a useful tool for understanding the stochastic process followed by a given portfolio. In particular, and as an example, we were able to identify a mean-reversion effect in our portfolio. Finally, we briefly discuss the potential practical effect of the virtual volatility on an investor's asset allocation strategy.

  2. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    PubMed

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
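
    In the same model-sampling spirit, a brief present-day sketch (step counts and the barrier position are arbitrary): an unbiased one-dimensional lattice walk is compared with the theoretical RMS displacement sqrt(N), and a reflecting barrier is then added to show how such a program is extended.

```python
import numpy as np

rng = np.random.default_rng(5)
n_walkers, n_steps = 5000, 400

# Free one-dimensional random walk: unit steps left or right with equal probability
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
final = steps.sum(axis=1)
print("RMS displacement (simulated):", round(float(np.sqrt(np.mean(final**2))), 2))
print("RMS displacement (theory)   :", round(float(np.sqrt(n_steps)), 2))

# Same walk with a reflecting barrier at x = -10 (walkers bounce back instead of crossing)
barrier = -10
x = np.zeros(n_walkers, dtype=int)
for _ in range(n_steps):
    x += rng.choice([-1, 1], size=n_walkers)
    x[x < barrier] = 2 * barrier - x[x < barrier]   # reflect any overshoot back above the barrier
print("mean position with reflecting barrier at -10:", round(float(x.mean()), 2))
```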

  3. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  4. A minimum stochastic model evaluating the interplay between population density and drift for species coexistence

    NASA Astrophysics Data System (ADS)

    Guariento, Rafael Dettogni; Caliman, Adriano

    2017-02-01

    Despite the general acknowledgment of the role of niche and stochastic processes in community dynamics, species' relative abundances may, under the two perspectives, have different effects on coexistence patterns. In this study, we explore a minimal stochastic model to determine how populations' relative and total abundances relate to species' chances of outcompeting each other and to their persistence in time (i.e., unstable coexistence). Our model focuses on the effects of drift (i.e., random sampling of recruitment) under different scenarios of selection (i.e., fitness differences between species). Our results show that, taking into account the stochasticity of demographic properties and the conservation of individuals in closed communities (zero-sum assumption), initial population abundance can strongly influence a species' chances of outcompeting the other, despite fitness inequalities between populations, and can also influence the period of coexistence of the species over a given time interval. The system's carrying capacity can play an important role in species coexistence by exacerbating fitness inequalities and affecting the length of the period of coexistence. Overall, the simple stochastic formulation used in this study demonstrates that populations' initial abundances can act as an equalizing mechanism, reducing the effect of fitness inequalities, favoring species coexistence, and even making less-fit species more likely to outcompete better-fit species and thus to dominate ecological communities in the absence of niche mechanisms. Although our model is restricted to a pair of interacting species, and overall conclusions are already predicted by the Neutral Theory of Biodiversity, our main objective was to derive a model that can explicitly show the functional relationship between population densities and community mono-dominance odds. Overall, our study provides a straightforward understanding of how a stochastic process (i.e., drift) may alter the outcome expected from selection alone (i.e., fitness inequalities among species) and the resulting pattern of unstable coexistence among species.
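
    A minimal sketch of the kind of zero-sum stochastic dynamics described, with invented numbers (this is a generic Moran-type scheme, not necessarily the authors' exact formulation): a community of fixed size in which, at each step, one random individual dies and is replaced by the offspring of an individual chosen with fitness-weighted probability. The estimated probability that the less-fit species excludes the fitter one grows with its initial abundance, illustrating how initial abundance can partly offset a fitness disadvantage.

```python
import numpy as np

rng = np.random.default_rng(6)

J = 50                       # community size (zero-sum: total abundance is conserved)
w_A, w_B = 0.95, 1.00        # fitness of species A (less fit) and species B (illustrative)

def prob_A_dominates(n_A0, n_rep=300):
    """Estimate the probability that species A excludes B, starting from n_A0 individuals of A."""
    wins = 0
    for _ in range(n_rep):
        n_A = n_A0
        while 0 < n_A < J:
            # drift: a random individual dies; selection: the replacement is fitness-weighted
            dies_A = rng.random() < n_A / J
            born_A = rng.random() < w_A * n_A / (w_A * n_A + w_B * (J - n_A))
            n_A += int(born_A) - int(dies_A)
        wins += (n_A == J)
    return wins / n_rep

for n_A0 in (5, 25, 45):
    print(f"initial abundance of A = {n_A0:2d}/{J}:  P(A excludes B) ~ {prob_A_dominates(n_A0):.3f}")
```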

  5. Unreliable Retrial Queues in a Random Environment

    DTIC Science & Technology

    2007-09-01

    equivalent to the stochasticity of the matrix Ĝ. It is generally known from Perron-Frobenius theory that a given square matrix M is stochastic if and...only if its maximum positive eigenvalue (i.e., its Perron eigenvalue) sp(M) is equal to unity. A simple analytical condition that guarantees the

  6. Retention performance of green roofs in representative climates worldwide

    NASA Astrophysics Data System (ADS)

    Viola, F.; Hellies, M.; Deidda, R.

    2017-10-01

    The ongoing process of global urbanization contributes to an increase in stormwater runoff from impervious surfaces, also threatening water quality. Green roofs have proven to be innovative stormwater management measures that partially restore natural states by enhancing interception, infiltration and evapotranspiration fluxes. The amount of water retained within green roofs depends not only on their depth, but also on the climate, which drives the stochastic soil moisture dynamics. In this context, a simple tool for assessing the performance of green roofs worldwide in terms of retained water is still missing and highly desirable for practical assessments. The aim of this work is to explore the retention performance of green roofs as a function of their depth and in different climate regimes. Two soil depths are investigated, one representing the intensive configuration and another representing the extensive one. The role of the climate in driving water retention is represented by rainfall and potential evapotranspiration dynamics. A simple conceptual weather generator has been implemented and used for stochastic simulation of daily rainfall and potential evapotranspiration. The stochastic forcing is used as an input to a simple conceptual hydrological model for estimating the long-term water partitioning between rainfall, runoff and actual evapotranspiration. Coupling the stochastic weather generator with the conceptual hydrological model, we assessed the amount of rainfall diverted into evapotranspiration for different combinations of annual rainfall and potential evapotranspiration in five representative climatic regimes. The results quantify the capability of green roofs to retain rainfall and consequently to reduce discharges into sewer systems at an annual time scale. The role of substrate depth is recognized to be crucial in determining green roof retention performance, which in general increases from extensive to intensive settings. Looking at the role of climatic conditions, namely annual rainfall, potential evapotranspiration and their seasonality cycles, we found that they drive green roof retention performance, which is highest when rainfall and temperature are in phase. Finally, we provide design charts for a first approximation of the possible hydrological benefits deriving from the implementation of intensive or extensive green roofs in different world areas. As an example, 25 large cities are indicated as benchmark case studies.
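
    A toy version of the coupled tool chain described above, with every number invented: daily rainfall is generated by a simple wet-day/exponential-depth stochastic generator, potential evapotranspiration is held constant, and a single-bucket substrate model partitions water between overflow (runoff) and actual evapotranspiration. The retained fraction is reported for a shallow (extensive-like) and a deeper (intensive-like) storage capacity.

```python
import numpy as np

rng = np.random.default_rng(7)

def retention_fraction(capacity_mm, years=50, p_wet=0.3, mean_depth=8.0, pet=3.0):
    """Fraction of rainfall diverted to evapotranspiration by a single-bucket green roof model."""
    days = 365 * years
    # simple stochastic weather generator: wet days with probability p_wet, exponential depths (mm)
    rain = np.where(rng.random(days) < p_wet, rng.exponential(mean_depth, days), 0.0)
    storage, aet = 0.0, 0.0
    for r in rain:
        storage += r
        if storage > capacity_mm:          # bucket overflows: the excess becomes runoff
            storage = capacity_mm
        et = min(storage, pet)             # actual ET limited by the water held in the substrate
        storage -= et
        aet += et
    return aet / rain.sum()

print("extensive-like roof (20 mm storage): retained fraction ~", round(retention_fraction(20.0), 2))
print("intensive-like roof (80 mm storage): retained fraction ~", round(retention_fraction(80.0), 2))
```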

  7. Discrete and continuous models for tissue growth and shrinkage.

    PubMed

    Yates, Christian A

    2014-06-07

    The incorporation of domain growth into stochastic models of biological processes is of increasing interest to mathematical modellers and biologists alike. In many situations, especially in developmental biology, the growth of the underlying tissue domain plays an important role in the redistribution of particles (be they cells or molecules) which may move and react atop the domain. Although such processes have largely been modelled using deterministic, continuum models there is an increasing appetite for individual-based stochastic models which can capture the fine details of the biological movement processes which are being elucidated by modern experimental techniques, and also incorporate the inherent stochasticity of such systems. In this work we study a simple stochastic model of domain growth. From a basic version of this model, Hywood et al. (2013) were able to derive a Fokker-Planck equation (FPE) (in this case an advection-diffusion partial differential equation on a growing domain) which describes the evolution of the probability density of some tracer particles on the domain. We extend their work so that a variety of different domain growth mechanisms can be incorporated and demonstrate a good agreement between the mean tracer density and the solution of the FPE in each case. In addition we incorporate domain shrinkage (via element death) into our individual-level model and demonstrate that we are able to derive coefficients for the FPE in this case as well. For situations in which the drift and diffusion coefficients are not readily available we introduce a numerical coefficient estimation approach and demonstrate the accuracy of this approach by comparing it with situations in which an analytical solution is obtainable. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. The role of noise in self-organized decision making by the true slime mold Physarum polycephalum.

    PubMed

    Meyer, Bernd; Ansorge, Cedrick; Nakagaki, Toshiyuki

    2017-01-01

    Self-organized mechanisms are frequently encountered in nature and known to achieve flexible, adaptive control and decision-making. Noise plays a crucial role in such systems: It can enable a self-organized system to reliably adapt to short-term changes in the environment while maintaining a generally stable behavior. This is fundamental in biological systems because they must strike a delicate balance between stable and flexible behavior. In the present paper we analyse the role of noise in the decision-making of the true slime mold Physarum polycephalum, an important model species for the investigation of computational abilities in simple organisms. We propose a simple biological experiment to investigate the reaction of P. polycephalum to time-variant risk factors and present a stochastic extension of an established mathematical model for P. polycephalum to analyze this experiment. It predicts that, due to the mechanism of stochastic resonance, noise can enable P. polycephalum to correctly assess time-variant risk factors, while the corresponding noise-free system fails to do so. Beyond the study of P. polycephalum we demonstrate that the influence of noise on self-organized decision-making is not tied to a specific organism. Rather it is a general property of the underlying process dynamics, which appears to be universal across a wide range of systems. Our study thus provides further evidence that stochastic resonance is a fundamental component of the decision-making in self-organized macroscopic and microscopic groups and organisms.

  9. The role of noise in self-organized decision making by the true slime mold Physarum polycephalum

    PubMed Central

    Ansorge, Cedrick; Nakagaki, Toshiyuki

    2017-01-01

    Self-organized mechanisms are frequently encountered in nature and known to achieve flexible, adaptive control and decision-making. Noise plays a crucial role in such systems: It can enable a self-organized system to reliably adapt to short-term changes in the environment while maintaining a generally stable behavior. This is fundamental in biological systems because they must strike a delicate balance between stable and flexible behavior. In the present paper we analyse the role of noise in the decision-making of the true slime mold Physarum polycephalum, an important model species for the investigation of computational abilities in simple organisms. We propose a simple biological experiment to investigate the reaction of P. polycephalum to time-variant risk factors and present a stochastic extension of an established mathematical model for P. polycephalum to analyze this experiment. It predicts that—due to the mechanism of stochastic resonance—noise can enable P. polycephalum to correctly assess time-variant risk factors, while the corresponding noise-free system fails to do so. Beyond the study of P. polycephalum we demonstrate that the influence of noise on self-organized decision-making is not tied to a specific organism. Rather it is a general property of the underlying process dynamics, which appears to be universal across a wide range of systems. Our study thus provides further evidence that stochastic resonance is a fundamental component of the decision-making in self-organized macroscopic and microscopic groups and organisms. PMID:28355213

  10. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  11. A general stochastic model for sporophytic self-incompatibility.

    PubMed

    Billiard, Sylvain; Tran, Viet Chi

    2012-01-01

    Disentangling the processes leading populations to extinction is a major topic in ecology and conservation biology. The difficulty of finding a mate in many species is one such process. Here, we investigate the impact of self-incompatibility in flowering plants, where several inter-compatible classes of individuals exist but individuals of the same class cannot mate. We model pollen limitation through different relationships between mate availability and fertilization success. After deriving a general stochastic model, we focus on the simple case of distylous plant species where only two classes of individuals exist. We first study the dynamics of such a species in a large-population limit and then look for an approximation of the extinction probability in small populations. This leads us to consider inhomogeneous random walks on the positive quadrant. We compare the dynamics of distylous species to self-fertile species with and without inbreeding depression, to obtain the conditions under which self-incompatible species can be less sensitive to extinction while they can suffer more pollen limitation. © Springer-Verlag 2011

  12. Delay chemical master equation: direct and closed-form solutions

    PubMed Central

    Leier, Andre; Marquez-Lago, Tatiana T.

    2015-01-01

    The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived. PMID:26345616

  13. Delay chemical master equation: direct and closed-form solutions.

    PubMed

    Leier, Andre; Marquez-Lago, Tatiana T

    2015-07-08

    The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived.

  14. Mutant number distribution in an exponentially growing population

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Antal, Tibor

    2015-01-01

    We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491-511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process.
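
    A simplified Monte Carlo sketch in the spirit of this model, under Lea-Coulson-style assumptions that differ from the paper's full treatment: cell death is ignored, so each mutant clone is a pure-birth (Yule) process whose size at age tau is geometrically distributed with mean exp(beta*tau), while the wild type grows deterministically. The mutation rate and final population size are arbitrary. The output illustrates the characteristic heavy tail of the mutant-number distribution.

```python
import numpy as np

rng = np.random.default_rng(8)

beta, mu = 1.0, 3e-7          # growth rate and mutation rate per wild-type cell per unit time (arbitrary)
N_final = 1e7                 # final wild-type population size
T = np.log(N_final) / beta    # time at which the wild type reaches N_final

def mutant_count():
    # mutations arrive as an inhomogeneous Poisson process with rate mu * exp(beta * t)
    n_mut = rng.poisson(mu * (N_final - 1.0) / beta)
    if n_mut == 0:
        return 0
    u = rng.random(n_mut)
    t_arrival = np.log(1.0 + u * (N_final - 1.0)) / beta     # inverse-CDF sampling of arrival times
    age = T - t_arrival
    # each clone grows as a Yule process: its size at age tau is geometric with mean exp(beta*tau)
    sizes = rng.geometric(np.exp(-beta * age))
    return int(sizes.sum())

samples = np.array([mutant_count() for _ in range(20000)])
print("P(no mutants)        :", round(float(np.mean(samples == 0)), 3))
print("median mutant number :", int(np.median(samples)))
print("mean mutant number   :", round(float(samples.mean()), 1), " (heavy tail: mean >> median)")
```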

  15. Bayesian parameter estimation for stochastic models of biological cell migration

    NASA Astrophysics Data System (ADS)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically, stochastic models are applied, with parameters extracted by fitting the models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure relying directly on the positions of the cell's trajectory and the covariance matrix of those positions. It is shown that the covariance is identical to the spatial aging correlation function for the assumed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data, showing reliable parameter estimation from single cell paths.

  16. The Physics of Decision Making:. Stochastic Differential Equations as Models for Neural Dynamics and Evidence Accumulation in Cortical Circuits

    NASA Astrophysics Data System (ADS)

    Holmes, Philip; Eckhoff, Philip; Wong-Lin, K. F.; Bogacz, Rafal; Zacksenhouse, Miriam; Cohen, Jonathan D.

    2010-03-01

    We describe how drift-diffusion (DD) processes - systems familiar in physics - can be used to model evidence accumulation and decision-making in two-alternative, forced choice tasks. We sketch the derivation of these stochastic differential equations from biophysically-detailed models of spiking neurons. DD processes are also continuum limits of the sequential probability ratio test and are therefore optimal in the sense that they deliver decisions of specified accuracy in the shortest possible time. This leaves open the critical balance of accuracy and speed. Using the DD model, we derive a speed-accuracy tradeoff that optimizes reward rate for a simple perceptual decision task, compare human performance with this benchmark, and discuss possible reasons for prevalent sub-optimality, focussing on the question of uncertain estimates of key parameters. We present an alternative theory of robust decisions that allows for uncertainty, and show that its predictions provide better fits to experimental data than a more prevalent account that emphasises a commitment to accuracy. The article illustrates how mathematical models can illuminate the neural basis of cognitive processes.
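
    A short sketch of the drift-diffusion process described, with invented parameters: evidence x follows dx = A dt + c dW, and a choice is made when x first hits +z (correct) or -z (error). Simulated accuracy is compared with the standard first-passage result P(correct) = 1/(1 + exp(-2Az/c^2)), and the mean decision time shows the speed-accuracy trade-off controlled by the threshold z (a small Euler-discretization bias is expected).

```python
import numpy as np

rng = np.random.default_rng(9)
A, c, dt = 0.2, 1.0, 0.002      # drift, noise amplitude, Euler step (illustrative values)

def simulate(z, n_trials=2000):
    """Simulate first passage of dx = A dt + c dW through +z (correct) or -z (error)."""
    x = np.zeros(n_trials)
    t = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    while active.any():
        n = int(active.sum())
        x[active] += A * dt + c * np.sqrt(dt) * rng.standard_normal(n)
        t[active] += dt
        active &= np.abs(x) < z              # freeze trials that have crossed a boundary
    return float(np.mean(x >= z)), float(t.mean())

for z in (0.5, 1.0, 2.0):
    acc, mean_rt = simulate(z)
    acc_theory = 1.0 / (1.0 + np.exp(-2.0 * A * z / c**2))
    print(f"threshold z={z:3.1f}: accuracy {acc:.3f} (theory {acc_theory:.3f}), mean decision time {mean_rt:.2f}")
```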

  17. Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.

    PubMed

    Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans

    2018-01-01

    Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining the relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identifying homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.

  18. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  19. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems

    PubMed Central

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.

    2017-01-01

    Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach, to describe extrinsic and external variability, with the tau-leaping algorithm, to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage at an acceptable loss of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, which is rarely attempted due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
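
    A rough sketch of how the two ingredients fit together, not the authors' implementation: intrinsic noise is simulated with a plain fixed-step tau-leap for a toy birth-death gene model, while extrinsic variability in the production rate is propagated through three sigma points of a one-dimensional unscented transform (the parameterization below is one common choice and is assumed here for illustration).

```python
import numpy as np

rng = np.random.default_rng(10)

gamma, tau, t_end = 0.5, 0.05, 50.0      # degradation rate, leap size, horizon (illustrative)

def tau_leap_mean(k):
    """Mean copy number of a birth-death gene model, intrinsic noise simulated by tau-leaping."""
    x, xs = 0, []
    for _ in range(int(t_end / tau)):
        # number of firings of each reaction in one leap is approximated as Poisson
        x = max(x + rng.poisson(k * tau) - rng.poisson(gamma * x * tau), 0)
        xs.append(x)
    return np.mean(xs[len(xs) // 2:])

# Extrinsic variability: production rate k ~ Normal(50, 5^2), handled with 3 sigma points
# (a common unscented-transform parameterization; an assumption, not the paper's exact scheme)
k_mean, k_sd, lam = 50.0, 5.0, 2.0
points  = np.array([k_mean, k_mean + np.sqrt(1 + lam) * k_sd, k_mean - np.sqrt(1 + lam) * k_sd])
weights = np.array([lam / (1 + lam), 0.5 / (1 + lam), 0.5 / (1 + lam)])

outputs = np.array([tau_leap_mean(k) for k in points])
print("sigma-point estimate of the mean output:", round(float(np.dot(weights, outputs)), 1))
print("output at the mean parameter only      :", round(float(tau_leap_mean(k_mean)), 1))
```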

  20. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved space-times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.

  1. Stochastic fluctuations and the detectability limit of network communities.

    PubMed

    Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo

    2013-12-01

    We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdös-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found with further modularity-optimization methods and with methods based on different heuristics, we conclude that a correct definition of the detectability limit must indeed take into account the stochastic fluctuations of the network construction.

  2. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.

  3. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    NASA Astrophysics Data System (ADS)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant with a log-normally distributed stochastic variable. The incubation time distribution is then also shown to be log-normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.

  4. Noise effects in nonlinear biochemical signaling

    NASA Astrophysics Data System (ADS)

    Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert

    2012-01-01

    It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.

  5. Hermite-Hadamard type inequality for φ_h-convex stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr

    2016-04-18

    The main aim of the present paper is to introduce φ_h-convex stochastic processes and to investigate the main properties of these mappings. Moreover, we prove Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.
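
    For orientation, the classical deterministic Hermite-Hadamard inequality that such results generalize states that, for a convex function f on [a, b],

```latex
f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}.
```

    In the stochastic setting, analogous chains of inequalities are typically stated almost everywhere for a convex (here, φ_h-convex) stochastic process, with the integral understood in the mean-square sense.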

  6. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    NASA Astrophysics Data System (ADS)

    Schnoerr, David; Sanguinetti, Guido; Grima, Ramon

    2017-03-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.

  7. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    NASA Astrophysics Data System (ADS)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-05-01

    State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
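
    A compact numerical illustration of the estimation difficulty described, under assumptions made only for this sketch: data are simulated from a univariate random-walk-plus-noise SSM with measurement variance much larger than process variance, the exact likelihood is computed with a hand-written Kalman filter, and the log-likelihood profile over the process variance is printed. In this regime the profile tends to be shallow near the truth, which is one reason parameters (and hence states) are hard to pin down.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulate a univariate linear Gaussian SSM: x_t = x_{t-1} + w_t,  y_t = x_t + v_t
n, q_true, r_true = 200, 0.1, 1.0        # length, process variance, observation variance (r >> q)
x = np.cumsum(np.sqrt(q_true) * rng.standard_normal(n))
y = x + np.sqrt(r_true) * rng.standard_normal(n)

def kalman_loglik(q, r):
    """Exact Gaussian log-likelihood of y under the random-walk-plus-noise model."""
    m, P, ll = 0.0, 10.0, 0.0            # rough diffuse prior on the initial state
    for obs in y:
        P = P + q                        # predict
        S = P + r                        # innovation variance
        ll += -0.5 * (np.log(2.0 * np.pi * S) + (obs - m) ** 2 / S)
        K = P / S                        # Kalman gain
        m += K * (obs - m)               # update
        P = (1.0 - K) * P
    return ll

print("profile of the log-likelihood over the process variance q (r fixed at its true value):")
for q in (0.01, 0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"  q = {q:4.2f}   log-likelihood = {kalman_loglik(q, r_true):9.2f}")
```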

  8. Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity

    NASA Astrophysics Data System (ADS)

    Thual, S.; Majda, A.; Chen, N.

    2017-12-01

    The El Niño-Southern Oscillation (ENSO) has significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad-hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events, while the increased convective activity and cloud cover during the following spring contributes to the shutdown of those events by blocking incoming shortwave solar radiation. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, the eastern Pacific moderate and super El Niño, the central Pacific El Niño and La Niña all show a realistic chronology, with a tendency to peak in boreal winter and decreased predictability in spring, consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.

  9. Quantum stochastic calculus associated with quadratic quantum noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in

    2016-02-15

    We first study a class of fundamental quantum stochastic processes induced by the generators of a six dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, annihilation operator, creation operator, conservation, and time, and then we study the quantum stochastic integrals associated with the class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.

  10. Extended H2 synthesis for multiple degree-of-freedom controllers

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Knospe, Carl R.

    1992-01-01

    H2 synthesis techniques are developed for a general multiple-input-multiple-output (MIMO) system subject to both stochastic and deterministic disturbances. The H2 synthesis is extended by incorporating power-spectral-density information on anticipated disturbances into the controller-design process, as well as by frequency weightings of generalized coordinates and control inputs. The methodology is applied to a simple single-input-multiple-output (SIMO) problem, analogous to the type of vibration isolation problem anticipated in microgravity research experiments.

  11. On orthogonality preserving quadratic stochastic operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhamedov, Farrukh; Taha, Muhammad Hafizuddin Mohd

    2015-05-15

    A quadratic stochastic operator (in short QSO) is usually used to present the time evolution of differing species in biology. Some quadratic stochastic operators have been studied by Lotka and Volterra. In the present paper, we first give a simple characterization of Volterra QSO in terms of absolute continuity of discrete measures. Further, we introduce a notion of orthogonal preserving QSO, and describe such operators defined on the two-dimensional simplex. It turns out that orthogonal preserving QSOs are permutations of Volterra QSO. The associativity of genetic algebras generated by orthogonal preserving QSO is studied too.
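
    For readers unfamiliar with the objects involved, the following is a standard textbook-style definition included as background (it is not quoted from the paper): a QSO acts on the probability simplex, and the Volterra subclass restricts offspring types to parental types.

    ```latex
    % Quadratic stochastic operator V on the simplex S^{m-1}:
    \[
    (Vx)_k = \sum_{i,j=1}^{m} p_{ij,k}\, x_i x_j, \qquad
    p_{ij,k} \ge 0, \quad p_{ij,k} = p_{ji,k}, \quad \sum_{k=1}^{m} p_{ij,k} = 1 .
    \]
    % Volterra QSO: p_{ij,k} = 0 whenever k \notin \{i,j\}; equivalently
    \[
    (Vx)_k = x_k \Bigl( 1 + \sum_{i=1}^{m} a_{ki} x_i \Bigr), \qquad
    a_{ki} = -a_{ik}, \quad |a_{ki}| \le 1 .
    \]
    ```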

  12. Threshold for extinction and survival in stochastic tumor immune system

    NASA Astrophysics Data System (ADS)

    Li, Dongxi; Cheng, Fangjuan

    2017-10-01

    This paper mainly investigates the stochastic character of tumor growth and extinction in the presence of the immune response of a host organism. Firstly, a mathematical model describing the interaction and competition between the tumor cells and the immune system is established based on Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence and stochastic persistence of tumor cells are derived by rigorous theoretical proofs. Finally, stochastic simulations are performed to substantiate and illustrate the conclusions we have derived. The modeling results will be beneficial for understanding the concept of immunoediting and for developing cancer immunotherapy. Besides, our simple theoretical model can help to obtain new insight into the complexity of tumor growth.

  13. Simple map in action-angle coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima

    A simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. Here, action-angle coordinates, the safety factor, and the equilibrium generating function for the simple map are calculated analytically. The simple map in action-angle coordinates is derived from canonical transformations. This map cannot be integrated across the separatrix surface because of the singularity in the safety factor there. The stochastic broadening of the ideal separatrix surface in action-angle representation is calculated by adding a perturbation to the simple map equilibrium generating function. This perturbation represents the spatial noise and field errors typical of the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] tokamak. The stationary Fourier modes of the perturbation have poloidal and toroidal mode numbers (m,n) = {(3,1),(4,1),(6,2),(7,2),(8,2),(9,3),(10,3),(11,3)} with amplitude δ = 0.8×10^-5. Near the X-point, about 0.12% of toroidal magnetic flux inside the separatrix, and about 0.06% of the poloidal flux inside the separatrix is lost. When the distance from the O-point to the X-point is 1 m, the width of stochastic layer near the X-point is about 1.4 cm. The average value of the action on the last good surface is 0.19072 compared to the action value of 3/5π on the separatrix. The average width of stochastic layer in action coordinate is 2.7×10^-4, while the average area of the stochastic layer in action-angle phase space is 1.69017×10^-3. On average, about 0.14% of action or toroidal flux inside the ideal separatrix is lost due to broadening. Roughly five times more toroidal flux is lost in the simple map than in DIII-D for the same perturbation [A. Punjabi, H. Ali, A. Boozer, and T. Evans, Bull. Amer. Phys. Soc. 52, 124 (2007)].

  14. Simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima

    2008-07-01

    A simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. Here, action-angle coordinates, the safety factor, and the equilibrium generating function for the simple map are calculated analytically. The simple map in action-angle coordinates is derived from canonical transformations. This map cannot be integrated across the separatrix surface because of the singularity in the safety factor there. The stochastic broadening of the ideal separatrix surface in action-angle representation is calculated by adding a perturbation to the simple map equilibrium generating function. This perturbation represents the spatial noise and field errors typical of the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] tokamak. The stationary Fourier modes of the perturbation have poloidal and toroidal mode numbers (m,n) = {(3,1),(4,1),(6,2),(7,2),(8,2),(9,3),(10,3),(11,3)} with amplitude δ = 0.8×10^-5. Near the X-point, about 0.12% of toroidal magnetic flux inside the separatrix, and about 0.06% of the poloidal flux inside the separatrix is lost. When the distance from the O-point to the X-point is 1 m, the width of stochastic layer near the X-point is about 1.4 cm. The average value of the action on the last good surface is 0.19072 compared to the action value of 3/5π on the separatrix. The average width of stochastic layer in action coordinate is 2.7×10^-4, while the average area of the stochastic layer in action-angle phase space is 1.69017×10^-3. On average, about 0.14% of action or toroidal flux inside the ideal separatrix is lost due to broadening. Roughly five times more toroidal flux is lost in the simple map than in DIII-D for the same perturbation [A. Punjabi, H. Ali, A. Boozer, and T. Evans, Bull. Amer. Phys. Soc. 52, 124 (2007)].

  15. Using Dynamic Stochastic Modelling to Estimate Population Risk Factors in Infectious Disease: The Example of FIV in 15 Cat Populations

    PubMed Central

    Fouchet, David; Leblanc, Guillaume; Sauvage, Frank; Guiserix, Micheline; Poulet, Hervé; Pontier, Dominique

    2009-01-01

    Background In natural cat populations, Feline Immunodeficiency Virus (FIV) is transmitted through bites between individuals. Factors such as the density of cats within the population or the sex-ratio can have potentially strong effects on the frequency of fights between individuals and hence appear as important population risk factors for FIV. Methodology/Principal Findings To study such population risk factors, we present data on FIV prevalence in 15 cat populations in northeastern France. We investigate five key social factors of cat populations: the density of cats, the sex-ratio, the number of males and the mean age of males and females within the population. We overcome the problem of dependence in the infective status data using sexually-structured dynamic stochastic models. Only the age of males and females had an effect (p = 0.043 and p = 0.02, respectively) on the male-to-female transmission rate. Due to multiple tests, it is even likely that these effects are, in reality, not significant. Finally we show that, in our study area, the data can be explained by a very simple model that does not invoke any risk factor. Conclusion Our conclusion is that, in host-parasite systems in general, fluctuations due to stochasticity in the transmission process are naturally very large and may alone explain a larger part of the variability in observed disease prevalence between populations than previously expected. Finally, we determined confidence intervals for the simple model parameters that can be used to further aid in management of the disease. PMID:19888418

  16. A hierarchical stress release model for synthetic seismicity

    NASA Astrophysics Data System (ADS)

    Bebbington, Mark

    1997-06-01

    We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
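
    The mechanism sketched in the abstract (stochastic stress input at nodes, a strength threshold, transfer of excess stress to neighbours) can be caricatured in a few lines; the following is a deliberately minimal one-segment toy with invented parameter values, not the calibrated Middle America Trench model:

    ```python
    import numpy as np

    def stress_release_toy(n_nodes=50, load_scale=0.01, strength=1.0,
                           transfer=0.8, steps=50000, seed=2):
        """Toy 1-D stress release model: random stress input at each node; when a
        node exceeds its strength the excess cascades to a random neighbour, with
        the remainder dissipated. Returns the total stress drop of each event."""
        rng = np.random.default_rng(seed)
        stress = np.zeros(n_nodes)
        events = []
        for _ in range(steps):
            stress += rng.exponential(load_scale, n_nodes)   # slow tectonic loading
            over = np.flatnonzero(stress > strength)
            if over.size == 0:
                continue
            drop = 0.0
            while over.size:                                  # cascade of transfers
                i = over[0]
                excess = stress[i] - strength
                stress[i] = strength
                drop += excess
                j = (i + rng.choice([-1, 1])) % n_nodes
                stress[j] += transfer * excess                # 1 - transfer is dissipated
                over = np.flatnonzero(stress > strength)
            events.append(drop)
        return np.array(events)

    sizes = stress_release_toy()
    print("events:", sizes.size, "largest stress drop:", round(sizes.max(), 3))
    ```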

  17. Binary fingerprints at fluctuation-enhanced sensing.

    PubMed

    Chang, Hung-Chih; Kish, Laszlo B; King, Maria D; Kwan, Chiman

    2010-01-01

    We have developed a simple way to generate binary patterns based on spectral slopes in different frequency ranges at fluctuation-enhanced sensing. Such patterns can be considered as binary "fingerprints" of odors. The method has experimentally been demonstrated with a commercial semiconducting metal oxide (Taguchi) sensor exposed to bacterial odors (Escherichia coli and Anthrax-surrogate Bacillus subtilis) and processing their stochastic signals. With a single Taguchi sensor, the situations of empty chamber, tryptic soy agar (TSA) medium, or TSA with bacteria could be distinguished with 100% reproducibility. The bacterium numbers were in the range of 2.5 × 10^4-10^6. To illustrate the relevance for ultra-low power consumption, we show that this new type of signal processing and pattern recognition task can be implemented by simple analog circuitry and a few logic gates with total power consumption in the microwatt range.
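
    A hedged sketch of this kind of processing (the two bands, the band edges and the slope threshold below are illustrative choices, not the paper's): estimate the power spectral density of the fluctuation signal, fit the log-log slope in each band, and threshold each slope into one bit.

    ```python
    import numpy as np
    from scipy.signal import welch

    def binary_fingerprint(x, fs, bands=((1, 10), (10, 100)), slope_threshold=-1.5):
        """One bit per frequency band: 1 if the fitted PSD slope is shallower
        than slope_threshold, 0 otherwise."""
        f, pxx = welch(x, fs=fs, nperseg=4096)
        bits = []
        for lo, hi in bands:
            sel = (f >= lo) & (f <= hi)
            slope = np.polyfit(np.log10(f[sel]), np.log10(pxx[sel]), 1)[0]
            bits.append(int(slope > slope_threshold))
        return bits

    # Synthetic demo: nearly white noise versus strongly low-pass filtered noise
    rng = np.random.default_rng(3)
    fs, n = 1000, 2**16
    white = rng.normal(size=n)
    smoothed = np.convolve(white, np.ones(50) / 50, mode="same")   # steeper spectrum
    print(binary_fingerprint(white, fs), binary_fingerprint(smoothed, fs))
    ```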

  18. Variable classification in the LSST era: exploring a model for quasi-periodic light curves

    NASA Astrophysics Data System (ADS)

    Zinn, J. C.; Kochanek, C. S.; Kozłowski, S.; Udalski, A.; Szymański, M. K.; Soszyński, I.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Skowron, J.; Mróz, P.; Pawlak, M.

    2017-06-01

    The Large Synoptic Survey Telescope (LSST) is expected to yield ~10^7 light curves over the course of its mission, which will require a concerted effort in automated classification. Stochastic processes provide one means of quantitatively describing variability with the potential advantage over simple light-curve statistics that the parameters may be physically meaningful. Here, we survey a large sample of periodic, quasi-periodic and stochastic Optical Gravitational Lensing Experiment-III variables using the damped random walk (DRW; CARMA(1,0)) and quasi-periodic oscillation (QPO; CARMA(2,1)) stochastic process models. The QPO model is described by an amplitude, a period and a coherence time-scale, while the DRW has only an amplitude and a time-scale. We find that the periodic and quasi-periodic stellar variables are generally better described by a QPO than a DRW, while quasars are better described by the DRW model. There are ambiguities in interpreting the QPO coherence time due to non-sinusoidal light-curve shapes, signal-to-noise ratio, error mischaracterizations and cadence. Higher order implementations of the QPO model that better capture light-curve shapes are necessary for the coherence time to have its implied physical meaning. Independent of physical meaning, the extra parameter of the QPO model successfully distinguishes most of the classes of periodic and quasi-periodic variables we consider.
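
    As a hedged illustration of the simpler of the two models (arbitrary parameter values): the damped random walk is an Ornstein-Uhlenbeck process, so it can be simulated exactly on an irregular observing cadence using its Gaussian transition density.

    ```python
    import numpy as np

    def simulate_drw(times, tau=100.0, sigma=0.2, mean=18.0, seed=4):
        """Exact simulation of a damped random walk (OU process) at arbitrary times.
        tau: damping time-scale; sigma: asymptotic standard deviation; mean: baseline."""
        rng = np.random.default_rng(seed)
        x = np.empty(len(times))
        x[0] = mean + sigma * rng.normal()
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            a = np.exp(-dt / tau)                    # autocorrelation over the gap dt
            sd = sigma * np.sqrt(1.0 - a**2)         # conditional standard deviation
            x[i] = mean + a * (x[i - 1] - mean) + sd * rng.normal()
        return x

    t = np.sort(np.random.default_rng(5).uniform(0, 3000, 400))    # irregular cadence
    print(simulate_drw(t)[:5].round(3))
    ```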

  19. Robust authentication through stochastic femtosecond laser filament induced scattering surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Haisu; Tzortzakis, Stelios, E-mail: stzortz@iesl.forth.gr; Materials Science and Technology Department, University of Crete, 71003 Heraklion

    2016-05-23

    We demonstrate a reliable authentication method based on femtosecond laser filament induced scattering surfaces. The stochastic nature of the nonlinear laser fabrication results in uniquely robust authentication properties. This work provides a simple and viable solution for practical applications in product authentication, while also opening the way for incorporating such elements in transparent media and coupling them into integrated optical circuits.

  20. Stochastic oscillations in models of epidemics on a network of cities

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.; McKane, A. J.

    2011-11-01

    We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction fjk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed point structure of the deterministic equations is remarkably simple: A unique nontrivial fixed point always exists and has the feature that the fraction of susceptible, infected, and recovered individuals is the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: All oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.

  1. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
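
    The basic likelihood-ratio (score-function) identity behind such estimators can be illustrated on a toy static example; the sketch below shows only d/dtheta E[f(X)] = E[f(X) d/dtheta log p(X; theta)] for a Poisson variable, not the paper's centered covariance formulation for dynamics.

    ```python
    import numpy as np

    def lr_sensitivity_poisson(theta=4.0, n=200000, seed=6):
        """Likelihood-ratio estimate of d/dtheta E[f(X)] for X ~ Poisson(theta),
        f(x) = x**2, using the Poisson score d/dtheta log p = x/theta - 1."""
        rng = np.random.default_rng(seed)
        x = rng.poisson(theta, n)
        score = x / theta - 1.0
        estimate = np.mean(x**2 * score)
        exact = 2 * theta + 1                  # since E[X^2] = theta^2 + theta
        return estimate, exact

    print(lr_sensitivity_poisson())            # estimate should be close to 9.0
    ```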

  2. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  3. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  4. Valuation of exotic options in the framework of Levy processes

    NASA Astrophysics Data System (ADS)

    Milev, Mariyan; Georgieva, Svetla; Markovska, Veneta

    2013-12-01

    In this paper we explore a straightforward procedure for pricing derivatives with the Monte Carlo approach when the underlying process is a jump-diffusion. We compare the Black-Scholes model with one of its extensions, the Merton model. The latter model is better at capturing market phenomena and is comparable to stochastic volatility models in terms of pricing accuracy. We present simulations of asset paths and the pricing of barrier options for both geometric Brownian motion and exponential Lévy processes, the latter in the concrete case of the Merton model. A desired level of accuracy is obtained with simple computer operations in MATLAB in efficient computational time.
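
    A minimal sketch of the kind of calculation described (written in Python rather than the MATLAB of the paper, with invented parameter values): simulate Merton jump-diffusion paths on a discrete grid and price an up-and-out barrier call by Monte Carlo.

    ```python
    import numpy as np

    def merton_barrier_call(s0=100.0, k=100.0, barrier=130.0, r=0.05, sigma=0.2,
                            lam=0.5, mu_j=-0.1, sig_j=0.15, t=1.0,
                            n_steps=252, n_paths=100000, seed=7):
        """Monte Carlo price of a discretely monitored up-and-out call under the
        Merton jump-diffusion model (lognormal jumps, risk-neutral drift correction)."""
        rng = np.random.default_rng(seed)
        dt = t / n_steps
        kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0       # E[e^J] - 1
        drift = (r - 0.5 * sigma**2 - lam * kappa) * dt
        log_s = np.full(n_paths, np.log(s0))
        alive = np.ones(n_paths, dtype=bool)              # paths not yet knocked out
        for _ in range(n_steps):
            n_jumps = rng.poisson(lam * dt, n_paths)
            jumps = n_jumps * mu_j + np.sqrt(n_jumps) * sig_j * rng.normal(size=n_paths)
            log_s += drift + sigma * np.sqrt(dt) * rng.normal(size=n_paths) + jumps
            alive &= np.exp(log_s) < barrier              # monitor the barrier
        payoff = np.where(alive, np.maximum(np.exp(log_s) - k, 0.0), 0.0)
        return np.exp(-r * t) * payoff.mean()

    print(round(merton_barrier_call(), 4))
    ```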

  5. Getting Astrophysical Information from LISA Data

    NASA Technical Reports Server (NTRS)

    Stebbins, R. T.; Bender, P. L.; Folkner, W. M.

    1997-01-01

    Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected to be present: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals, except the isotropic backgrounds and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.

  6. Turbulent fluctuations and the excitation of Z Cam outbursts

    NASA Astrophysics Data System (ADS)

    Ross, Johnathan; Latter, Henrik N.

    2017-09-01

    Z Cam variables are a subclass of dwarf nova that lie near a global bifurcation between outbursting ('limit cycle') and non-outbursting ('standstill') states. It is believed that variations in the secondary star's mass-injection rate instigate transitions between the two regimes. In this paper, we explore an alternative trigger for these transitions: stochastic fluctuations in the disc's turbulent viscosity. We employ simple one-zone and global viscous models which, though inappropriate for detailed matching to observed light curves, clearly indicate that turbulent disc fluctuations induce outbursts when the system is sufficiently close to the global bifurcation point. While the models easily produce the observed 'outburst/dip' pairs exhibited by Z Cam and Nova-like variables, they struggle to generate long trains of outbursts. We conclude that mass transfer variability is the dominant physical process determining the overall Z Cam standstill/outburst pattern, but that viscous stochasticity provides an additional ingredient explaining some of the secondary features observed.

  7. Fluctuation relations between hierarchical kinetically equivalent networks with Arrhenius-type transitions and their roles in systems and structural biology.

    PubMed

    Deng, De-Ming; Lu, Yi-Ta; Chang, Cheng-Hung

    2017-06-01

    The validity of using simple kinetic schemes to determine the stochastic properties of a complex system depends on whether the fluctuations generated from hierarchical equivalent schemes are consistent with one another. To analyze this consistency, we perform lumping processes on the stochastic differential equations and the generalized fluctuation-dissipation theorem and apply them to networks with the frequently encountered Arrhenius-type transition rates. The explicit Langevin force derived from those networks enables us to calculate the state fluctuations caused by the intrinsic and extrinsic noises on the free energy surface and deduce their relations between kinetically equivalent networks. In addition to its applicability to wide classes of network related systems, such as those in structural and systems biology, the result sheds light on the fluctuation relations for general physical variables in Keizer's canonical theory.

  8. Nuclear quadrupole resonance lineshape analysis for different motional models: Stochastic Liouville approach

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.

    2011-12-01

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.

  9. Dissipation in microwave quantum circuits with hybrid nanowire Josephson elements

    NASA Astrophysics Data System (ADS)

    Mugnai, D.; Ranfagni, A.; Agresti, A.

    2017-04-01

    Recent experiments on hybrid Josephson junctions have made the argument a topical subject. However, a quantity which remains still unknown is the tunneling (or response) time, which is strictly connected to the role that dissipation plays in the dynamics of the complete system. A simple way for evaluating dissipation in microwave circuits, previously developed for describing the dynamics of conventional Josephson junctions, is now presented as suitable for application even to non-conventional junctions. The method is based on a stochastic model, as derived from the telegrapher's equation, and is particularly devoted to the case of junctions loaded by real transmission lines. When the load is constituted by lumped-constant circuits, a connection with the stochastic model is also maintained. The theoretical model demonstrated its ability to analyze both classically-allowed and forbidden processes, and has found a wide field of applicability, namely in all cases in which dissipative effects cannot be ignored.

  10. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems.

    PubMed

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J

    2017-07-15

    Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences of individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, which is rarely applied due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. MATLAB code and supplementary data are available at Bioinformatics online.
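
    As a hedged sketch of the tau-leaping ingredient alone (fixed step size, no adaptive tau selection and no sigma-point layer, which the paper's method combines with it), each reaction channel fires a Poisson-distributed number of times per leap:

    ```python
    import numpy as np

    def tau_leap(x0, stoich, propensities, tau=0.01, n_leaps=1000, seed=8):
        """Fixed-step tau-leaping for a reaction network.
        stoich: (n_reactions, n_species) stoichiometry matrix.
        propensities: function mapping the state to a vector of propensities."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        traj = [x.copy()]
        for _ in range(n_leaps):
            a = propensities(x)
            k = rng.poisson(a * tau)                 # reaction counts during the leap
            x = np.maximum(x + k @ stoich, 0.0)      # crude guard against negative counts
            traj.append(x.copy())
        return np.array(traj)

    # Toy gene expression: 0 -> mRNA at rate 20, mRNA -> 0 at rate 1 per molecule
    stoich = np.array([[1.0], [-1.0]])
    prop = lambda x: np.array([20.0, 1.0 * x[0]])
    traj = tau_leap([0.0], stoich, prop)
    print("final mRNA count:", traj[-1, 0])          # fluctuates around 20
    ```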

  11. Fast Simulation of Membrane Filtration by Combining Particle Retention Mechanisms and Network Models

    NASA Astrophysics Data System (ADS)

    Krupp, Armin; Griffiths, Ian; Please, Colin

    2016-11-01

    Porous membranes are used for their particle retention capabilities in a wide range of industrial filtration processes. The underlying mechanisms for particle retention are complex and often change during the filtration process, making it hard to predict the change in permeability of the membrane during the process. Recently, stochastic network models have been shown to predict the change in permeability based on retention mechanisms, but remain computationally intensive. We show that the averaged behaviour of such a stochastic network model can efficiently be computed using a simple partial differential equation. Moreover, we also show that the geometric structure of the underlying membrane and particle-size distribution can be represented in our model, making it suitable for modelling particle retention in interconnected membranes as well. We conclude by demonstrating the particular application to microfluidic filtration, where the model can be used to efficiently compute a probability density for flux measurements based on the geometry of the pores and particles.

  12. Stochastic models for inferring genetic regulation from microarray gene expression data.

    PubMed

    Tian, Tianhai

    2010-03-01

    Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information.

  13. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  14. Cox process representation and inference for stochastic reaction-diffusion processes

    NASA Astrophysics Data System (ADS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2016-05-01

    Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.

  15. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.

  16. Extension of moment projection method to the fragmentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    2017-04-15

    The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA), and advantages of MPM are drawn.

  17. Didactic discussion of stochastic resonance effects and weak signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adair, R.K.

    1996-12-01

    A simple, paradigmatic model is used to illustrate some general properties of effects subsumed under the label stochastic resonance. In particular, analyses of the transparent model show that (1) a small amount of noise added to a much larger signal can greatly increase the response to the signal, but (2) a weak signal added to much larger noise will not generate a substantial added response. The conclusions drawn from the model illustrate the general result that stochastic resonance effects do not provide an avenue for signals that are much smaller than noise to affect biology. A further analysis demonstrates the effects of small signals in the shifting of biologically important chemical equilibria under conditions where stochastic resonance effects are significant.
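
    A minimal numerical sketch of the generic threshold-detector picture of stochastic resonance (not the specific paradigmatic model analysed in the report): a subthreshold sinusoid crosses a fixed threshold only with the help of added noise, and the response at the signal frequency first grows with the noise level and then degrades once the noise dominates.

    ```python
    import numpy as np

    def response_at_signal_frequency(noise_sd, amp=0.5, threshold=1.0, f=5.0,
                                     fs=1000.0, t_end=60.0, seed=9):
        """Drive a threshold detector with a subthreshold sinusoid plus noise and
        return the Fourier amplitude of the crossing train at the signal frequency."""
        rng = np.random.default_rng(seed)
        t = np.arange(0.0, t_end, 1.0 / fs)
        x = amp * np.sin(2 * np.pi * f * t) + rng.normal(0.0, noise_sd, t.size)
        spikes = ((x[1:] >= threshold) & (x[:-1] < threshold)).astype(float)
        return np.abs(np.sum(spikes * np.exp(-2j * np.pi * f * t[1:]))) / spikes.size

    # No crossings without noise; the response peaks at an intermediate noise level.
    for sd in [0.05, 0.2, 0.4, 0.8, 2.0]:
        print(sd, round(response_at_signal_frequency(sd), 5))
    ```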

  18. Mean-Potential Law in Evolutionary Games

    NASA Astrophysics Data System (ADS)

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-01

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.

  19. Role of demographic stochasticity in a speciation model with sexual reproduction

    NASA Astrophysics Data System (ADS)

    Lafuerza, Luis F.; McKane, Alan J.

    2016-03-01

    Recent theoretical studies have shown that demographic stochasticity can greatly increase the tendency of asexually reproducing phenotypically diverse organisms to spontaneously evolve into localized clusters, suggesting a simple mechanism for sympatric speciation. Here we study the role of demographic stochasticity in a model of competing organisms subject to assortative mating. We find that in models with sexual reproduction, noise can also lead to the formation of phenotypic clusters in parameter ranges where deterministic models would lead to a homogeneous distribution. In some cases, noise can have a sizable effect, rendering the deterministic modeling insufficient to understand the phenotypic distribution.

  20. On some stochastic formulations and related statistical moments of pharmacokinetic models.

    PubMed

    Matis, J H; Wehrly, T E; Metzler, C M

    1983-02-01

    This paper presents the deterministic and stochastic model for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically, in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new meaningful response measures for pharmacokinetic analysis and they give added insight into the system kinetics. The new analysis is illustrated with an example involving the cholesterol turnover in rats.
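
    A hedged numerical sketch of the kind of computation involved, using standard phase-type/compartmental results rather than the paper's exact notation (the two-compartment model and its rate constants below are invented for illustration):

    ```python
    import numpy as np

    # Two-compartment model: exchange between a central compartment (1) and a
    # peripheral compartment (2), with elimination from the central compartment.
    k10, k12, k21 = 0.3, 0.2, 0.1                  # illustrative rate constants (1/h)
    A = np.array([[-(k10 + k12), k21],
                  [k12,          -k21]])           # compartmental matrix: dx/dt = A x

    # Track a single molecule: its generator on the transient states is A transposed
    # (rows index the current compartment, columns the destination compartment).
    Q = A.T
    Qinv = np.linalg.inv(Q)
    ones = np.ones(2)

    mrt = -Qinv @ ones                             # mean residence time by starting compartment
    second_moment = 2.0 * (Qinv @ Qinv) @ ones     # E[T^2] for phase-type absorption times
    vrt = second_moment - mrt**2                   # variance of the residence time

    print("MRT (h):", mrt.round(2))                # [10. 20.] for these rates
    print("VRT (h^2):", vrt.round(2))
    ```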

  1. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, John; Chorin, Alexandre J.; Crutchfield, William

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  2. Martingales, detrending data, and the efficient market hypothesis

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-01-01

    We discuss martingales, detrending data, and the efficient market hypothesis (EMH) for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). Beginning with x-independent drift coefficients R(t) we show that martingale stochastic processes generate uncorrelated, generally non-stationary increments. Generally, a test for a martingale is therefore a test for uncorrelated increments. A detrended process with an x-dependent drift coefficient is generally not a martingale, and so we extend our analysis to include the class of (x,t)-dependent drift coefficients of interest in finance. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. And while a Markovian market has no memory to exploit and presumably cannot be beaten systematically, it has never been shown that martingale memory cannot be exploited in 3-point or higher correlations to beat the market. We generalize our Markov scaling solutions presented earlier, and also generalize the martingale formulation of the EMH to include (x,t)-dependent drift in log returns. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama's paper on the EMH. We end with a discussion of Lévy's characterization of Brownian motion and prove that an arbitrary martingale is topologically inequivalent to a Wiener process.
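
    A minimal numerical illustration of the point that a martingale test reduces to a test for uncorrelated increments (toy series, not market data):

    ```python
    import numpy as np

    def increment_autocorr(x, max_lag=5):
        """Sample autocorrelations of the increments of a series x; for a martingale
        the increments should be serially uncorrelated."""
        dx = np.diff(x)
        dx = dx - dx.mean()
        denom = float(np.dot(dx, dx))
        return [round(float(np.dot(dx[:-k], dx[k:])) / denom, 3) for k in range(1, max_lag + 1)]

    rng = np.random.default_rng(10)
    random_walk = np.cumsum(rng.normal(size=5000))       # martingale: near-zero autocorrelations
    steps = np.zeros(5000)                               # counterexample: AR(1) increments
    for i in range(1, 5000):
        steps[i] = 0.5 * steps[i - 1] + rng.normal()
    trending = np.cumsum(steps)

    print(increment_autocorr(random_walk))
    print(increment_autocorr(trending))                  # lag-1 autocorrelation near 0.5
    ```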

  3. Stochastic Community Assembly: Does It Matter in Microbial Ecology?

    PubMed

    Zhou, Jizhong; Ning, Daliang

    2017-12-01

    Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research.

  4. A simple model of bipartite cooperation for ecological and organizational networks.

    PubMed

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.

  5. Impact of correlated magnetic noise on the detection of stochastic gravitational waves: Estimation based on a simple analytical model

    NASA Astrophysics Data System (ADS)

    Himemoto, Yoshiaki; Taruya, Atsushi

    2017-07-01

    After the first direct detection of gravitational waves (GW), detection of the stochastic background of GWs is an important next step, and the first GW event suggests that it is within the reach of the second-generation ground-based GW detectors. Such a GW signal is typically tiny and can be detected by cross-correlating the data from two spatially separated detectors if the detector noise is uncorrelated. It has been advocated, however, that the global magnetic fields in the Earth-ionosphere cavity produce environmental disturbances at low-frequency bands, known as Schumann resonances, which potentially couple with GW detectors. In this paper, we present a simple analytical model to estimate its impact on the detection of stochastic GWs. The model crucially depends on the geometry of the detector pair through the directional coupling, and we investigate the basic properties of the correlated magnetic noise based on the analytic expressions. The model reproduces the major trend of the recently measured global correlation between the GW detectors via magnetometer. The estimated values of the impact of correlated noise also match those obtained from the measurement. Finally, we discuss the implications for the detection of stochastic GWs with upcoming detectors, including KAGRA and LIGO India. The model suggests that the LIGO Hanford-Virgo and Virgo-KAGRA pairs are possibly less sensitive to the correlated noise and can achieve a better sensitivity to the stochastic GW signal in the most pessimistic case.

  6. Estimating the effects of harmonic voltage fluctuations on the temperature rise of squirrel-cage motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emanuel, A.E.

    1991-03-01

    This article presents a preliminary analysis of the effect of randomly varying harmonic voltages on the temperature rise of squirrel-cage motors. The stochastic process of random variations of harmonic voltages is defined by means of simple statistics (mean, standard deviation, type of distribution). Computational models based on a first-order approximation of the motor losses and on the Monte Carlo method yield results which show that equipment with a large thermal time-constant is capable of withstanding, for a short period of time, distortions larger than THD = 5%.

  7. Testing for unit root bilinearity in the Brazilian stock market

    NASA Astrophysics Data System (ADS)

    Tabak, Benjamin M.

    2007-11-01

    In this paper a simple test for detecting bilinearity in a stochastic unit root process is used to test for the presence of nonlinear unit roots in Brazilian equity shares. The empirical evidence for a set of 53 individual stocks, after adjusting for GARCH effects, suggests that for more than 66% of them the hypothesis of unit root bilinearity is accepted. Therefore, the dynamics of Brazilian share prices is in conformity with this type of nonlinearity. These nonlinearities in spot prices may emerge due to the sophistication of the derivatives market.

  8. Limits to Forecasting Precision for Outbreaks of Directly Transmitted Diseases

    PubMed Central

    Drake, John M

    2006-01-01

    Background Early warning systems for outbreaks of infectious diseases are an important application of the ecological theory of epidemics. A key variable predicted by early warning systems is the final outbreak size. However, for directly transmitted diseases, the stochastic contact process by which outbreaks develop entails fundamental limits to the precision with which the final size can be predicted. Methods and Findings I studied how the expected final outbreak size and the coefficient of variation in the final size of outbreaks scale with control effectiveness and the rate of infectious contacts in the simple stochastic epidemic. As examples, I parameterized this model with data on observed ranges for the basic reproductive ratio (R0) of nine directly transmitted diseases. I also present results from a new model, the simple stochastic epidemic with delayed-onset intervention, in which an initially supercritical outbreak (R0 > 1) is brought under control after a delay. Conclusion The coefficient of variation of final outbreak size in the subcritical case (R0 < 1) will be greater than one for any outbreak in which the removal rate is less than approximately 2.41 times the rate of infectious contacts, implying that for many transmissible diseases precise forecasts of the final outbreak size will be unattainable. In the delayed-onset model, the coefficient of variation (CV) was generally large (CV > 1) and increased with the delay between the start of the epidemic and intervention, and with the average outbreak size. These results suggest that early warning systems for infectious diseases should not focus exclusively on predicting outbreak size but should consider other characteristics of outbreaks such as the timing of disease emergence. PMID:16435887

  9. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons with a pipelined, bit-level processing structure. For large applications, a flexible way to interconnect many such chips is provided.
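
    As background on the stochastic-arithmetic idea this record alludes to (a generic textbook illustration, not the architecture of the proposed chip): values encoded as the probability of a 1 in independent random bit streams can be multiplied with a single AND gate.

    ```python
    import numpy as np

    def to_stream(p, n, rng):
        """Encode a value p in [0, 1] as a Bernoulli(p) bit stream of length n."""
        return rng.random(n) < p

    rng = np.random.default_rng(11)
    n = 100000
    a, b = 0.6, 0.3
    stream_a, stream_b = to_stream(a, n, rng), to_stream(b, n, rng)

    product_stream = stream_a & stream_b     # an AND gate multiplies the encoded probabilities
    print(product_stream.mean())             # close to 0.18 = 0.6 * 0.3
    ```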

  10. The open quantum Brownian motions

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2014-09-01

    Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers—with internal degrees of freedom which serve as random gyroscopes—interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation:
    $\mathcal{H}_z$: orbital (walker) Hilbert space, $\mathbb{C}^{\mathbb{Z}}$ in the discrete case, $L^2(\mathbb{R})$ in the continuum
    $\mathcal{H}_c$: internal spin (or gyroscope) Hilbert space
    $\mathcal{H}_{\mathrm{sys}} = \mathcal{H}_z \otimes \mathcal{H}_c$: system Hilbert space
    $\mathcal{H}_p$: probe (or quantum coin) Hilbert space, $\mathcal{H}_p = \mathbb{C}^2$
    $\rho^{\mathrm{tot}}_t$: density matrix for the total system (walker + internal spin + quantum coins)
    $\bar\rho_t$: reduced density matrix on $\mathcal{H}_{\mathrm{sys}}$: $\bar\rho_t = \int dx\,dy\, \bar\rho_t(x,y) \otimes |x\rangle_z\langle y|$
    $\hat\rho_t$: system density matrix in a quantum trajectory: $\hat\rho_t = \int dx\,dy\, \hat\rho_t(x,y) \otimes |x\rangle_z\langle y|$; if diagonal and localized in position, $\hat\rho_t = \rho_t \otimes |X_t\rangle_z\langle X_t|$
    $\rho_t$: internal density matrix in a simple quantum trajectory
    $X_t$: walker position in a simple quantum trajectory
    $B_t$: normalized Brownian motion
    $\xi_t$, $\xi_t^\dagger$: quantum noises

  11. Stochastic Adaptive Particle Beam Tracker Using Meer Filter Feedback.

    DTIC Science & Technology

    1986-12-01

    ... breakthrough required in controlling the beam location. In 1983, Zicker [27] conducted a feasibility study of a simple proportional gain controller ... Zicker synthesized his stochastic controller designs from a deterministic optimal LQ controller assuming full state feedback. An LQ controller is a ... Zicker ran a performance analysis on the Meer filter and found the Meer filter virtually insensitive to ...

  12. Doubly stochastic Poisson processes in artificial neural learning.

    PubMed

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
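
    As a minimal illustration of the model class (a generic doubly stochastic Poisson sketch, not the circuit-level analysis of the paper): counts are Poisson conditional on a rate that is itself random, which inflates the count variance above the Poisson value.

    ```python
    import numpy as np

    def doubly_stochastic_counts(n=100000, rate_mean=20.0, rate_sd=5.0, seed=12):
        """Counts from a doubly stochastic (Cox) Poisson model over a unit window:
        the rate is drawn from a gamma distribution, then the count is Poisson."""
        rng = np.random.default_rng(seed)
        shape = (rate_mean / rate_sd) ** 2       # gamma parameterized by its mean and sd
        scale = rate_sd**2 / rate_mean
        rates = rng.gamma(shape, scale, n)
        return rng.poisson(rates)

    counts = doubly_stochastic_counts()
    # The Fano factor exceeds 1, unlike for an ordinary Poisson process.
    print(round(counts.mean(), 2), round(counts.var() / counts.mean(), 2))
    ```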

  13. Dynamics of stochastic SEIS epidemic model with varying population size

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Wei, Fengying

    2016-12-01

    In this paper, we introduce stochasticity into a deterministic susceptible-exposed-infected model with varying population size. Infected individuals may return to the susceptible compartment after recovery. We show that the stochastic model possesses a unique global solution by constructing a suitable Lyapunov function and using the generalized Itô formula. The densities of the exposed and infected classes tend to extinction when certain conditions hold. Moreover, conditions for persistence of the global solution are derived when the parameters satisfy some simple criteria. The stochastic model admits a stationary distribution around the endemic equilibrium, which means that the disease will prevail. Numerical simulations are presented at the end of this contribution to check the validity of the main results.

  14. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

    This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. First, the threshold R0^SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value lies below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Moreover, when R0^SIR > 1, the system is proved to be convergent in time mean. The thresholds of the stochastic SIVS models with and without a saturated incidence rate are then established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.

  15. Mean-Potential Law in Evolutionary Games.

    PubMed

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-12

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.

  16. Noise-driven neuromorphic tuned amplifier.

    PubMed

    Fanelli, Duccio; Ginelli, Francesco; Livi, Roberto; Zagli, Niccoló; Zankoc, Clement

    2017-12-01

    We study a simple stochastic model of neuronal excitatory and inhibitory interactions. The model is defined on a directed lattice, and internode couplings are modulated by a nonlinear function that mimics the process of synaptic activation. We prove that such a system behaves as a fully tunable amplifier: the endogenous component of noise, stemming from finite-size effects, seeds a coherent (exponential) amplification across the chain, generating giant oscillations with tunable frequencies, a process that the brain could exploit to enhance, and eventually encode, different signals. From a wider perspective, the characterized amplification process could provide a reliable pacemaking mechanism for biological systems. The device extracts energy from the finite-size bath and operates as an out-of-equilibrium thermal machine under stationary conditions.

  17. Particle Acceleration in Mildly Relativistic Shearing Flows: The Interplay of Systematic and Stochastic Effects, and the Origin of the Extended High-energy Emission in AGN Jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ruo-Yu; Rieger, F. M.; Aharonian, F. A., E-mail: ruoyu@mpi-hd.mpg.de, E-mail: frank.rieger@mpi-hd.mpg.de, E-mail: aharon@mpi-hd.mpg.de

    The origin of the extended X-ray emission in the large-scale jets of active galactic nuclei (AGNs) poses challenges to conventional models of acceleration and emission. Although electron synchrotron radiation is considered the most feasible radiation mechanism, the formation of the continuous large-scale X-ray structure remains an open issue. As astrophysical jets are expected to exhibit some turbulence and shearing motion, we here investigate the potential of shearing flows to facilitate an extended acceleration of particles and evaluate its impact on the resultant particle distribution. Our treatment incorporates systematic shear and stochastic second-order Fermi effects. We show that for typical parameters applicable to large-scale AGN jets, stochastic second-order Fermi acceleration, which always accompanies shear particle acceleration, can play an important role in facilitating the whole process of particle energization. We study the time-dependent evolution of the resultant particle distribution in the presence of second-order Fermi acceleration, shear acceleration, and synchrotron losses using a simple Fokker–Planck approach and provide illustrations for the possible emergence of a complex (multicomponent) particle energy distribution with different spectral branches. We present examples for typical parameters applicable to large-scale AGN jets, indicating the relevance of the underlying processes for understanding the extended X-ray emission and the origin of ultrahigh-energy cosmic rays.

  18. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.

  19. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  20. A coordination theory for intelligent machines

    NASA Technical Reports Server (NTRS)

    Wang, Fei-Yue; Saridis, George N.

    1990-01-01

    A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model called coordination structure has been used to describe analytically the information structure and information flow for the coordination activities in the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual process within the dispatcher and coordinators; (3) specify the cooperation and connection among the dispatcher and coordinators; (4) perform the process analysis and evaluation; and (5) provide a control and communication mechanism for the real-time monitor or simulation of the coordination process. A simple procedure for the task scheduling in the coordination structure is presented. The task translation is achieved by a stochastic learning algorithm. The learning process is measured with entropy and its convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model and the simulation of the task processes performed on the model verifies the soundness of the theory.

  1. Asymmetry in power-law magnitude correlations.

    PubMed

    Podobnik, Boris; Horvatić, Davor; Tenenbaum, Joel N; Stanley, H Eugene

    2009-07-01

    Time series of increments can be created in a number of different ways from a variety of physical phenomena. For example, in the phenomenon of volatility clustering, well known in finance, the magnitudes of adjacent increments are correlated. Moreover, in some time series, magnitude correlations display asymmetry with respect to an increment's sign: the magnitude |x_i| depends on the sign of the previous increment x_{i-1}. Here we define a model-independent test to measure the statistical significance of any observed asymmetry. We propose a simple stochastic process characterized by an asymmetry parameter λ and a method for estimating λ. We illustrate both the test and the process by analyzing physiological data.

  2. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  3. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  4. Biased growth processes and the ``rich-get-richer'' principle

    NASA Astrophysics Data System (ADS)

    de Moura, Alessandro P.

    2004-05-01

    We study a simple stochastic system with a “rich-get-richer” behavior, in which there are 2 states, and N particles that are successively assigned to one of the states, with a probability p_i that depends on the states' occupations n_i as p_i = n_i^γ / (n_1^γ + n_2^γ). We show that there is a phase transition as γ crosses the critical value γ_c = 1. For γ < 1, in the thermodynamic limit the occupations are approximately the same, n_1 ≈ n_2. For γ > 1, however, a spontaneous symmetry breaking occurs, and the system goes to a highly clustered configuration, in which one of the states has almost all the particles. These results also hold for any finite number of states (not only two). We show that this “rich-get-richer” principle governs the growth dynamics in a simple model of gravitational aggregation, and we argue that the same is true in all growth processes mediated by long-range forces like gravity.
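
    The assignment rule is explicit enough to simulate directly. In the minimal sketch below (variable names are mine, not the paper's), N particles are added one at a time to two states with probability p_i = n_i^γ / (n_1^γ + n_2^γ), and the occupation fraction of the fuller state is reported for γ below, at, and above the critical value γ_c = 1.

```python
import random

def simulate(gamma, n_particles, n1=1, n2=1, rng=random):
    """Sequentially assign particles to two states with p_i proportional to n_i**gamma."""
    for _ in range(n_particles):
        w1, w2 = n1 ** gamma, n2 ** gamma
        if rng.random() < w1 / (w1 + w2):
            n1 += 1
        else:
            n2 += 1
    return max(n1, n2) / (n1 + n2)   # occupation fraction of the fuller state

if __name__ == "__main__":
    random.seed(1)
    for gamma in (0.5, 1.0, 1.5):
        fractions = [simulate(gamma, 10_000) for _ in range(20)]
        print(f"gamma={gamma}: mean fraction in fuller state "
              f"= {sum(fractions) / len(fractions):.3f}")
```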

  5. Ultrasensitive dual phosphorylation dephosphorylation cycle kinetics exhibits canonical competition behavior

    NASA Astrophysics Data System (ADS)

    Huang, Qingdao; Qian, Hong

    2009-09-01

    We establish a mathematical model for a cellular biochemical signaling module in terms of a planar differential equation system. The signaling process is carried out by two phosphorylation-dephosphorylation reaction steps that share a common kinase and phosphatase with saturated enzyme kinetics. The pair of equations is particularly simple in the present mathematical formulation, but they are singular. A complete mathematical analysis is developed based on an elementary perturbation theory. The dynamics exhibits the canonical competition behavior in addition to bistability. Although competition is widely understood in the ecological context, we are not aware of a demonstration of the full range of competition behavior in a simple biochemical signaling network. The competition dynamics has broad implications for cellular processes such as cell differentiation and cancer immunoediting. The concepts of homogeneous and heterogeneous multisite phosphorylation are introduced and their corresponding dynamics are compared: there is no bistability in a heterogeneous dual phosphorylation system. A stochastic interpretation is also provided that further gives an intuitive understanding of the bistable behavior inside cells.

  6. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.

  7. Attempts at a numerical realisation of stochastic differential equations containing Preisach operator

    NASA Astrophysics Data System (ADS)

    McCarthy, S.; Rachinskii, D.

    2011-01-01

    We describe two Euler type numerical schemes obtained by discretisation of a stochastic differential equation which contains the Preisach memory operator. Equations of this type are of interest in areas such as macroeconomics and terrestrial hydrology where deterministic models containing the Preisach operator have been developed but do not fully encapsulate stochastic aspects of the area. A simple price dynamics model is presented as one motivating example for our studies. Some numerical evidence is given that the two numerical schemes converge to the same limit as the time step decreases. We show that the Preisach term introduces a damping effect which increases on the parts of the trajectory demonstrating a stronger upwards or downwards trend. The results are preliminary to a broader programme of research of stochastic differential equations with the Preisach hysteresis operator.

  8. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.

  9. The Sense of Confidence during Probabilistic Learning: A Normative Account.

    PubMed

    Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas

    2015-06-01

    Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core property of the learning process.

  10. The Sense of Confidence during Probabilistic Learning: A Normative Account

    PubMed Central

    Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas

    2015-01-01

    Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core property of the learning process. PMID:26076466

  11. Quantum simulation of a quantum stochastic walk

    NASA Astrophysics Data System (ADS)

    Govia, Luke C. G.; Taketani, Bruno G.; Schuhmacher, Peter K.; Wilhelm, Frank K.

    2017-03-01

    The study of quantum walks has been shown to have a wide range of applications in areas such as artificial intelligence, the study of biological processes, and quantum transport. The quantum stochastic walk (QSW), which allows for incoherent movement of the walker, and therefore, directionality, is a generalization on the fully coherent quantum walk. While a QSW can always be described in Lindblad formalism, this does not mean that it can be microscopically derived in the standard weak-coupling limit under the Born-Markov approximation. This restricts the class of QSWs that can be experimentally realized in a simple manner. To circumvent this restriction, we introduce a technique to simulate open system evolution on a fully coherent quantum computer, using a quantum trajectories style approach. We apply this technique to a broad class of QSWs, and show that they can be simulated with minimal experimental resources. Our work opens the path towards the experimental realization of QSWs on large graphs with existing quantum technologies.

  12. Complex dynamics of selection and cellular memory in adaptation to a changing environment

    NASA Astrophysics Data System (ADS)

    Kussell, Edo; Lin, Wei-Hsiang

    We study a synthetic evolutionary system in bacteria in which an antibiotic resistance gene is controlled by a stochastic on/off switching promoter. At the population level, this system displays all the basic ingredients for evolutionary selection, including diversity, fitness differences, and heritability. At the single cell level, physiological processes can modulate the ability of selection to act. We expose the stochastic switching strains to pulses of antibiotics of different durations in periodically changing environments using microfluidics. Small populations are tracked over a large number of periods at single cell resolution, allowing the visualization and quantification of selective sweeps and counter-sweeps at the population level, as well as detailed single cell analysis. A simple model is introduced to predict long-term population growth rates from single cell measurements, and reveals unexpected aspects of population dynamics, including cellular memory that acts on a fast timescale to modulate growth rates. This work is supported by NIH Grant No. R01-GM097356.

  13. Effect of wet tropospheric path delays on estimation of geodetic baselines in the Gulf of California using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.

    1988-01-01

    Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used to provide an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California using GPS in which high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration and stochastic estimation for residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.
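
    The "first-order exponentially correlated stochastic process" used for the residual zenith wet delay is a discrete Gauss-Markov model. The sketch below is a generic illustration of such a process; the correlation time and steady-state sigma are arbitrary demonstration values, not values from the study.

```python
import numpy as np

def gauss_markov(n_epochs, dt, tau, sigma, rng):
    """First-order exponentially correlated (Gauss-Markov) random process.

    x_{k+1} = exp(-dt/tau) * x_k + w_k, with w_k scaled so the process has
    steady-state standard deviation sigma and autocorrelation exp(-|t|/tau).
    """
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)        # driving-noise standard deviation
    x = np.empty(n_epochs)
    x[0] = rng.normal(0.0, sigma)
    for k in range(1, n_epochs):
        x[k] = phi * x[k - 1] + rng.normal(0.0, q)
    return x

rng = np.random.default_rng(42)
# Example: 6 h of 30 s epochs, 1 h correlation time, 2 cm steady-state sigma.
delay_cm = gauss_markov(n_epochs=720, dt=30.0, tau=3600.0, sigma=2.0, rng=rng)
print("sample standard deviation (cm):", delay_cm.std())
```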

  14. libSRES: a C library for stochastic ranking evolution strategy for parameter estimation.

    PubMed

    Ji, Xinglai; Xu, Ying

    2006-01-01

    Estimation of kinetic parameters in a biochemical pathway or network represents a common problem in systems studies of biological processes. We have implemented a C library, named libSRES, to facilitate a fast implementation of computer software for study of non-linear biochemical pathways. This library implements a (mu, lambda)-ES evolutionary optimization algorithm that uses stochastic ranking as the constraint handling technique. Considering the amount of computing time it might require to solve a parameter-estimation problem, an MPI version of libSRES is provided for parallel implementation, as well as a simple user interface. libSRES is freely available and could be used directly in any C program as a library function. We have extensively tested the performance of libSRES on various pathway parameter-estimation problems and found its performance to be satisfactory. The source code (in C) is free for academic users at http://csbl.bmb.uga.edu/~jix/science/libSRES/

  15. The Impact of Competing Time Delays in Stochastic Coordination Problems

    NASA Astrophysics Data System (ADS)

    Korniss, G.; Hunt, D.; Szymanski, B. K.

    2011-03-01

    Coordinating, distributing, and balancing resources in coupled systems is a complex task as these operations are very sensitive to time delays. Delays are present in most real communication and information systems, including info-social and neuro-biological networks, and can be attributed to both non-zero transmission times between different units of the system and to non-zero times it takes to process the information and execute the desired action at the individual units. Here, we investigate the importance and impact of these two types of delays in a simple coordination (synchronization) problem in a noisy environment. We establish the scaling theory for the phase boundary of synchronization and for the steady-state fluctuations in the synchronizable regime. Further, we provide the asymptotic behavior near the boundary of the synchronizable regime. Our results also imply the potential for optimization and trade-offs in stochastic synchronization and coordination problems with time delays. Supported in part by DTRA, ARL, and ONR.

  16. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
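
    As a concrete example of the resampling idea, the sketch below computes a percentile bootstrap confidence interval for the mean of a small synthetic sample; the function and data are illustrative and not taken from the cited review.

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    # Resample with replacement and evaluate the statistic on each resample.
    boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return stat(data), (lo, hi)

sample = np.array([4.1, 5.6, 3.8, 7.2, 5.0, 4.4, 6.1, 5.9, 4.7, 5.3])
estimate, (lo, hi) = bootstrap_ci(sample)
print(f"mean = {estimate:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```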

  17. The structure of tropical forests and sphere packings

    PubMed Central

    Jahn, Markus Wilhelm; Dobner, Hans-Jürgen; Wiegand, Thorsten; Huth, Andreas

    2015-01-01

    The search for simple principles underlying the complex architecture of ecological communities such as forests still challenges ecological theorists. We use tree diameter distributions—fundamental for deriving other forest attributes—to describe the structure of tropical forests. Here we argue that tree diameter distributions of natural tropical forests can be explained by stochastic packing of tree crowns representing a forest crown packing system: a method usually used in physics or chemistry. We demonstrate that tree diameter distributions emerge accurately from a surprisingly simple set of principles that include site-specific tree allometries, random placement of trees, competition for space, and mortality. The simple static model also successfully predicted the canopy structure, revealing that most trees in our two studied forests grow up to 30–50 m in height and that the highest packing density of about 60% is reached between the 25- and 40-m height layer. Our approach is an important step toward identifying a minimal set of processes responsible for generating the spatial structure of tropical forests. PMID:26598678

  18. Statistics of Infima and Stopping Times of Entropy Production and Applications to Active Molecular Processes

    NASA Astrophysics Data System (ADS)

    Neri, Izaak; Roldán, Édgar; Jülicher, Frank

    2017-01-01

    We study the statistics of infima, stopping times, and passage probabilities of entropy production in nonequilibrium steady states, and we show that they are universal. We consider two examples of stopping times: first-passage times of entropy production and waiting times of stochastic processes, which are the times when a system reaches a given state for the first time. Our main results are as follows: (i) The distribution of the global infimum of entropy production is exponential with mean equal to minus Boltzmann's constant; (ii) we find exact expressions for the passage probabilities of entropy production; (iii) we derive a fluctuation theorem for stopping-time distributions of entropy production. These results have interesting implications for stochastic processes that can be discussed in simple colloidal systems and in active molecular processes. In particular, we show that the timing and statistics of discrete chemical transitions of molecular processes, such as the steps of molecular motors, are governed by the statistics of entropy production. We also show that the extreme-value statistics of active molecular processes are governed by entropy production; for example, we derive a relation between the maximal excursion of a molecular motor against the direction of an external force and the infimum of the corresponding entropy-production fluctuations. Using this relation, we make predictions for the distribution of the maximum backtrack depth of RNA polymerases, which follow from our universal results for entropy-production infima.

  19. Speech parts as Poisson processes.

    PubMed

    Badalamenti, A F

    2001-09-01

    This paper presents evidence that six of the seven parts of speech occur in written text as Poisson processes, simple or recurring. The six major parts are nouns, verbs, adjectives, adverbs, prepositions, and conjunctions, with the interjection occurring too infrequently to support a model. The data consist of more than the first 5000 words of works by four major authors coded to label the parts of speech, as well as periods (sentence terminators). Sentence length is measured via the period and found to be normally distributed with no stochastic model identified for its occurrence. The models for all six speech parts but the noun significantly distinguish some pairs of authors and likewise for the joint use of all words types. Any one author is significantly distinguished from any other by at least one word type and sentence length very significantly distinguishes each from all others. The variety of word type use, measured by Shannon entropy, builds to about 90% of its maximum possible value. The rate constants for nouns are close to the fractions of maximum entropy achieved. This finding together with the stochastic models and the relations among them suggest that the noun may be a primitive organizer of written text.

  20. Simple stochastic simulation.

    PubMed

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
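
    A minimal Gillespie-style direct-method simulation of a production/degradation reaction illustrates the general principles described above; the rate constants and species are placeholders, and the same two random draws per step (waiting time and reaction choice) are what a spreadsheet implementation would also perform.

```python
import math
import random

def gillespie_birth_death(k_prod, k_deg, n0, t_end, rng=random):
    """Direct-method simulation of  0 -> X (rate k_prod)  and  X -> 0 (rate k_deg*n)."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a1, a2 = k_prod, k_deg * n          # reaction propensities
        a0 = a1 + a2
        t += -math.log(rng.random()) / a0   # exponentially distributed waiting time
        if rng.random() * a0 < a1:          # choose which reaction fires
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return times, counts

random.seed(7)
times, counts = gillespie_birth_death(k_prod=10.0, k_deg=0.1, n0=0, t_end=200.0)
print("final copy number:", counts[-1], "(deterministic steady state = 100)")
```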

  1. Stochastic differential equation model for linear growth birth and death processes with immigration and emigration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granita, E-mail: granitafc@gmail.com; Bahar, A.

    This paper discusses the linear birth and death process with immigration and emigration (BIDE) and its stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution, mean, and variance function of the BIDE process were found.
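
    As a hedged sketch of the kind of SDE involved: under the textbook diffusion approximation of a birth-death process with constant immigration and per-capita birth, death, and emigration (rates α, λ, μ, ε in the notation assumed here, which need not match the paper's), the drift is (λ - μ - ε)X + α and the diffusion coefficient is (λ + μ + ε)X + α. An Euler-Maruyama integration of this approximation is shown below.

```python
import numpy as np

def euler_maruyama_bide(x0, lam, mu, eps, alpha, dt, n_steps, rng):
    """Euler-Maruyama integration of a diffusion approximation of a
    birth-death-immigration-emigration process (assumptions in the lead-in)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = (lam - mu - eps) * x[k] + alpha
        var_rate = (lam + mu + eps) * x[k] + alpha     # sum of all jump rates
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = max(x[k] + drift * dt + np.sqrt(var_rate) * dw, 0.0)
    return x

rng = np.random.default_rng(3)
path = euler_maruyama_bide(x0=50.0, lam=0.30, mu=0.25, eps=0.10,
                           alpha=5.0, dt=0.01, n_steps=10_000, rng=rng)
print("mean of path tail:", path[5000:].mean())
```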

  2. Interrupted monitoring of a stochastic process

    NASA Technical Reports Server (NTRS)

    Palmer, E.

    1977-01-01

    Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies. The optimum strategy is also compared to the strategies used by subjects in a pilot experiment.

  3. An estimator for the relative entropy rate of path measures for stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

    We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.

  4. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.

  5. Reflecting metallic metasurfaces designed with stochastic optimization as waveplates for manipulating light polarization

    NASA Astrophysics Data System (ADS)

    Haberko, Jakub; Wasylczyk, Piotr

    2018-03-01

    We demonstrate that a stochastic optimization algorithm with a properly chosen, weighted fitness function, following a global variation of parameters upon each step can be used to effectively design reflective polarizing optical elements. Two sub-wavelength metallic metasurfaces, corresponding to broadband half- and quarter-waveplates are demonstrated with simple structure topology, a uniform metallic coating and with the design suited for the currently available microfabrication techniques, such as ion milling or 3D printing.

  6. Behavior of a stochastic SIR epidemic model with saturated incidence and vaccination rules

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Li, Yang; Zhang, Qingling; Li, Aihua

    2018-07-01

    In this paper, the threshold behavior of a susceptible-infected-recovered (SIR) epidemic model with stochastic perturbation is investigated. First, it is shown that the system has a unique global positive solution for any positive initial value. Random effects may lead to disease extinction under a simple condition. Subsequently, a sufficient condition for persistence of the disease in the mean is established. Finally, some numerical simulations are carried out to confirm the analytical results.

  7. A Simple "Boxed Molecular Kinetics" Approach To Accelerate Rare Events in the Stochastic Kinetic Master Equation.

    PubMed

    Shannon, Robin; Glowacki, David R

    2018-02-15

    The chemical master equation is a powerful theoretical tool for analyzing the kinetics of complex multiwell potential energy surfaces in a wide range of different domains of chemical kinetics spanning combustion, atmospheric chemistry, gas-surface chemistry, solution phase chemistry, and biochemistry. There are two well-established methodologies for solving the chemical master equation: a stochastic "kinetic Monte Carlo" approach and a matrix-based approach. In principle, the results yielded by both approaches are identical; the decision of which approach is better suited to a particular study depends on the details of the specific system under investigation. In this Article, we present a rigorous method for accelerating stochastic approaches by several orders of magnitude, along with a method for unbiasing the accelerated results to recover the "true" value. The approach we take in this paper is inspired by the so-called "boxed molecular dynamics" (BXD) method, which has previously only been applied to accelerate rare events in molecular dynamics simulations. Here we extend BXD to design a simple algorithmic strategy for accelerating rare events in stochastic kinetic simulations. Tests on a number of systems show that the results obtained using the BXD rare event strategy are in good agreement with unbiased results. To carry out these tests, we have implemented a kinetic Monte Carlo approach in MESMER, which is a cross-platform, open-source, and freely available master equation solver.

  8. Stochastic Nature in Cellular Processes

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Liu, Sheng-Jun; Wang, Qi; Yan, Shi-Wei; Geng, Yi-Zhao; Sakata, Fumihiko; Gao, Xing-Fa

    2011-11-01

    The importance of stochasticity in cellular processes is increasingly recognized in both theoretical and experimental studies. General features of stochasticity in gene regulation and expression are briefly reviewed in this article, which include the main experimental phenomena, classification, quantization and regulation of noises. The correlation and transmission of noise in cascade networks are analyzed further and the stochastic simulation methods that can capture effects of intrinsic and extrinsic noise are described.

  9. Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Ray, Laura R.

    1991-01-01

    A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
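
    A minimal sketch of the procedure under stated assumptions: the nominal closed-loop matrix and the Gaussian perturbation model below are invented for illustration; the probability of instability is estimated as the fraction of Monte Carlo samples whose eigenvalues have non-negative real part, with a normal-approximation confidence interval on that fraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal closed-loop system matrix (illustrative 2x2 example).
A_nom = np.array([[0.0, 1.0],
                  [-2.0, -0.6]])

def is_unstable(A):
    """A linear time-invariant system is unstable if any eigenvalue has Re >= 0."""
    return np.max(np.linalg.eigvals(A).real) >= 0.0

def probability_of_instability(n_samples=20_000, sigma=0.3):
    # Perturb each entry of A with independent Gaussian parameter uncertainty.
    hits = sum(is_unstable(A_nom + sigma * rng.standard_normal(A_nom.shape))
               for _ in range(n_samples))
    p = hits / n_samples
    half_width = 1.96 * np.sqrt(p * (1.0 - p) / n_samples)   # 95% normal approx.
    return p, half_width

p, hw = probability_of_instability()
print(f"P(instability) is approximately {p:.3f} +/- {hw:.3f}")
```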

  10. Communication: Limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2011-11-01

    It is commonly believed that, whenever timescale separation holds, the predictions of reduced chemical master equations obtained using the stochastic quasi-steady-state approximation are in very good agreement with the predictions of the full master equations. We use the linear noise approximation to obtain a simple formula for the relative error between the predictions of the two master equations for the Michaelis-Menten reaction with substrate input. The reduced approach is predicted to overestimate the variance of the substrate concentration fluctuations by as much as 30%. The theoretical results are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis, gluconeogenesis, and fermentation.

  11. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.

  12. Dynamical Stochastic Processes of Returns in Financial Markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Kim, Soo Yong; Lim, Gyuchang; Zhou, Junyuan; Yoon, Seung-Min

    2006-03-01

    We show how the evolution of the probability distribution functions of the returns from the tick data of the Korean treasury bond futures (KTB) and the S&P 500 stock index can be described by means of the Fokker-Planck equation. We derive the Fokker-Planck equation from the Kramers-Moyal coefficients estimated directly from the empirical data. By analyzing the statistics of the returns, we present the quantitative deterministic and random influences on both financial time series, for which we can give a simple physical interpretation. Finally, we remark that the diffusion coefficient should be taken into account when constructing a portfolio.
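
    The estimation step can be sketched as follows, with synthetic Ornstein-Uhlenbeck data standing in for the returns and arbitrary binning choices: the first and second conditional moments of the increments, divided by the time step, give the drift D1(x) and diffusion D2(x) that enter the Fokker-Planck equation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Ornstein-Uhlenbeck series: dx = -theta*x dt + sigma dW.
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def kramers_moyal(x, dt, n_bins=30):
    """Estimate D1(x) = <dx|x>/dt and D2(x) = <dx^2|x>/(2 dt) by binning the state."""
    dxs = np.diff(x)
    states = x[:-1]
    edges = np.linspace(states.min(), states.max(), n_bins + 1)
    centers, d1, d2 = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (states >= lo) & (states < hi)
        if mask.sum() < 100:                 # skip poorly populated bins
            continue
        centers.append(0.5 * (lo + hi))
        d1.append(dxs[mask].mean() / dt)
        d2.append((dxs[mask] ** 2).mean() / (2.0 * dt))
    return np.array(centers), np.array(d1), np.array(d2)

centers, d1, d2 = kramers_moyal(x, dt)
# For OU data, D1(x) should be close to -theta*x and D2(x) close to sigma^2/2.
print("slope of D1 fit:", np.polyfit(centers, d1, 1)[0])
print("mean D2:", d2.mean(), "expected:", sigma**2 / 2)
```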

  13. Wikipedia editing dynamics.

    PubMed

    Gandica, Y; Carvalho, J; Sampaio Dos Aidos, F

    2015-01-01

    A model for the probabilistic function followed in editing Wikipedia is presented and compared with simulations and real data. It is argued that the probability of editing is proportional to the editor's number of previous edits (preferential attachment), to the editor's fitness, and to an aging factor. Using these simple ingredients, it is possible to reproduce the results obtained for Wikipedia editing dynamics for a collection of single pages as well as the averaged results. Using a stochastic process framework, a recursive equation was obtained for the average of the number of edits per editor that seems to describe the editing behavior in Wikipedia.
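
    The three ingredients translate directly into a simulation. The sketch below is a schematic reading of the model rather than the authors' code: each new edit is made by an existing editor with probability proportional to (previous edits) x (fitness) x (aging factor), where the power-law aging form and all parameter values are assumptions made for illustration.

```python
import random

def simulate_edits(n_editors=200, n_edits=5_000, aging_exp=0.5, rng=random):
    """Schematic preferential-attachment model with fitness and aging."""
    fitness = [rng.random() for _ in range(n_editors)]
    edits = [1] * n_editors            # start every editor with one edit
    last_edit = [0] * n_editors        # time of each editor's last edit
    for t in range(1, n_edits + 1):
        weights = [edits[i] * fitness[i] / (1 + (t - last_edit[i])) ** aging_exp
                   for i in range(n_editors)]
        total = sum(weights)
        r, acc, chosen = rng.random() * total, 0.0, n_editors - 1
        for i, w in enumerate(weights):
            acc += w
            if r < acc:
                chosen = i
                break
        edits[chosen] += 1
        last_edit[chosen] = t
    return edits

random.seed(5)
counts = sorted(simulate_edits(), reverse=True)
print("top 5 editors' edit counts:", counts[:5])
print("median editor's edit count:", counts[len(counts) // 2])
```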

  14. Wikipedia editing dynamics

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Carvalho, J.; Sampaio dos Aidos, F.

    2015-01-01

    A model for the probabilistic function followed in editing Wikipedia is presented and compared with simulations and real data. It is argued that the probability of editing is proportional to the editor's number of previous edits (preferential attachment), to the editor's fitness, and to an aging factor. Using these simple ingredients, it is possible to reproduce the results obtained for Wikipedia editing dynamics for a collection of single pages as well as the averaged results. Using a stochastic process framework, a recursive equation was obtained for the average of the number of edits per editor that seems to describe the editing behavior in Wikipedia.

  15. Development of a Simple Image Processing Application that Makes Abdominopelvic Tumor Visible on Positron Emission Tomography/Computed Tomography Image.

    PubMed

    Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    In this study, we have developed a simple image processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) and helps the user to visualize abdominopelvic tumors on exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis for the program. It was decided that the program should load the screen-captured PET/CT images and then produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window having a slider control. The slider control enables the user to view the images and select the one which seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of abdominopelvic tumor on a prediuretic PET/CT image.
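
    The core of suprathreshold stochastic resonance is easy to demonstrate outside MATLAB: add independent noise to many copies of a low-contrast image, threshold each copy, and average the binary outputs. The sketch below uses a synthetic image and arbitrary noise levels; it illustrates the SSR principle and is not the application's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssr_enhance(image, threshold, noise_sigma, n_copies=64):
    """Average of thresholded noisy copies of the image (SSR array output)."""
    out = np.zeros_like(image, dtype=float)
    for _ in range(n_copies):
        noisy = image + rng.normal(0.0, noise_sigma, size=image.shape)
        out += (noisy > threshold).astype(float)
    return out / n_copies

# Synthetic low-contrast image: a faint disk on a dim background.
y, x = np.mgrid[0:128, 0:128]
image = 0.30 + 0.10 * (((x - 64) ** 2 + (y - 64) ** 2) < 15 ** 2)

hard = (image > 0.5).astype(float)              # plain thresholding loses the disk
ssr = ssr_enhance(image, threshold=0.5, noise_sigma=0.15)

print("disk visible after plain threshold?", hard.max() > hard.min())
print("disk contrast after SSR:", ssr[64, 64] - ssr[5, 5])
```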

  16. Three Dimensional Time Dependent Stochastic Method for Cosmic-ray Modulation

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J. M.

    2009-12-01

    A proper understanding of the different behavior of galactic cosmic-ray intensities in different solar cycle phases requires solving the modulation equation with time dependence. We present a detailed description of our newly developed stochastic approach for cosmic-ray modulation, which we believe is the first attempt to solve the time-dependent Parker equation in 3D; it evolves from our 3D steady-state stochastic approach, which has been benchmarked extensively against the finite-difference method. Our 3D stochastic method differs from other stochastic approaches in the literature (Ball et al. 2005, Miyake et al. 2005, and Florinski 2008) in several ways. For example, we employ spherical coordinates, which makes the code much more efficient by reducing coordinate transformations. Moreover, our stochastic differential equations are different from others because our map from Parker's original equation to the Fokker-Planck equation extends the method used by Jokipii and Levy (1977), although all 3D stochastic methods are essentially based on the Itô formula. The advantage of the stochastic approach is that it also gives probability information on the travel times and path lengths of cosmic rays, in addition to the intensities. We show that excellent agreement exists between solutions obtained by our steady-state stochastic method and by the traditional finite-difference method. We also show time-dependent solutions for an idealized heliosphere which has a Parker magnetic field, a planar current sheet, and a simple initial condition.

  17. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.

  18. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842

  19. Stochasticity in materials structure, properties, and processing—A review

    NASA Astrophysics Data System (ADS)

    Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai

    2018-03-01

    We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.

  20. Predicting Rift Valley Fever Inter-epidemic Activities and Outbreak Patterns: Insights from a Stochastic Host-Vector Model

    PubMed Central

    Pedro, Sansao A.; Abelman, Shirley; Tonnang, Henri E. Z.

    2016-01-01

    Rift Valley fever (RVF) outbreaks are recurrent, occurring at irregular intervals of up to 15 years, at least in East Africa. Between outbreaks, low-level inter-epidemic disease activity persists and is maintained by female Aedes mcintoshi mosquitoes, which transmit the virus to their eggs, leading to disease persistence during unfavourable seasons. Here we formulate and analyse a full stochastic host-vector model with two routes of transmission: vertical and horizontal. By applying branching process theory we establish novel relationships between the basic reproduction number, R0, vertical transmission and the invasion and extinction probabilities. Optimum climatic conditions and the presence of mosquitoes have not fully explained the irregular oscillatory behaviour of RVF outbreaks. Using our model without seasonality and applying van Kampen system-size expansion techniques, we provide an analytical expression for the spectrum of stochastic fluctuations, revealing how the outbreaks' multi-year periodicity varies with vertical transmission. Our theory predicts complex fluctuations with a dominant period of 1 to 10 years which essentially depends on the efficiency of vertical transmission. Our predictions are then compared to temporal patterns of disease outbreaks in Tanzania, Kenya and South Africa. Our analyses show that the interaction between nonlinearity, stochasticity and vertical transmission provides a simple but plausible explanation for the irregular oscillatory nature of RVF outbreaks. Therefore, we argue that while rainfall might be the major determinant for the onset and switch-off of an outbreak, the occurrence of a particular outbreak is also the result of a build-up phenomenon that is correlated with vertical transmission efficiency. PMID:28002417
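
    The link stated above between R0 and the invasion and extinction probabilities can be illustrated with a single-type branching-process toy. The paper's model is a multi-type host-vector system, so the sketch below is only schematic, and the geometric offspring distribution is an assumption made for convenience.

        def extinction_probability(pgf, tol=1e-12, max_iter=10_000):
            """Smallest fixed point q = pgf(q) in [0, 1], found by iterating from 0."""
            q = 0.0
            for _ in range(max_iter):
                q_new = pgf(q)
                if abs(q_new - q) < tol:
                    break
                q = q_new
            return q

        # Geometric offspring distribution with mean R0: pgf(s) = 1 / (1 + R0*(1 - s)).
        for R0 in (0.8, 1.5, 3.0):
            q = extinction_probability(lambda s, R0=R0: 1.0 / (1.0 + R0 * (1.0 - s)))
            print(f"R0 = {R0}: extinction prob = {q:.3f}, invasion prob = {1 - q:.3f}")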

  1. Relevance of phenotypic noise to adaptation and evolution.

    PubMed

    Kaneko, K; Furusawa, C

    2008-09-01

    Biological processes are inherently noisy, as highlighted in recent measurements of stochasticity in gene expression. Here, the authors show that such phenotypic noise is essential to the adaptation of organisms to a variety of environments and also to the evolution of robustness against mutations. First, the authors show that for any growing cell showing stochastic gene expression, the adaptive cellular state is inevitably selected by noise, without the use of a specific signal transduction network. In general, changes in any protein concentration in a cell are products of its synthesis minus dilution and degradation, both of which are proportional to the rate of cell growth. In an adaptive state, both the synthesis and dilution terms of proteins are large, and so the adaptive state is less affected by stochasticity in gene expression, whereas for a non-adaptive state, both terms are smaller, and so cells are easily knocked out of their original state by noise. This leads to a novel, generic mechanism for the selection of adaptive states. The authors have confirmed this selection by model simulations. Secondly, the authors consider the evolution of gene networks to acquire robustness of the phenotype against noise and mutation. Through simulations using a simple stochastic gene expression network that undergoes mutation and selection, the authors show that a threshold level of noise in gene expression is required for the network to acquire both types of robustness. The results reveal how the noise that cells encounter during growth and development shapes any network's robustness, not only to noise but also to mutations. The authors also establish a relationship between developmental and mutational robustness.

  2. Predicting Rift Valley Fever Inter-epidemic Activities and Outbreak Patterns: Insights from a Stochastic Host-Vector Model.

    PubMed

    Pedro, Sansao A; Abelman, Shirley; Tonnang, Henri E Z

    2016-12-01

    Rift Valley fever (RVF) outbreaks are recurrent, occurring at irregular intervals of up to 15 years, at least in East Africa. Between outbreaks, low-level inter-epidemic disease activity persists and is maintained by female Aedes mcintoshi mosquitoes, which transmit the virus to their eggs, leading to disease persistence during unfavourable seasons. Here we formulate and analyse a full stochastic host-vector model with two routes of transmission: vertical and horizontal. By applying branching process theory we establish novel relationships between the basic reproduction number, R0, vertical transmission and the invasion and extinction probabilities. Optimum climatic conditions and the presence of mosquitoes have not fully explained the irregular oscillatory behaviour of RVF outbreaks. Using our model without seasonality and applying van Kampen system-size expansion techniques, we provide an analytical expression for the spectrum of stochastic fluctuations, revealing how the outbreaks' multi-year periodicity varies with vertical transmission. Our theory predicts complex fluctuations with a dominant period of 1 to 10 years which essentially depends on the efficiency of vertical transmission. Our predictions are then compared to temporal patterns of disease outbreaks in Tanzania, Kenya and South Africa. Our analyses show that the interaction between nonlinearity, stochasticity and vertical transmission provides a simple but plausible explanation for the irregular oscillatory nature of RVF outbreaks. Therefore, we argue that while rainfall might be the major determinant for the onset and switch-off of an outbreak, the occurrence of a particular outbreak is also the result of a build-up phenomenon that is correlated with vertical transmission efficiency.

  3. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'

  4. Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong

    The stochastic counterpart to the deterministic description of continuous fermentation with ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is suitable for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulations are carried out using the Euler-Maruyama method.
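
    A minimal Euler-Maruyama sketch of the kind of simulation reported above is given below; a one-dimensional logistic drift with multiplicative noise stands in for the actual three-dimensional fermentation model, whose coefficients are not reproduced in this record, so every parameter is illustrative.

        import numpy as np

        def euler_maruyama(drift, diffusion, x0, t_end, dt, seed=0):
            """Euler-Maruyama integration of dX = drift(X) dt + diffusion(X) dW."""
            rng = np.random.default_rng(seed)
            n = int(t_end / dt)
            x = np.empty(n + 1)
            x[0] = x0
            for i in range(n):
                dw = rng.normal(0.0, np.sqrt(dt))
                x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
            return x

        # Illustrative stand-in: logistic biomass growth with multiplicative noise.
        mu, K, sigma = 0.3, 5.0, 0.05
        path = euler_maruyama(lambda x: mu * x * (1.0 - x / K),
                              lambda x: sigma * x, x0=0.1, t_end=50.0, dt=0.01)
        print("final biomass:", path[-1])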

  5. Noise and Dissipation on Coadjoint Orbits

    NASA Astrophysics Data System (ADS)

    Arnaudon, Alexis; De Castro, Alex L.; Holm, Darryl D.

    2018-02-01

    We derive and study stochastic dissipative dynamics on coadjoint orbits by incorporating noise and dissipation into mechanical systems arising from the theory of reduction by symmetry, including a semidirect product extension. Random attractors are found for this general class of systems when the Lie algebra is semi-simple, provided the top Lyapunov exponent is positive. We study in detail two canonical examples, the free rigid body and the heavy top, whose stochastic integrable reductions are found and numerical simulations of their random attractors are shown.

  6. Solution Methods for Stochastic Dynamic Linear Programs.

    DTIC Science & Technology

    1980-12-01

    [No abstract available in this record; recoverable reference fragments: Glassey, C.R., "Dynamic linear programs for production scheduling", OR 19, pp. 45-56, 1971; Huang, C.C., Vertinsky, I. and Ziemba, W.T., "Sharp bounds on the value of perfect information", OR 25, pp. 128-139, 1977; Ziemba, W.T., "Computational algorithms for convex stochastic programs with simple recourse", OR 8, pp. 414-431, 1970.]

  7. Noise shaping in populations of coupled model neurons.

    PubMed

    Mar, D J; Chow, C C; Gerstner, W; Adams, R W; Collins, J J

    1999-08-31

    Biological information-processing systems, such as populations of sensory and motor neurons, may use correlations between the firings of individual elements to obtain lower noise levels and a systemwide performance improvement in the dynamic range or the signal-to-noise ratio. Here, we implement such correlations in networks of coupled integrate-and-fire neurons using inhibitory coupling and demonstrate that this can improve the system dynamic range and the signal-to-noise ratio in a population rate code. The improvement can surpass that expected for simple averaging of uncorrelated elements. A theory that predicts the resulting power spectrum is developed in terms of a stochastic point-process model in which the instantaneous population firing rate is modulated by the coupling between elements.

  8. Mean-Field-Game Model for Botnet Defense in Cyber-Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolokoltsov, V. N., E-mail: v.kolokoltsov@warwick.ac.uk; Bensoussan, A.

    We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision-making process of customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under the additional natural assumption that the execution time of the decisions of the customers (say, switching the defence system on or off) is much faster than the infection rates.

  9. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory.

    PubMed

    Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P

    2003-01-01

    Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.

  10. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    NASA Astrophysics Data System (ADS)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h^-1 Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
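
    A single-sub-volume sketch of the two constraints above (match the reference dN/dz, preserve the rank order of the uncertain measurements) is given below; the full method repeats this over many overlapping pencil-beam sub-volumes and builds a redshift PDF per object. The toy reference sample and noise level here are assumptions made purely for illustration.

        import numpy as np

        def sort_recover(z_uncertain, z_reference, seed=0):
            """Assign each uncertain redshift a value drawn from the reference dN/dz,
            preserving the rank order of the uncertain measurements."""
            rng = np.random.default_rng(seed)
            draws = np.sort(rng.choice(z_reference, size=len(z_uncertain), replace=True))
            ranks = np.argsort(np.argsort(z_uncertain))   # rank of each uncertain redshift
            return draws[ranks]

        rng = np.random.default_rng(1)
        z_true = rng.gamma(shape=4.0, scale=0.2, size=1000)     # toy reference dN/dz
        z_obs = z_true + rng.normal(0.0, 0.1, size=1000)        # noisy measurements
        z_rec = sort_recover(z_obs, z_reference=z_true)
        print("scatter before:", np.std(z_obs - z_true), " after:", np.std(z_rec - z_true))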

  11. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and requires practically no calibration, resulting in a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing one to relax the classical iso-frequency assumption between rainfall and peak flow. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with AMC (antecedent moisture condition) values. The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
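
    The chain of modules described above can be caricatured in a few lines: sample a storm depth, convert it to effective rainfall with the SCS-CN relation, and rank the results to form an empirical frequency curve. In the sketch below a Gumbel distribution stands in for the regional TCEV, the IUH routing step is omitted (effective rainfall is used as a proxy for the peak), and every number is an illustrative assumption rather than one of the paper's regional parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        n_events = 5000

        # 1. Stochastic rainfall: storm depth P (mm) for a fixed duration (Gumbel stand-in).
        P = rng.gumbel(loc=60.0, scale=20.0, size=n_events)

        # 2. Hydrologic loss: SCS-CN with randomly sampled antecedent moisture (via CN).
        CN = rng.uniform(65.0, 85.0, size=n_events)
        S = 25400.0 / CN - 254.0                    # potential retention (mm)
        Ia = 0.2 * S                                # initial abstraction (mm)
        Q = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)   # effective rainfall (mm)

        # 3. Frequency curve: empirical return periods from the ranked events.
        Q_sorted = np.sort(Q)[::-1]
        T = (n_events + 1) / (np.arange(n_events) + 1)            # Weibull plotting positions
        for target in (10, 50, 100):
            print(f"T = {target:>3} yr: Q ~ {np.interp(target, T[::-1], Q_sorted[::-1]):.1f} mm")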

  12. Simple stochastic cellular automaton model for starved beds and implications about formation of sand topographic features in terms of sand flux

    NASA Astrophysics Data System (ADS)

    Endo, Noritaka

    2016-12-01

    A simple stochastic cellular automaton model is proposed for simulating bedload transport, especially for cases with a low transport rate and where available sediments are very sparse on substrates in a subaqueous system. Numerical simulations show that the bed type changes from sheet flow through sand patches to ripples as the amount of sand increases; this is consistent with observations in flume experiments and in the field. Without changes in external conditions, the sand flux calculated for a given amount of sand decreases over time as bedforms develop from a flat bed. This appears to be inconsistent with the general understanding that sand flux remains unchanged under the constant-fluid condition, but it is consistent with the previous experimental data. For areas of low sand abundance, the sand flux versus sand amount (flux-density relation) in the simulation shows a single peak with an abrupt decrease, followed by a long tail; this is very similar to the flux-density relation seen in automobile traffic flow. This pattern (the relation between segments of the curve and the corresponding bed states) suggests that sand sheets, sand patches, and sand ripples correspond respectively to the free-flow phase, congested phase, and jam phase of traffic flows. This implies that sand topographic features on starved beds are determined by the degree of interference between sand particles. Although the present study deals with simple cases only, this can provide a simplified but effective modeling of the more complicated sediment transport processes controlled by interference due to contact between grains, such as the pulsatory migration of grain-size bimodal mixtures with repetition of clustering and scattering.
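
    The traffic-flow-like flux-density behaviour described above can be reproduced in spirit by an asymmetric-exclusion caricature: grains on a periodic 1D lattice hop downstream with some probability only when the next cell is empty, so the flux first rises and then falls as the cover fraction grows. The rule and parameters below are illustrative stand-ins, not the paper's cellular automaton, and the symmetric peak they give lacks the long tail reported above.

        import numpy as np

        def bedload_flux(density, n_cells=2000, p_hop=0.6, n_steps=20000, seed=0):
            """Mean hop rate of a random-sequential exclusion process at a given cover density."""
            rng = np.random.default_rng(seed)
            occ = np.zeros(n_cells, dtype=bool)
            occ[rng.choice(n_cells, size=int(density * n_cells), replace=False)] = True
            moves = 0
            for _ in range(n_steps):
                i = rng.integers(n_cells)               # pick a random cell
                j = (i + 1) % n_cells                   # its downstream neighbour
                if occ[i] and not occ[j] and rng.random() < p_hop:
                    occ[i], occ[j] = False, True
                    moves += 1
            return moves / n_steps

        for rho in (0.05, 0.2, 0.5, 0.8):
            print(f"cover fraction {rho:.2f}: flux {bedload_flux(rho):.3f}")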

  13. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    ERIC Educational Resources Information Center

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…

  14. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  15. A coupled stochastic rainfall-evapotranspiration model for hydrological impact analysis

    NASA Astrophysics Data System (ADS)

    Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko E. C.

    2018-02-01

    A hydrological impact analysis concerns the study of the consequences of certain scenarios on one or more variables or fluxes in the hydrological cycle. In such an exercise, discharge is often considered, as floods originating from extremely high discharges often cause damage. Investigating the impact of extreme discharges generally requires long time series of precipitation and evapotranspiration to be used to force a rainfall-runoff model. However, such kinds of data may not be available and one should resort to stochastically generated time series, even though the impact of using such data on the overall discharge, and especially on the extreme discharge events, is not well studied. In this paper, stochastically generated rainfall and corresponding evapotranspiration time series, generated by means of vine copulas, are used to force a simple conceptual hydrological model. The results obtained are comparable to the modelled discharge using observed forcing data. Yet, uncertainties in the modelled discharge increase with an increasing number of stochastically generated time series used. Notwithstanding this finding, it can be concluded that using a coupled stochastic rainfall-evapotranspiration model has great potential for hydrological impact analysis.

  16. Numerical simulations in stochastic mechanics

    NASA Astrophysics Data System (ADS)

    McClendon, Marvin; Rabitz, Herschel

    1988-05-01

    The stochastic differential equation of Nelson's stochastic mechanics is integrated numerically for several simple quantum systems. The calculations are performed with use of Helfand and Greenside's method and pseudorandom numbers. The resulting trajectories are analyzed both individually and collectively to yield insight into momentum, uncertainty principles, interference, tunneling, quantum chaos, and common models of diatomic molecules from the stochastic quantization point of view. In addition to confirming Shucker's momentum theorem, these simulations illustrate, within the context of stochastic mechanics, the position-momentum and time-energy uncertainty relations, the two-slit diffraction pattern, exponential decay of an unstable system, and the greater degree of anticorrelation in a valence-bond model as compared with a molecular-orbital model of H2. The attempt to find exponential divergence of initially nearby trajectories, potentially useful as a criterion for quantum chaos, in a periodically forced oscillator is inconclusive. A way of computing excited energies from the ground-state motion is presented. In all of these studies the use of particle trajectories allows a more insightful interpretation of physical phenomena than is possible within traditional wave mechanics.

  17. Stochastic dynamics of cholera epidemics

    NASA Astrophysics Data System (ADS)

    Azaele, Sandro; Maritan, Amos; Bertuzzo, Enrico; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    2010-05-01

    We describe the predictions of an analytically tractable stochastic model for cholera epidemics following a single initial outbreak. The exact model relies on a set of assumptions that may restrict the generality of the approach and yet provides a realm of powerful tools and results. Without resorting to the depletion of susceptible individuals, as usually assumed in deterministic susceptible-infected-recovered models, we show that a simple stochastic equation for the number of ill individuals provides a mechanism for the decay of the epidemics occurring on the typical time scale of seasonality. The model is shown to provide a reasonably accurate description of the empirical data of the 2000/2001 cholera epidemic which took place in the KwaZulu-Natal Province, South Africa, with possibly notable epidemiological implications.

  18. Backward Estimation of Stochastic Processes with Failure Events as Time Origins

    PubMed Central

    Gary Chan, Kwun Chuen; Wang, Mei-Cheng

    2011-01-01

    Stochastic processes often exhibit sudden systematic changes in pattern a short time before certain failure events. Examples include increases in medical costs before death and decreases in CD4 counts before AIDS diagnosis. To study such terminal behavior of stochastic processes, a natural and direct way is to align the processes using failure events as time origins. This paper studies backward stochastic processes counting time backward from failure events, and proposes one-sample nonparametric estimation of the mean of backward processes when follow-up is subject to left truncation and right censoring. We discuss the benefits of including prevalent cohort data to enlarge the identifiable region, as well as large-sample properties of the proposed estimator and related extensions. A SEER–Medicare linked data set is used to illustrate the proposed methodologies. PMID:21359167

  19. Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case

    NASA Astrophysics Data System (ADS)

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.

    2010-06-01

    Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
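
    The difference between the two definitions can be checked by direct Monte Carlo for the simplest integrand: for a normal compound Poisson process X, the left-point (Ito-type) and midpoint (Stratonovich-type) sums of the integral of X dX are formed directly from the jumps, the midpoint sum telescopes to (X_T^2 - X_0^2)/2, and the two sums differ by half the sum of squared jumps. This is only a schematic check under those assumptions, not the authors' general construction.

        import numpy as np

        rng = np.random.default_rng(0)
        lam, T, sigma = 5.0, 10.0, 1.0               # jump rate, horizon, jump standard deviation

        n_jumps = rng.poisson(lam * T)               # number of jumps in [0, T]
        jumps = rng.normal(0.0, sigma, size=n_jumps)
        X_after = np.cumsum(jumps)                           # value just after each jump
        X_before = np.concatenate(([0.0], X_after[:-1]))     # value just before each jump

        ito = np.sum(X_before * jumps)                       # left-point (Ito-type) sum
        strato = np.sum(0.5 * (X_before + X_after) * jumps)  # midpoint (Stratonovich-type) sum

        print("Ito:", ito, " Stratonovich:", strato)
        print("Stratonovich equals (X_T^2 - X_0^2)/2:", np.isclose(strato, 0.5 * X_after[-1] ** 2))
        print("difference is half the sum of squared jumps:",
              np.isclose(strato - ito, 0.5 * np.sum(jumps ** 2)))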

  20. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    NASA Astrophysics Data System (ADS)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models influence the change in behavior of the very recently developed stochastic model of Hazra et al.

  1. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes—part III extensions and applications to kinetic theory and transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2017-08-01

    This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Combining nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.

  2. Use of the Wigner representation in scattering problems

    NASA Technical Reports Server (NTRS)

    Bemler, E. A.

    1975-01-01

    The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering as well as its multiple scattering expansion in this representation are derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering from a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections as well as for all necessary shadow corrections are obtained.

  3. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexity. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models that simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.

  4. A general time-dependent stochastic method for solving Parker's transport equation in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J.

    2010-12-01

    We present a detailed description of our newly developed stochastic approach for solving Parker's transport equation, which we believe is the first attempt to solve it with time dependence in 3-D, evolving from our 3-D steady state stochastic approach. Our formulation of this method is general and is valid for any type of heliospheric magnetic field, although we choose the standard Parker field as an example to illustrate the steps to calculate the transport of galactic cosmic rays. Our 3-D stochastic method is different from other stochastic approaches in the literature in several ways. For example, we employ spherical coordinates to integrate directly, which makes the code much more efficient by reducing coordinate transformations. What is more, the equivalence between our stochastic differential equations and Parker's transport equation is guaranteed by Ito's theorem in contrast to some other approaches. We generalize the technique for calculating particle flux based on the pseudoparticle trajectories for steady state solutions and for time-dependent solutions in 3-D. To validate our code, first we show that good agreement exists between solutions obtained by our steady state stochastic method and a traditional finite difference method. Then we show that good agreement also exists for our time-dependent method for an idealized and simplified heliosphere which has a Parker magnetic field and a simple initial condition for two different inner boundary conditions.

  5. Insights into the variability of nucleated amyloid polymerization by a minimalistic model of stochastic protein assembly

    NASA Astrophysics Data System (ADS)

    Eugène, Sarah; Xue, Wei-Feng; Robert, Philippe; Doumic, Marie

    2016-05-01

    Self-assembly of proteins into amyloid aggregates is an important biological phenomenon associated with human diseases such as Alzheimer's disease. Amyloid fibrils also have potential applications in nano-engineering of biomaterials. The kinetics of amyloid assembly show an exponential growth phase preceded by a lag phase, variable in duration as seen in bulk experiments and experiments that mimic the small volumes of cells. Here, to investigate the origins and the properties of the observed variability in the lag phase of amyloid assembly currently not accounted for by deterministic nucleation dependent mechanisms, we formulate a new stochastic minimal model that is capable of describing the characteristics of amyloid growth curves despite its simplicity. We then solve the stochastic differential equations of our model and give mathematical proof of a central limit theorem for the sample growth trajectories of the nucleated aggregation process. These results give an asymptotic description for our simple model, from which closed form analytical results capable of describing and predicting the variability of nucleated amyloid assembly were derived. We also demonstrate the application of our results to inform experiments in a conceptually friendly and clear fashion. Our model offers a new perspective and paves the way for a new and efficient approach on extracting vital information regarding the key initial events of amyloid formation.

  6. Probabilistic switching circuits in DNA

    PubMed Central

    Wilhelm, Daniel; Bruck, Jehoshua

    2018-01-01

    A natural feature of molecular systems is their inherent stochastic behavior. A fundamental challenge related to the programming of molecular information processing systems is to develop a circuit architecture that controls the stochastic states of individual molecular events. Here we present a systematic implementation of probabilistic switching circuits, using DNA strand displacement reactions. Exploiting the intrinsic stochasticity of molecular interactions, we developed a simple, unbiased DNA switch: An input signal strand binds to the switch and releases an output signal strand with probability one-half. Using this unbiased switch as a molecular building block, we designed DNA circuits that convert an input signal to an output signal with any desired probability. Further, this probability can be switched between 2n different values by simply varying the presence or absence of n distinct DNA molecules. We demonstrated several DNA circuits that have multiple layers and feedback, including a circuit that converts an input strand to an output strand with eight different probabilities, controlled by the combination of three DNA molecules. These circuits combine the advantages of digital and analog computation: They allow a small number of distinct input molecules to control a diverse signal range of output molecules, while keeping the inputs robust to noise and the outputs at precise values. Moreover, arbitrarily complex circuit behaviors can be implemented with just a single type of molecular building block. PMID:29339484
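
    The circuit-level idea (n unbiased p = 1/2 switches selecting among 2^n output probabilities) can be mimicked in software: interpret n fair coin flips as a binary value and fire the output when it falls below the target k, with the presence or absence of the n "control molecules" encoding the bits of k. This is an abstract illustration of the combinatorial logic, not a model of the strand-displacement chemistry.

        import numpy as np

        def probabilistic_switch(control_bits, n_trials=100_000, seed=0):
            """Emit an output with probability k / 2**n, where the n control bits
            (presence/absence of n distinct 'molecules') encode the integer k."""
            rng = np.random.default_rng(seed)
            n = len(control_bits)
            k = int("".join(str(b) for b in control_bits), 2)    # target numerator
            flips = rng.integers(0, 2, size=(n_trials, n))       # n fair switches per input
            values = flips @ (1 << np.arange(n - 1, -1, -1))     # binary value of the flips
            return (values < k).mean(), k / 2 ** n

        for bits in ([1, 0, 1], [0, 1, 1], [1, 1, 1]):
            est, target = probabilistic_switch(bits)
            print(f"control {bits}: empirical {est:.3f}, target {target:.3f}")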

  7. A simple approximation of moments of the quasi-equilibrium distribution of an extended stochastic theta-logistic model with non-integer powers.

    PubMed

    Bhowmick, Amiya Ranjan; Bandyopadhyay, Subhadip; Rana, Sourav; Bhattacharya, Sabyasachi

    2016-01-01

    The stochastic versions of the logistic and extended logistic growth models are applied successfully to explain many real-life population dynamics and share a central body of literature in stochastic modeling of ecological systems. To understand the randomness in the population dynamics of the underlying processes completely, it is important to have a clear idea about the quasi-equilibrium distribution and its moments. Bartlett et al. (1960) took a pioneering attempt for estimating the moments of the quasi-equilibrium distribution of the stochastic logistic model. Matis and Kiffe (1996) obtain a set of more accurate and elegant approximations for the mean, variance and skewness of the quasi-equilibrium distribution of the same model using cumulant truncation method. The method is extended for stochastic power law logistic family by the same and several other authors (Nasell, 2003; Singh and Hespanha, 2007). Cumulant truncation and some alternative methods e.g. saddle point approximation, derivative matching approach can be applied if the powers involved in the extended logistic set up are integers, although plenty of evidence is available for non-integer powers in many practical situations (Sibly et al., 2005). In this paper, we develop a set of new approximations for mean, variance and skewness of the quasi-equilibrium distribution under more general family of growth curves, which is applicable for both integer and non-integer powers. The deterministic counterpart of this family of models captures both monotonic and non-monotonic behavior of the per capita growth rate, of which theta-logistic is a special case. The approximations accurately estimate the first three order moments of the quasi-equilibrium distribution. The proposed method is illustrated with simulated data and real data from global population dynamics database. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomenon called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomenon can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem not to have been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
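
    The reduced limit discussed above (protein arriving in random bursts and decaying linearly in between) can be simulated directly as a piecewise-deterministic jump process, as sketched below. The repressive Hill-type burst rate, the exponential burst sizes and all rate constants are assumptions for illustration, and the thinning step relies on that rate being bounded by its value at zero protein; this is not the authors' specific system.

        import numpy as np

        def simulate_bursting_protein(burst_rate, mean_burst=20.0, gamma=0.1,
                                      p0=0.0, t_end=500.0, seed=0):
            """Piecewise-deterministic jump process: dp/dt = -gamma*p between bursts,
            bursts arrive at state-dependent rate burst_rate(p) and add an
            exponentially distributed amount of protein."""
            rng = np.random.default_rng(seed)
            t, p = 0.0, p0
            ts, ps = [t], [p]
            rate_max = burst_rate(0.0)          # upper bound (rate assumed decreasing in p)
            while t < t_end:
                dt = rng.exponential(1.0 / rate_max)   # propose events at the bounding rate
                p *= np.exp(-gamma * dt)               # deterministic decay between events
                t += dt
                if rng.random() < burst_rate(p) / rate_max:   # thinning: accept a real burst
                    p += rng.exponential(mean_burst)
                ts.append(t)
                ps.append(p)
            return np.array(ts), np.array(ps)

        # Negative auto-regulation: the burst rate decreases with the protein level.
        t, p = simulate_bursting_protein(lambda p: 1.0 / (1.0 + (p / 50.0) ** 2))
        print("average protein level over recorded events:", p.mean())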

  9. Stochastic switching in biology: from genotype to phenotype

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-03-01

    There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression is often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.

  10. A simple parameter can switch between different weak-noise-induced phenomena in a simple neuron model

    NASA Astrophysics Data System (ADS)

    Yamakou, Marius E.; Jost, Jürgen

    2017-10-01

    In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.

  11. Process-based quality for thermal spray via feedback control

    NASA Astrophysics Data System (ADS)

    Dykhuizen, R. C.; Neiser, R. A.

    2006-09-01

    Quality control of a thermal spray system manufacturing process is difficult due to the many input variables that need to be controlled. Great care must be taken to ensure that the process remains constant to obtain a consistent quality of the parts. Control is greatly complicated by the fact that measurement of particle velocities and temperatures is a noisy stochastic process. This article illustrates the application of quality control concepts to a wire flame spray process. A central feature of the real-time control system is an automatic feedback control scheme that provides fine adjustments to ensure that uncontrolled variations are accommodated. It is shown how the control vectors can be constructed from simple process maps to independently control particle velocity and temperature. This control scheme is shown to perform well in a real production environment. We also demonstrate that slight variations in the feed wire curvature can greatly influence the process. Finally, the geometry of the spray system and sensor must remain constant for the best reproducibility.

  12. Bounding filter - A simple solution to lack of exact a priori statistics.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Weiss, I. M.

    1972-01-01

    Wiener and Kalman-Bucy estimation problems assume that the models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for the Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance, which is not otherwise available, and also prevents its apparent divergence.

  13. Stochastic treatment of electron multiplication without scattering in dielectrics

    NASA Technical Reports Server (NTRS)

    Lin, D. L.; Beers, B. L.

    1981-01-01

    By treating the emission of optical phonons as a Markov process, a simple analytic method is developed for calculating the electronic ionization rate per unit length for dielectrics. The effects of scattering from acoustic and optical phonons are neglected. The treatment obtains universal functions in recursive form, the theory depending on only two dimensionless energy ratios. A comparison of the present work with other numerical approaches indicates that the effect of scattering becomes important only when the electric potential energy drop in a mean free path for optical-phonon emission is less than about 25% of the ionization potential. A comparison with Monte Carlo results is also given for Teflon.

  14. The Human Brain Project and neuromorphic computing

    PubMed Central

    Calimera, Andrea; Macii, Enrico; Poncino, Massimo

    Understanding how the brain manages billions of processing units connected via kilometers of fibers and trillions of synapses, while consuming a few tens of Watts, could provide the key to a completely new category of hardware (neuromorphic computing systems). In order to achieve this, a paradigm shift for computing as a whole is needed, which will see it moving away from current "bit precise" computing models and towards new techniques that exploit the stochastic behavior of simple, reliable, very fast, low-power computing devices embedded in intensely recursive architectures. In this paper we summarize how these objectives will be pursued in the Human Brain Project. PMID:24139655

  15. Modelling nematode movement using time-fractional dynamics.

    PubMed

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
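
    A generic sketch of such a correlated random walk is given below: turning angles come from a von Mises distribution and speeds follow an AR(1) process, both simple stand-ins for the empirically measured distributions mentioned above, with the mean squared displacement at several lags printed as a quick diagnostic of the resulting (non-simple) diffusion.

        import numpy as np

        def correlated_random_walk(n_steps=5000, kappa=4.0, speed_corr=0.8,
                                   mean_speed=1.0, seed=0):
            """2D correlated random walk: von Mises turning angles (concentration kappa)
            and AR(1)-correlated speeds."""
            rng = np.random.default_rng(seed)
            heading, speed = 0.0, mean_speed
            pos = np.zeros((n_steps + 1, 2))
            for i in range(n_steps):
                heading += rng.vonmises(0.0, kappa)           # persistent direction
                speed = (speed_corr * speed + (1.0 - speed_corr) * mean_speed
                         + 0.1 * rng.normal())                # correlated speed
                speed = max(speed, 0.0)
                pos[i + 1] = pos[i] + speed * np.array([np.cos(heading), np.sin(heading)])
            return pos

        pos = correlated_random_walk()
        lags = [10, 100, 1000]
        msd = [np.mean(np.sum((pos[l:] - pos[:-l]) ** 2, axis=1)) for l in lags]
        print("MSD at lags", lags, ":", [round(m, 1) for m in msd])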

  16. Entropic forces in Brownian motion

    NASA Astrophysics Data System (ADS)

    Roos, Nico

    2014-12-01

    Interest in the concept of entropic forces has risen considerably since Verlinde proposed in 2011 to interpret the force in Newton's second law and gravity as entropic forces. Brownian motion—the motion of a small particle (pollen) driven by random impulses from the surrounding molecules—may be the first example of a stochastic process in which such forces are expected to emerge. In this article, it is shown that at least two types of entropic force can be identified in three-dimensional Brownian motion. This analysis yields simple derivations of known results of Brownian motion, Hooke's law, and—applying an external (non-radial) force—Curie's law and the Langevin-Debye equation.

  17. Energy-balance climate models

    NASA Technical Reports Server (NTRS)

    North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.

    1980-01-01

    An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes are solved and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.

  18. Energy balance climate models

    NASA Technical Reports Server (NTRS)

    North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.

    1981-01-01

    An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes are solved, and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.

  19. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency studies. Then, we proposed a new concept of hydrological genes, by analogy with biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process, namely the jump, trend, periodic, dependence and pure random components, were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were jointly considered and the inheritance, variability and evolution principles were fully described. Our study contributes to revealing the inheritance, variability and evolution principles of the probability distributions of hydrological elements.

  20. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. Twelve simulation experiments are performed, each using 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.

  1. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results obtained by assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill throughout the 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecasting biases and corrects the forecast through this nonlinear heating mechanism.
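
    As a schematic illustration of this kind of perturbation strategy (and not of the authors' intermediate coupled model itself), the Python sketch below adds zero-mean, temporally correlated perturbations to every member of an ensemble run of a hypothetical nonlinear scalar model; the model equation, noise amplitude and correlation time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, forcing):
    # Hypothetical nonlinear toy model (not the paper's coupled model): damped cubic dynamics.
    return x + 0.1 * (-0.3 * x - 0.5 * x**3) + forcing

n_members, n_steps = 100, 120
x = np.full(n_members, 0.5)      # ensemble started from identical initial conditions
x_control = 0.5                  # unperturbed deterministic control run
eta = np.zeros(n_members)        # zero-mean, temporally correlated (AR(1)) model-error term

for _ in range(n_steps):
    eta = 0.9 * eta + 0.05 * rng.standard_normal(n_members)
    x = step(x, eta)
    x_control = step(x_control, 0.0)

# With a nonlinear model, zero-mean perturbations can still shift the ensemble mean
# away from the control run, which is the mechanism the abstract appeals to.
print("control run:    ", x_control)
print("ensemble mean:  ", x.mean())
print("ensemble spread:", x.std())
```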

  2. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the discrete system states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at a lower computational cost than traditional Monte Carlo-based methods. PMID:28231313
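
    The moment-equation machinery of the paper is involved; purely as a hedged illustration of the setting, the sketch below simulates a single degradation variable with an Euler-Maruyama SDE step plus Poisson random shocks acting as a reset map, estimates reliability by Monte Carlo, and computes a crude FOSM-style reliability index from the first two sample moments. All rates, the failure threshold and the horizon are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: a linear drift/diffusion degradation process plus
# Poisson random shocks that add an abrupt amount of extra damage (reset map).
mu, sigma = 0.02, 0.05
shock_rate, shock_size = 0.1, 0.3
threshold, T, dt = 2.0, 20.0, 0.01
n_paths, n_steps = 5000, int(T / dt)

x = np.zeros(n_paths)
crossed = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)  # SDE step
    shocks = rng.random(n_paths) < shock_rate * dt                     # shock arrivals
    x += shock_size * shocks                                           # discrete jumps
    crossed |= x >= threshold                                          # record failures

print("Monte Carlo reliability R(T):", 1.0 - crossed.mean())
# Crude FOSM-style surrogate computed from the first two moments of the end state.
beta = (threshold - x.mean()) / x.std()
print("FOSM reliability index beta:", beta)
```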

  3. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the discrete system states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at a lower computational cost than traditional Monte Carlo-based methods.

  4. Uncertainty Reduction for Stochastic Processes on Complex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.

  5. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify the processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low, indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, with community variability decreasing as bryophyte cover increased. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  6. Gene regulation and noise reduction by coupling of stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre F.; Hornos, José Eduardo M.; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
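
    As a schematic companion to this mechanism (not the authors' analytical treatment), the following Gillespie-type simulation couples a two-state gene, whose on-to-off switching rate grows with protein number, to protein birth and death. The rates are illustrative assumptions; a Fano factor below one would indicate the sub-Poissonian regime discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative rates (not taken from the paper): synthesis in each gene state,
# degradation, and protein-number-dependent switching between the two gene states.
k_on, k_off = 40.0, 5.0     # protein synthesis rates in the "on" and "off" states
gamma = 1.0                 # protein degradation rate
f, h = 50.0, 2.0            # switching rates: off->on constant, on->off grows with n

n, state, t, T = 0, 1, 0.0, 200.0
samples = []
while t < T:
    rates = np.array([
        k_on if state else k_off,   # protein synthesis
        gamma * n,                  # protein degradation
        h * n if state else 0.0,    # on -> off (repression by the gene's own protein)
        0.0 if state else f,        # off -> on
    ])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    r = rng.choice(4, p=rates / total)
    if r == 0: n += 1
    elif r == 1: n -= 1
    elif r == 2: state = 0
    else: state = 1
    samples.append(n)               # crude event-time sampling, adequate for a sketch

samples = np.array(samples[len(samples) // 2:])   # discard the transient
print("mean protein number:", samples.mean())
print("Fano factor (var/mean):", samples.var() / samples.mean())
```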

  7. Gene regulation and noise reduction by coupling of stochastic processes

    PubMed Central

    Hornos, José Eduardo M.; Reinitz, John

    2015-01-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: there exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction. PMID:25768447

  8. Gene regulation and noise reduction by coupling of stochastic processes.

    PubMed

    Ramos, Alexandre F; Hornos, José Eduardo M; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.

  9. Fractional Brownian motors and stochastic resonance

    NASA Astrophysics Data System (ADS)

    Goychuk, Igor; Kharchenko, Vasyl

    2012-05-01

    We study fluctuating tilt Brownian ratchets based on fractional subdiffusion in sticky viscoelastic media characterized by a power law memory kernel. Unlike the normal diffusion case, the rectification effect vanishes in the adiabatically slow modulation limit and optimizes in a driving frequency range. It is shown also that the anomalous rectification effect is maximal (stochastic resonance effect) at optimal temperature and can be of surprisingly good quality. Moreover, subdiffusive current can flow in the counterintuitive direction upon a change of temperature or driving frequency. The dependence of anomalous transport on load exhibits a remarkably simple universality.

  10. Controlling the high frequency response of H2 by ultra-short tailored laser pulses: A time-dependent configuration interaction study

    NASA Astrophysics Data System (ADS)

    Schönborn, Jan Boyke; Saalfrank, Peter; Klamroth, Tillmann

    2016-01-01

    We combine the stochastic pulse optimization (SPO) scheme with the time-dependent configuration interaction singles method in order to control the high frequency response of a simple molecular model system to a tailored femtosecond laser pulse. For this purpose, we use H2 treated in the fixed nuclei approximation. The SPO scheme, like similar genetic algorithms, is especially suited to controlling highly non-linear processes, which we consider here in the context of high harmonic generation. Here, we will demonstrate that SPO can be used to realize a "non-harmonic" response of H2 to a laser pulse. Specifically, we will show how adding low intensity side frequencies to the dominant carrier frequency of the laser pulse and stochastically optimizing their contribution can create a high-frequency spectral signal of significant intensity, not harmonic to the carrier frequency. At the same time, it is possible to suppress the harmonic signals in the same spectral region, although the carrier frequency is kept dominant during the optimization.

  11. Shortcuts to adiabaticity using flow fields

    NASA Astrophysics Data System (ADS)

    Patra, Ayoti; Jarzynski, Christopher

    2017-12-01

    A shortcut to adiabaticity is a recipe for generating adiabatic evolution at an arbitrary pace. Shortcuts have been developed for quantum, classical and (most recently) stochastic dynamics. A shortcut might involve a counterdiabatic (CD) Hamiltonian that causes a system to follow the adiabatic evolution at all times, or it might utilize a fast-forward (FF) potential, which returns the system to the adiabatic path at the end of the process. We develop a general framework for constructing shortcuts to adiabaticity from flow fields that describe the desired adiabatic evolution. Our approach encompasses quantum, classical and stochastic dynamics, and provides surprisingly compact expressions for both CD Hamiltonians and FF potentials. We illustrate our method with numerical simulations of a model system, and we compare our shortcuts with previously obtained results. We also consider the semiclassical connections between our quantum and classical shortcuts. Our method, like the FF approach developed by previous authors, is susceptible to singularities when applied to excited states of quantum systems; we propose a simple, intuitive criterion for determining whether these singularities will arise, for a given excited state.

  12. Controlling the high frequency response of H{sub 2} by ultra-short tailored laser pulses: A time-dependent configuration interaction study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schönborn, Jan Boyke; Saalfrank, Peter; Klamroth, Tillmann, E-mail: klamroth@uni-potsdam.de

    2016-01-28

    We combine the stochastic pulse optimization (SPO) scheme with the time-dependent configuration interaction singles method in order to control the high frequency response of a simple molecular model system to a tailored femtosecond laser pulse. For this purpose, we use H{sub 2} treated in the fixed nuclei approximation. The SPO scheme, like similar genetic algorithms, is especially suited to controlling highly non-linear processes, which we consider here in the context of high harmonic generation. Here, we will demonstrate that SPO can be used to realize a “non-harmonic” response of H{sub 2} to a laser pulse. Specifically, we will show how adding low intensity side frequencies to the dominant carrier frequency of the laser pulse and stochastically optimizing their contribution can create a high-frequency spectral signal of significant intensity, not harmonic to the carrier frequency. At the same time, it is possible to suppress the harmonic signals in the same spectral region, although the carrier frequency is kept dominant during the optimization.

  13. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  14. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  15. Simulation studies of phase inversion in agitated vessels using a Monte Carlo technique.

    PubMed

    Yeo, Leslie Y; Matar, Omar K; Perez de Ortiz, E Susana; Hewitt, Geoffrey F

    2002-04-15

    A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed.

  16. Noise-induced transitions and shifts in a climate-vegetation feedback model.

    PubMed

    Alexandrov, Dmitri V; Bashkirtseva, Irina A; Ryashko, Lev B

    2018-04-01

    Motivated by the extremely important role of the Earth's vegetation dynamics in climate changes, we study the stochastic variability of a simple climate-vegetation system. In the case of deterministic dynamics, the system has one stable equilibrium and limit cycle or two stable equilibria corresponding to two opposite (cold and warm) climate-vegetation states. These states are divided by a separatrix going across a point of unstable equilibrium. Some possible stochastic scenarios caused by different externally induced natural and anthropogenic processes inherit properties of deterministic behaviour and drastically change the system dynamics. We demonstrate that the system transitions across its separatrix occur with increasing noise intensity. The climate-vegetation system therewith fluctuates, transits and localizes in the vicinity of its attractor. We show that this phenomenon occurs within some critical range of noise intensities. A noise-induced shift into the range of smaller global average temperatures corresponding to substantial oscillations of the Earth's vegetation cover is revealed. Our analysis demonstrates that the climate-vegetation interactions essentially contribute to climate dynamics and should be taken into account in more precise and complex models of climate variability.

  17. Revisiting Temporal Markov Chains for Continuum modeling of Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Delgoshaie, A. H.; Jenny, P.; Tchelepi, H.

    2017-12-01

    The transport of fluids in porous media is dominated by flow-field heterogeneity resulting from the underlying permeability field. Due to the high uncertainty in the permeability field, many realizations of the reference geological model are used to describe the statistics of the transport phenomena in a Monte Carlo (MC) framework. There has been strong interest in working with stochastic formulations of the transport that are different from the standard MC approach. Several stochastic models based on a velocity process for tracer particle trajectories have been proposed. Previous studies have shown that for high variances of the log-conductivity, the stochastic models need to account for correlations between consecutive velocity transitions to predict dispersion accurately. The correlated velocity models proposed in the literature can be divided into two general classes of temporal and spatial Markov models. Temporal Markov models have been applied successfully to tracer transport in both the longitudinal and transverse directions. These temporal models are Stochastic Differential Equations (SDEs) with very specific drift and diffusion terms tailored for a specific permeability correlation structure. The drift and diffusion functions devised for a certain setup would not necessarily be suitable for a different scenario (e.g., a different permeability correlation structure). The spatial Markov models are simple discrete Markov chains that do not require case specific assumptions. However, transverse spreading of contaminant plumes has not been successfully modeled with the available correlated spatial models. Here, we propose a temporal discrete Markov chain to model both the longitudinal and transverse dispersion in a two-dimensional domain. We demonstrate that these temporal Markov models are valid for different correlation structures without modification. Similar to the temporal SDEs, the proposed model respects the limited asymptotic transverse spreading of the plume in two-dimensional problems.
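
    A minimal sketch of the idea of a temporal (discrete) Markov chain for particle velocities is given below; the velocity classes and the transition matrix are purely illustrative and are not derived from any permeability field.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: the longitudinal particle velocity is discretized into a few classes
# and evolves as a temporal Markov chain; the transition matrix is illustrative only.
velocities = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
P = np.array([                      # row-stochastic, persistent transition matrix
    [0.80, 0.15, 0.04, 0.01, 0.00],
    [0.10, 0.75, 0.10, 0.04, 0.01],
    [0.03, 0.10, 0.74, 0.10, 0.03],
    [0.01, 0.04, 0.10, 0.75, 0.10],
    [0.00, 0.01, 0.04, 0.15, 0.80],
])

n_particles, n_steps, dt = 2000, 500, 1.0
state = rng.integers(0, 5, n_particles)
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += velocities[state] * dt                       # advect with the current velocity class
    # Sample the next class of every particle from its row of P (inverse-CDF trick).
    u = rng.random(n_particles)
    state = (np.cumsum(P[state], axis=1) < u[:, None]).sum(axis=1)

# Longitudinal spreading (dispersion) of the plume of particles.
print("mean displacement:", x.mean(), "longitudinal variance:", x.var())
```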

  18. Stochastic Spectral Descent for Discrete Graphical Models

    DOE PAGES

    Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...

    2015-12-14

    Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.

  19. Stochastic dynamics for reinfection by transmitted diseases

    NASA Astrophysics Data System (ADS)

    Barros, Alessandro S.; Pinho, Suani T. R.

    2017-06-01

    The use of stochastic models to study the dynamics of infectious diseases is an important tool to understand the epidemiological process. For several directly transmitted diseases, reinfection is a relevant process, which can be expressed by endogenous reactivation of the pathogen or by exogenous reinfection due to direct contact with an infected individual (with a smaller reinfection rate σβ than the infection rate β). In this paper, we examine the stochastic susceptible, infected, recovered, infected (SIRI) model, simulating endogenous reactivation as a spontaneous reaction and exogenous reinfection as a catalytic reaction. Analyzing the mean-field approximations of a site and of pairs of sites, together with Monte Carlo (MC) simulations for the particular case of exogenous reinfection, we obtained continuous phase transitions involving endemic, epidemic, and no-transmission phases for the simple approach; the pair approximation describes better the phase transition from the endemic phase (susceptible, infected, susceptible (SIS)-like model) to the epidemic phase (susceptible, infected, and removed or recovered (SIR)-like model) when compared with the MC results; reinfection increases the peaks of outbreaks until the system reaches the endemic phase. For the particular case of endogenous reactivation, the pair approximation leads to a continuous phase transition from the endemic phase (SIS-like model) to the no-transmission phase. Finally, there is no phase transition when both effects are taken into account. We hope the results of this study can be generalized to the susceptible, exposed, infected, and removed or recovered (SEIRIE) model, in which the exposed state (infected but not infectious) describes more realistically transmitted diseases such as tuberculosis. In future work, we also intend to investigate the effect of network topology on phase transitions when the SIRI model describes both transmitted diseases (σ < 1) and social contagions (σ > 1).
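
    For readers who want to experiment with the model, the following direct (Gillespie) simulation of a well-mixed SIRI population implements the two reinfection routes described above. The rates, population size and reinfection factor σ are illustrative choices, and the mean-field and pair approximations of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Well-mixed SIRI model: S -> I (rate beta*S*I/N), I -> R (rate mu*I),
# R -> I by exogenous reinfection (rate sigma*beta*R*I/N) and by
# endogenous reactivation (rate w*R). Parameter values are illustrative.
N, beta, mu, sigma, w = 1000, 0.4, 0.2, 0.3, 0.01
S, I, R = N - 10, 10, 0
t, T = 0.0, 400.0

while t < T and I > 0:
    rates = np.array([
        beta * S * I / N,            # infection of susceptibles
        mu * I,                      # recovery
        sigma * beta * R * I / N,    # exogenous reinfection (contact-driven)
        w * R,                       # endogenous reactivation (spontaneous)
    ])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    r = rng.choice(4, p=rates / total)
    if r == 0:   S -= 1; I += 1
    elif r == 1: I -= 1; R += 1
    else:        R -= 1; I += 1      # either reinfection route returns an R individual to I

print(f"t={t:.1f}  S={S}  I={I}  R={R}")
```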

  20. Boosting Bayesian parameter inference of stochastic differential equation models with methods from statistical physics

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, and give us access to the most sophisticated methods that have been developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
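
    The Bayesian machinery itself (Hamiltonian Monte Carlo over model realizations) is beyond a short sketch, but the toy forward model can be written down compactly. The Euler-Maruyama simulation below assumes a linear reservoir with constant input and a noise standard deviation that scales linearly with the stored volume; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear-reservoir SDE: dV = (r - k*V) dt + s*V dW, with constant input r,
# outflow k*V, and noise whose standard deviation scales linearly with the
# stored water volume V. Parameters are illustrative, not calibrated values.
r, k, s = 1.0, 0.1, 0.3
dt, n_steps = 0.01, 200000
V = np.empty(n_steps)
V[0] = r / k                                    # start at the deterministic steady state
for i in range(1, n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    V[i] = max(V[i-1] + (r - k * V[i-1]) * dt + s * V[i-1] * dW, 0.0)

runoff = k * V                                  # simulated discharge
print("mean runoff:", runoff.mean())
print("skewness (fat-tail indicator):",
      ((runoff - runoff.mean())**3).mean() / runoff.std()**3)
```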

  1. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
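
    A hedged, minimal reproduction of the simulation part of such a study is sketched below: a stationary Gaussian AR(1) series (standing in for a process with a selected autocorrelation function) is generated and its overshoots of a fixed level are counted. The AR coefficient, series length and level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stationary Gaussian AR(1) process as a stand-in for a process with a chosen
# (here exponential) autocorrelation function; parameters are illustrative.
phi, n = 0.9, 200000
x = np.empty(n)
x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)
for i in range(1, n):
    x[i] = phi * x[i-1] + rng.standard_normal()
x = (x - x.mean()) / x.std()                   # standardize to zero mean, unit variance

level = 1.5
above = x >= level
up = (~above[:-1]) & above[1:]                 # upcrossing events mark new overshoots
print("number of overshoots:", up.sum())
print("fraction of time above the level:", above.mean())
print("mean exceedance while above:", x[above].mean() - level)
```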

  2. A study of the effect of space-dependent neutronics on stochastically-induced bifurcations in BWR dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Analytis, G.T.

    1995-09-01

    A non-linear one-group space-dependent neutronic model for a finite one-dimensional core is coupled with a simple BWR feed-back model. In agreement with results obtained by the authors who originally developed the point-kinetics version of this model, we shall show numerically that stochastic reactivity excitations may result in limit-cycles and eventually in a chaotic behaviour, depending on the magnitude of the feed-back coefficient K. In the framework of this simple space-dependent model, the effect of the non-linearities on the different spatial harmonics is studied, and the importance of the space-dependent effects is exemplified and assessed in terms of the importance of the higher harmonics. It is shown that under certain conditions, when limit-cycle-type oscillations develop, the neutron spectra may exhibit strong space-dependent effects.

  3. A Black-Scholes Approach to Satisfying the Demand in a Failure-Prone Manufacturing System

    NASA Technical Reports Server (NTRS)

    Chavez-Fuentes, Jorge R.; Gonzalex, Oscar R.; Gray, W. Steven

    2007-01-01

    The goal of this paper is to use a financial model and a hedging strategy in a systems application. In particular, the classical Black-Scholes model, which was developed in 1973 to find the fair price of a financial contract, is adapted to satisfy an uncertain demand in a manufacturing system when one of two production machines is unreliable. This financial model, together with a hedging strategy, is used to develop a closed-form expression for the production strategies of each machine. The strategy guarantees that the uncertain demand will be met in probability at the final time of the production process. It is assumed that the production efficiency of the unreliable machine can be modeled as a continuous-time stochastic process. Two simple examples illustrate the result.

  4. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo

    PubMed Central

    Golightly, Andrew; Wilkinson, Darren J.

    2011-01-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
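
    As an illustration of the kind of computation that sits inside particle MCMC, the sketch below runs a bootstrap particle filter to estimate the log-likelihood of a simple stochastic kinetic model (an immigration-death process observed with Gaussian noise). The model, rates and noise level are assumptions made for the sketch; the paper itself treats a Lotka-Volterra system and a prokaryotic auto-regulatory network.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(x, dt, k_in, k_out):
    """Advance one Gillespie path of an immigration-death process over an interval dt."""
    t = 0.0
    while True:
        total = k_in + k_out * x
        t += rng.exponential(1.0 / total)
        if t > dt:
            return x
        x += 1 if rng.random() < k_in / total else -1

# Synthetic data: latent counts observed through additive Gaussian noise.
k_in, k_out, obs_sd, dt = 10.0, 0.1, 2.0, 1.0
true_x, data = 50, []
for _ in range(30):
    true_x = simulate(true_x, dt, k_in, k_out)
    data.append(true_x + obs_sd * rng.standard_normal())

def log_likelihood(k_in, k_out, n_particles=200):
    """Bootstrap particle filter estimate of the (unnormalized) log-likelihood."""
    particles = np.full(n_particles, 50)
    ll = 0.0
    for y in data:
        particles = np.array([simulate(int(p), dt, k_in, k_out) for p in particles])
        w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2) + 1e-300  # observation weights
        ll += np.log(w.mean())
        particles = rng.choice(particles, n_particles, p=w / w.sum())  # resample
    return ll

print("estimated log-likelihood at the true rates:", log_likelihood(k_in, k_out))
```

    Inside a particle MCMC scheme, this noisy likelihood estimate would replace the exact (intractable) likelihood in a Metropolis-Hastings acceptance ratio over the rate parameters.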

  5. Weak limits of powers, simple spectrum of symmetric products, and rank-one mixing constructions

    NASA Astrophysics Data System (ADS)

    Ryzhikov, V. V.

    2007-06-01

    A class of automorphisms of the Lebesgue space such that their symmetric powers have simple spectrum is considered. In the framework of rank-one constructions mixing automorphisms with this property are constructed. The paper also contains results on weak limits, the local rank, and the spectral multiplicity of powers of automorphisms. Spectral properties of the stochastic Chacon automorphism are discussed. Bibliography: 23 titles.

  6. Stochastic Calculus and Differential Equations for Physics and Finance

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2013-02-01

    1. Random variables and probability distributions; 2. Martingales, Markov, and nonstationarity; 3. Stochastic calculus; 4. Ito processes and Fokker-Planck equations; 5. Selfsimilar Ito processes; 6. Fractional Brownian motion; 7. Kolmogorov's PDEs and Chapman-Kolmogorov; 8. Non Markov Ito processes; 9. Black-Scholes, martingales, and Feynman-Kac; 10. Stochastic calculus with martingales; 11. Statistical physics and finance, a brief history of both; 12. Introduction to new financial economics; 13. Statistical ensembles and time series analysis; 14. Econometrics; 15. Semimartingales; References; Index.

  7. Mean first-passage times of non-Markovian random walkers in confinement.

    PubMed

    Guérin, T; Levernier, N; Bénichou, O; Voituriez, R

    2016-06-16

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.

  8. Mean first-passage times of non-Markovian random walkers in confinement

    NASA Astrophysics Data System (ADS)

    Guérin, T.; Levernier, N.; Bénichou, O.; Voituriez, R.

    2016-06-01

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.

  9. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Treesearch

    Mo Zhou; Joseph Buongiorno

    2011-01-01

    Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
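
    A toy version of such an MDP can make the idea concrete: the sketch below runs value iteration for a three-state stand coupled to a two-state Markov interest rate, where the discount factor depends on the current rate state. All transition probabilities, harvest values and interest rates are hypothetical.

```python
import numpy as np

# Toy forest-management MDP with a stochastic interest rate. Stand states:
# 0=young, 1=mature, 2=old; actions: wait or harvest. The interest rate follows
# its own two-state Markov chain (low/high), which changes the discount factor.
# All numbers below are illustrative, not taken from the paper.
growth = np.array([[0.3, 0.7, 0.0],        # "wait": stand-growth transition probabilities
                   [0.0, 0.4, 0.6],
                   [0.0, 0.0, 1.0]])
harvest_value = np.array([5.0, 20.0, 35.0])
rate_P = np.array([[0.9, 0.1],             # interest-rate Markov chain (low/high)
                   [0.2, 0.8]])
discount = np.array([1 / 1.03, 1 / 1.07])  # discount factor in the low/high rate states

V = np.zeros((3, 2))                        # value over (stand state, rate state)
for _ in range(500):                        # value iteration
    V_new = np.empty_like(V)
    for s in range(3):
        for r in range(2):
            wait = discount[r] * rate_P[r] @ (growth[s] @ V)       # keep growing
            cut = harvest_value[s] + discount[r] * rate_P[r] @ V[0]  # harvest, replant young
            V_new[s, r] = max(wait, cut)
    V = V_new

policy = [["wait" if discount[r] * rate_P[r] @ (growth[s] @ V) >
           harvest_value[s] + discount[r] * rate_P[r] @ V[0] else "harvest"
           for r in range(2)] for s in range(3)]
print("optimal policy (rows: stand state, cols: rate state):", policy)
```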

  10. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  11. Stochastic Forcing for High-Resolution Regional and Global Ocean and Atmosphere-Ocean Coupled Ensemble Forecast System

    NASA Astrophysics Data System (ADS)

    Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.

    2017-12-01

    An extended range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to the model error overtakes the initial condition as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill measured by a range of deterministic and probabilistic metrics.
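
    A possible minimal construction of such a perturbation field (not the Navy system's actual implementation) is sketched below: spatially smoothed Gaussian noise evolved as an AR(1) process in time, giving a zero-mean forcing field with prescribed horizontal and temporal correlation scales. The grid size, correlation scales and amplitude are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(8)

# Zero-mean stochastic forcing field: white noise is smoothed in space (Gaussian
# filter, setting the horizontal correlation length) and evolved in time as an
# AR(1) process (setting the decorrelation time). All scales are made-up values.
nx, ny = 128, 96
space_scale = 6.0           # horizontal correlation length in grid points
tau, dt = 5.0, 1.0          # decorrelation time and time step
alpha = np.exp(-dt / tau)   # AR(1) coefficient implied by the decorrelation time
amplitude = 0.1

def smoothed_noise():
    field = gaussian_filter(rng.standard_normal((ny, nx)), sigma=space_scale)
    return field / field.std()                    # normalize to unit variance

forcing = amplitude * smoothed_noise()
for _ in range(100):
    # Evolve the perturbation field; the sqrt factor keeps its variance constant.
    forcing = alpha * forcing + np.sqrt(1 - alpha**2) * amplitude * smoothed_noise()
    # In an ensemble system this field would be added to a tendency equation here.

print("field mean:", forcing.mean(), "field std:", forcing.std())
```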

  12. Quantum-field-theoretical approach to phase-space techniques: Generalizing the positive-P representation

    NASA Astrophysics Data System (ADS)

    Plimak, L. I.; Fleischhauer, M.; Olsen, M. K.; Collett, M. J.

    2003-01-01

    We present an introduction to phase-space techniques (PST) based on a quantum-field-theoretical (QFT) approach. In addition to bridging the gap between PST and QFT, our approach results in a number of generalizations of the PST. First, for problems where the usual PST do not result in a genuine Fokker-Planck equation (even after phase-space doubling) and hence fail to produce a stochastic differential equation (SDE), we show how the system in question may be approximated via stochastic difference equations (SΔE). Second, we show that introducing sources into the SDE’s (or SΔE’s) generalizes them to a full quantum nonlinear stochastic response problem (thus generalizing Kubo’s linear response theory to a quantum nonlinear stochastic response theory). Third, we establish general relations linking quantum response properties of the system in question to averages of operator products ordered in a way different from time normal. This extends PST to a much wider assemblage of operator products than are usually considered in phase-space approaches. In all cases, our approach yields a very simple and straightforward way of deriving stochastic equations in phase space.

  13. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  14. Stochastic associative memory

    NASA Astrophysics Data System (ADS)

    Baumann, Erwin W.; Williams, David L.

    1993-08-01

    Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.

  15. Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces

    NASA Astrophysics Data System (ADS)

    Vacaru, S. I.

    2012-03-01

    We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.

  16. The probability density function (PDF) of Lagrangian Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, B.

    2012-12-01

    The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully-developed turbulence is a generic noise determined by the general theorems in probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative noise (multiplying the fluid velocity) in the stochastic Navier-Stokes equation. We let this multiplicative noise in the equation consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared to theory. The statistical theory of turbulence, including the scaling of the structure functions of turbulence, is determined by the invariant measure of the Navier-Stokes equation, and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In distinction to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. A suitable projection of the Kolmogorov-Hopf equations is the differential equation determining the generalized hyperbolic distributions. Then we compare these PDFs with DNS results and experimental data.

  17. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the boundedness condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  18. Fluctuation theorem: A critical review

    NASA Astrophysics Data System (ADS)

    Malek Mansour, M.; Baras, F.

    2017-10-01

    The fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems and the resulting stochastic thermodynamics are analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.

  19. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    PubMed

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.

  20. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-01

    Stochasticity plays an important role in the evolutionary dynamic of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the stochastic evolutionary dynamic of the game. The results of the numerical experiments are exhibited as pseudo-colour ternary heat maps. Comparisons of these diagrams show that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix, and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  1. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process.

    PubMed

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-27

    Stochasticity plays an important role in the evolutionary dynamic of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the stochastic evolutionary dynamic of the game. The results of the numerical experiments are exhibited as pseudo-colour ternary heat maps. Comparisons of these diagrams show that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix, and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.
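
    The limiting-distribution analysis described above is, at its core, a stationary-distribution computation for a finite Markov chain. The sketch below shows the generic version of that computation by power iteration on an arbitrary 3-state transition matrix; the matrix is an illustrative stand-in, not the state-dependent QBD model of the RSP game.

      import numpy as np

      # Stationary (limiting) distribution of a small discrete-time Markov chain
      # via power iteration: pi <- pi P until convergence.  The transition
      # matrix P below is an assumed stand-in, not the QBD of the paper.
      P = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3],
                    [0.3, 0.3, 0.4]])

      pi = np.full(3, 1.0 / 3.0)
      for _ in range(1000):
          pi = pi @ P

      print("limiting distribution:", pi)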

  2. Analyzing long-term correlated stochastic processes by means of recurrence networks: Potentials and pitfalls

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Donner, Reik V.; Kurths, Jürgen

    2015-02-01

    Long-range correlated processes are ubiquitous, ranging from climate variables to financial time series. One paradigmatic example for such processes is fractional Brownian motion (fBm). In this work, we highlight the potentials and conceptual as well as practical limitations when applying the recently proposed recurrence network (RN) approach to fBm and related stochastic processes. In particular, we demonstrate that the results of a previous application of RN analysis to fBm [Liu et al. Phys. Rev. E 89, 032814 (2014), 10.1103/PhysRevE.89.032814] are mainly due to an inappropriate treatment disregarding the intrinsic nonstationarity of such processes. Complementarily, we analyze some RN properties of the closely related stationary fractional Gaussian noise (fGn) processes and find that the resulting network properties are well-defined and behave as one would expect from basic conceptual considerations. Our results demonstrate that RN analysis can indeed provide meaningful results for stationary stochastic processes, given a proper selection of its intrinsic methodological parameters, whereas it is prone to fail to uniquely retrieve RN properties for nonstationary stochastic processes like fBm.

  3. Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    PubMed Central

    Orio, Patricio; Soudry, Daniel

    2012-01-01

    Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320

  4. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

    Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biassed - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580

  5. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, in particular the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.

  6. On the origins of approximations for stochastic chemical kinetics.

    PubMed

    Haseltine, Eric L; Rawlings, James B

    2005-10-22

    This paper considers the derivation of approximations for stochastic chemical kinetics governed by the discrete master equation. Here, the concepts of (1) partitioning on the basis of fast and slow reactions as opposed to fast and slow species and (2) conditional probability densities are used to derive approximate, partitioned master equations, which are Markovian in nature, from the original master equation. Under different conditions dictated by relaxation time arguments, such approximations give rise to both the equilibrium and hybrid (deterministic or Langevin equations coupled with discrete stochastic simulation) approximations previously reported. In addition, the derivation points out several weaknesses in previous justifications of both the hybrid and equilibrium systems and demonstrates the connection between the original and approximate master equations. Two simple examples illustrate situations in which these two approximate methods are applicable and demonstrate the two methods' efficiencies.
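
    For readers who want a concrete baseline for the approximations discussed above, the sketch below implements the exact stochastic simulation algorithm (Gillespie SSA) for a toy reversible isomerization A <-> B governed by the discrete master equation. The rate constants and initial counts are illustrative assumptions; the paper's partitioned and hybrid schemes are approximations to this kind of exact simulation.

      import math, random

      # Exact Gillespie SSA for A <-> B with assumed rates k_f, k_r.
      k_f, k_r = 1.0, 0.5          # assumed rate constants
      A, B = 100, 0                # assumed initial molecule counts
      t, t_end = 0.0, 10.0

      random.seed(1)
      while t < t_end:
          a1, a2 = k_f * A, k_r * B                    # reaction propensities
          a0 = a1 + a2
          if a0 == 0.0:
              break
          t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
          if random.random() * a0 < a1:                # pick which reaction fires
              A, B = A - 1, B + 1
          else:
              A, B = A + 1, B - 1

      print(f"t = {t:.2f}, A = {A}, B = {B}")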

  7. Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system

    NASA Astrophysics Data System (ADS)

    Keane, R. J.; Plant, R. S.; Tennant, W. J.

    2015-12-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme with a simple stochastic element only, from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  8. Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.

    2014-12-01

    High-resolution ocean models, which are eddy permitting and resolving, require large computing resources to produce centuries worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model, FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on mean climate state and variability in an AOGCM.
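
    The update rule quoted above, T(t) = T(t-1) + ΔT(t) + ξ(t), is simple to prototype. The sketch below applies it to a toy one-dimensional temperature field with AR(1) ("red") Gaussian noise; the grid size, noise amplitude and decorrelation time are illustrative assumptions, not values diagnosed from HiGEM or used in FAMOUS.

      import numpy as np

      # Additive stochastic perturbation of a temperature tendency,
      # T(t) = T(t-1) + dT(t) + xi(t), with AR(1) ("red") noise xi.
      rng = np.random.default_rng(0)
      nx, nt = 32, 100                       # toy grid and number of steps
      sigma, tau, dt = 0.01, 5.0, 1.0        # assumed noise amplitude, decorrelation time
      phi = np.exp(-dt / tau)                # AR(1) coefficient per step

      T = 15.0 * np.ones(nx)                 # toy initial temperature field
      xi = np.zeros(nx)
      for _ in range(nt):
          dT = -0.001 * (T - 15.0)           # stand-in deterministic increment
          xi = phi * xi + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal(nx)
          T = T + dT + xi                    # perturbed update

      print("mean =", T.mean(), " std =", T.std())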

  9. Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi

    2015-04-01

    The aim of this paper is to provide a theoretical framework for uncertainty estimation in rainfall-runoff analysis based on the theory of stochastic processes. SDEs (stochastic differential equations) built on this theory have been widely used in mathematical finance to predict stock price movements, and some researchers in civil engineering have also applied them (e.g. Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies that evaluate the uncertainty of runoff phenomena through a comparison between an SDE and the corresponding Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal evolution of a PDF (probability density function), and it is mathematically equivalent to the corresponding SDE. In this paper, therefore, the uncertainty of discharge arising from the uncertainty of rainfall is explained theoretically and mathematically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is written as an SDE in difference form, because the temporal variation of rainfall is expressed as its average plus a deviation approximated by a Gaussian distribution, based on rainfall observed by rain-gauge stations and a radar rain-gauge system. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results show that the uncertainty of discharge increases as rainfall intensity rises and as the non-linearity of the resistance grows stronger. These results are clarified by the PDFs of discharge that satisfy the Fokker-Planck equation. This means that a reasonable discharge can be estimated based on the theory of stochastic processes, and the approach can be applied to the probabilistic risk assessment of flood management.
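
    The SDE/Fokker-Planck correspondence used above can be illustrated with a generic example that is not the paper's runoff model: simulate an Ornstein-Uhlenbeck SDE with the Euler-Maruyama method and compare the ensemble variance with the stationary solution of the corresponding Fokker-Planck equation, σ²/(2θ). All parameter values here are assumptions chosen for illustration.

      import numpy as np

      # Euler-Maruyama simulation of dX = -theta*(X - mu) dt + sigma dW and a
      # check against the stationary Fokker-Planck variance sigma^2/(2*theta).
      rng = np.random.default_rng(42)
      theta, mu, sigma = 1.0, 0.0, 0.5       # assumed parameters
      dt, n_steps, n_paths = 0.01, 2000, 5000

      X = np.zeros(n_paths)
      for _ in range(n_steps):
          dW = rng.standard_normal(n_paths) * np.sqrt(dt)
          X += -theta * (X - mu) * dt + sigma * dW

      print("empirical variance :", X.var())
      print("Fokker-Planck value:", sigma**2 / (2.0 * theta))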

  10. A stochastic model for density-dependent microwave Snow- and Graupel scattering coefficients of the NOAA JCSDA community radiative transfer model

    NASA Astrophysics Data System (ADS)

    Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.

    2018-05-01

    A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrary variable mass density. This effort is aimed at providing a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference-Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.

  11. A Numerical Method for the Simulation of Skew Brownian Motion and its Application to Diffusive Shock Acceleration of Charged Particles

    NASA Astrophysics Data System (ADS)

    McEvoy, Erica L.

    Stochastic differential equations are becoming a popular tool for modeling the transport and acceleration of cosmic rays in the heliosphere. In diffusive shock acceleration, cosmic rays diffuse across a region of discontinuity where the upstream diffusion coefficient abruptly changes to the downstream value. Because the method of stochastic integration has not yet been developed to handle these types of discontinuities, I utilize methods and ideas from probability theory to develop a conceptual framework for the treatment of such discontinuities. Using this framework, I then produce some simple numerical algorithms that allow one to incorporate and simulate a variety of discontinuities (or boundary conditions) using stochastic integration. These algorithms were then modified to create a new algorithm which incorporates the discontinuous change in diffusion coefficient found in shock acceleration (known as Skew Brownian Motion). The originality of this algorithm lies in the fact that it is the first of its kind to be statistically exact, so that one obtains accuracy without the use of approximations (other than the machine precision error). I then apply this algorithm to model the problem of diffusive shock acceleration, modifying it to incorporate the additional effect of the discontinuous flow speed profile found at the shock. A steady-state solution is obtained that accurately simulates this phenomenon. This result represents a significant improvement over previous approximation algorithms, and will be useful for the simulation of discontinuous diffusion processes in other fields, such as biology and finance.

  12. Stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  13. Relativistic analysis of stochastic kinematics

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor γ(w) = (1 − w²/c²)^(−1/2). Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: The diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to γ(w). The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.

  14. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.

  15. Simulation of ground motion using the stochastic method

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
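
    The core of the stochastic method described above is the pairing of a target amplitude spectrum with a random phase spectrum. The sketch below shows that pairing in its most schematic form, using a placeholder amplitude spectrum and an inverse FFT; it omits the source/path/site spectral model and the duration windowing of Boore (2003), so it illustrates the idea rather than the published procedure.

      import numpy as np

      # Random-phase synthesis: prescribe |A(f)|, draw uniform phases, invert.
      rng = np.random.default_rng(7)
      n, dt = 2048, 0.01
      freqs = np.fft.rfftfreq(n, d=dt)

      amp = freqs / (1.0 + (freqs / 5.0) ** 2)      # placeholder amplitude spectrum
      phase = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
      spectrum = amp * np.exp(1j * phase)
      spectrum[0] = 0.0                             # enforce zero-mean motion

      accel = np.fft.irfft(spectrum, n=n)           # synthetic acceleration trace
      print("peak |a| of synthetic trace:", np.abs(accel).max())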

  16. Evolution and mass extinctions as lognormal stochastic processes

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2014-10-01

    In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History into a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes models). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b>0 rather than at the origin. These b-lognormals were then forced by us to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the `degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller `chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the `trend of progress' in human history. (4) All these results also match with SETI in that the statistical Drake equation (the generalization of the ordinary Drake equation to encompass statistics) leads precisely to the lognormal distribution as the probability distribution for the number of extra-terrestrial civilizations existing in the Galaxy (as a consequence of the central limit theorem of statistics). (5) But the most striking new result is that the well-known `Molecular Clock of Evolution', namely the `constant rate of Evolution at the molecular level' as shown by Kimura's Neutral Theory of Molecular Evolution, identifies with the growth rate of the entropy of our Evo-SETI model, because they both grew linearly in time since the origin of life. (6) Furthermore, we apply our Evo-SETI model to lognormal stochastic processes other than GBMs. For instance, we provide two models for the mass extinctions that occurred in the past: (a) one based on GBMs and (b) the other based on a parabolic mean value capable of covering both the extinction and the subsequent recovery of life forms. (7) Finally, we show that the Markov & Korotayev (2007, 2008) model for Darwinian Evolution identifies with an Evo-SETI model for which the mean value of the underlying lognormal stochastic process is a cubic function of time. In conclusion: we have provided a new mathematical model capable of embracing molecular evolution, SETI and entropy into a simple set of statistical equations based upon b-lognormals and lognormal stochastic processes with arbitrary mean, of which the GBMs are the particular case of exponential growth.
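
    Since the construction above rests on Geometric Brownian Motion, whose pdf is lognormal and whose mean grows (or decays) exponentially, a minimal simulation makes those two properties concrete. The parameter values below are arbitrary assumptions, not fitted Evo-SETI parameters.

      import numpy as np

      # GBM: N(t) = N0 * exp((mu - s^2/2) t + s W(t)); lognormal pdf,
      # exponential mean N0 * exp(mu * t).
      rng = np.random.default_rng(3)
      N0, mu, s, t, n_paths = 1.0, 0.05, 0.2, 10.0, 100_000

      W = rng.standard_normal(n_paths) * np.sqrt(t)
      N = N0 * np.exp((mu - 0.5 * s**2) * t + s * W)

      print("sample mean      :", N.mean())
      print("theoretical mean :", N0 * np.exp(mu * t))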

  17. Dynamical stochastic processes of returns in financial markets

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Yoon, Seong-Min; Jung, Jae-Won; Kim, Kyungsik

    2007-03-01

    We study the evolution of the probability distribution functions of returns, from the tick data of the Korean treasury bond (KTB) futures and the S&P 500 stock index, which can be described by means of the Fokker-Planck equation. We show that the Fokker-Planck equation and the Langevin equation can be obtained directly from the empirical data via the estimated Kramers-Moyal coefficients. By analyzing the statistics of the returns, we present quantitatively the deterministic and random influences on financial time series for both markets, for which we can give a simple physical interpretation. We particularly focus on the diffusion coefficient, which may be important for the creation of a portfolio.
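
    The Kramers-Moyal estimation step referred to above can be sketched on synthetic data where the answer is known. Below, an Ornstein-Uhlenbeck series stands in for the return series; the binned conditional moments should recover the drift −θx and the diffusion σ²/2. The process, bin edges and sample size are illustrative assumptions, not the paper's market data.

      import numpy as np

      # Estimate Kramers-Moyal coefficients D1(x) ~ E[dX|x]/dt and
      # D2(x) ~ E[dX^2|x]/(2 dt) from a synthetic OU time series.
      rng = np.random.default_rng(11)
      theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

      x = np.zeros(n)
      for i in range(1, n):
          x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

      dx = np.diff(x)
      bins = np.linspace(-1.0, 1.0, 21)
      idx = np.digitize(x[:-1], bins)
      for b in (5, 10, 15):                        # a few interior bins
          sel = idx == b
          xc = 0.5 * (bins[b-1] + bins[b])
          D1 = dx[sel].mean() / dt
          D2 = (dx[sel] ** 2).mean() / (2.0 * dt)
          print(f"x~{xc:+.2f}: D1={D1:+.3f} (expect {-theta*xc:+.3f}), "
                f"D2={D2:.3f} (expect {sigma**2/2:.3f})")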

  18. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  19. Ranging through Gabor logons-a consistent, hierarchical approach.

    PubMed

    Chang, C; Chatterjee, S

    1993-01-01

    In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.

  20. Stochastic recruitment leads to symmetry breaking in foraging populations

    NASA Astrophysics Data System (ADS)

    Biancalani, Tommaso; Dyson, Louise; McKane, Alan

    2014-03-01

    When an ant colony is faced with two identical equidistant food sources, the foraging ants are found to concentrate more on one source than the other. Analogous symmetry-breaking behaviours have been reported in various population systems (such as queueing or stock market trading), suggesting the existence of a simple universal mechanism. Past studies have neglected the effect of demographic noise and required rather complicated models to qualitatively reproduce this behaviour. I will show how including the effects of demographic noise leads to a radically different conclusion. The symmetry-breaking arises solely due to the process of recruitment and ceases to occur for large population sizes. The latter fact provides a testable prediction for a real system.

  1. Modelling the evolution and diversity of cumulative culture

    PubMed Central

    Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo

    2011-01-01

    Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845

  2. The Common Patterns of Nature

    PubMed Central

    Frank, Steven A.

    2010-01-01

    We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344

  3. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.

    PubMed

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P

    2014-03-04

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.

  4. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
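
    The Poisson lesion count with mean αD + βD² derived above implies the linear-quadratic survival law S(D) = exp(−(αD + βD²)), since survival corresponds to zero lethal lesions. The sketch below checks this identity by Monte Carlo; the α and β values are illustrative assumptions, not fitted radiobiological constants.

      import numpy as np

      # Survival = P(0 lethal lesions) for lesions ~ Poisson(alpha*D + beta*D^2).
      rng = np.random.default_rng(5)
      alpha, beta = 0.2, 0.05                 # assumed Gy^-1 and Gy^-2
      for D in (1.0, 2.0, 4.0):
          lam = alpha * D + beta * D**2
          lesions = rng.poisson(lam, size=1_000_000)
          print(f"D={D} Gy: MC survival={np.mean(lesions == 0):.4f}  "
                f"exp(-(aD+bD^2))={np.exp(-lam):.4f}")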

  5. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  6. Some remarks on quantum physics, stochastic processes, and nonlinear filtering theory

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam

    2016-05-01

    The mathematical similarities between quantum mechanics and stochastic processes have been studied in the literature. Some of the major results are reviewed, such as the relationship between the Fokker-Planck equation and the Schrödinger equation. Also reviewed are more recent results that show the mathematical similarities between quantum many-particle systems and concepts in other areas of applied science, such as stochastic Petri nets. Some connections to filtering theory are discussed.

  7. Stochastic lattice model of synaptic membrane protein domains.

    PubMed

    Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A

    2017-05-01

    Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.

  8. Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong

    2014-05-30

    The currently existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed-form solutions; therefore, stochastic Monte-Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the number of molecules is small and of the same order. Extending Delbrück-Gillespie's theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at the nanometric and mesoscopic levels, such as in a single biological cell.

  9. Selfish routing equilibrium in stochastic traffic network: A probability-dominant description.

    PubMed

    Zhang, Wenyi; He, Zhengbing; Guan, Wei; Ma, Rui

    2017-01-01

    This paper suggests a probability-dominant user equilibrium (PdUE) model to describe the selfish routing equilibrium in a stochastic traffic network. At PdUE, travel demands are only assigned to the most dominant routes in the same origin-destination pair. A probability-dominant rerouting dynamic model is proposed to explain the behavioral mechanism of PdUE. To facilitate applications, the logit formula of PdUE is developed, for which a well-designed route set is not indispensable and the equivalent variational inequality formulation is simple. Two routing strategies, i.e., the probability-dominant strategy (PDS) and the dominant probability strategy (DPS), are discussed through a hypothetical experiment. It is found that, whether out of insurance or striving for perfection, PDS is a better choice than DPS. For more general cases, the conducted numerical tests lead to the same conclusion. These results imply that PdUE (rather than the conventional stochastic user equilibrium) is a desirable selfish routing equilibrium for a stochastic network, given that the probability distributions of travel time are available to travelers.

  10. Selfish routing equilibrium in stochastic traffic network: A probability-dominant description

    PubMed Central

    Zhang, Wenyi; Guan, Wei; Ma, Rui

    2017-01-01

    This paper suggests a probability-dominant user equilibrium (PdUE) model to describe the selfish routing equilibrium in a stochastic traffic network. At PdUE, travel demands are only assigned to the most dominant routes in the same origin-destination pair. A probability-dominant rerouting dynamic model is proposed to explain the behavioral mechanism of PdUE. To facilitate applications, the logit formula of PdUE is developed, for which a well-designed route set is not indispensable and the equivalent variational inequality formulation is simple. Two routing strategies, i.e., the probability-dominant strategy (PDS) and the dominant probability strategy (DPS), are discussed through a hypothetical experiment. It is found that, whether out of insurance or striving for perfection, PDS is a better choice than DPS. For more general cases, the conducted numerical tests lead to the same conclusion. These results imply that PdUE (rather than the conventional stochastic user equilibrium) is a desirable selfish routing equilibrium for a stochastic network, given that the probability distributions of travel time are available to travelers. PMID:28829834

  11. Exploring information transmission in gene networks using stochastic simulation and machine learning

    NASA Astrophysics Data System (ADS)

    Park, Kyemyung; Prüstel, Thorsten; Lu, Yong; Narayanan, Manikandan; Martins, Andrew; Tsang, John

    How gene regulatory networks operate robustly despite environmental fluctuations and biochemical noise is a fundamental question in biology. Mathematically the stochastic dynamics of a gene regulatory network can be modeled using chemical master equation (CME), but nonlinearity and other challenges render analytical solutions of CMEs difficult to attain. While approaches of approximation and stochastic simulation have been devised for simple models, obtaining a more global picture of a system's behaviors in high-dimensional parameter space without simplifying the system substantially remains a major challenge. Here we present a new framework for understanding and predicting the behaviors of gene regulatory networks in the context of information transmission among genes. Our approach uses stochastic simulation of the network followed by machine learning of the mapping between model parameters and network phenotypes such as information transmission behavior. We also devised ways to visualize high-dimensional phase spaces in intuitive and informative manners. We applied our approach to several gene regulatory circuit motifs, including both feedback and feedforward loops, to reveal underexplored aspects of their operational behaviors. This work is supported by the Intramural Program of NIAID/NIH.

  12. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.

  13. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site, but an extension to multiple sites is illustrated, which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
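
    A doubly stochastic Poisson (Cox) process of the kind described above can be simulated by first drawing a random intensity path and then thinning a homogeneous Poisson stream against it. The sketch below uses a two-state on/off Markov intensity as the random intensity; the intensity model and the rates are illustrative assumptions, not the fitted rainfall model of the paper.

      import numpy as np

      # Cox process: random on/off intensity, then thinning against its maximum.
      rng = np.random.default_rng(9)
      t_end, lam_on, switch = 100.0, 2.0, 0.1    # assumed rates

      # 1) simulate the on/off intensity process
      t, state, changes = 0.0, 0, [(0.0, 0)]
      while t < t_end:
          t += rng.exponential(1.0 / switch)
          state = 1 - state
          changes.append((t, state))

      def intensity(s):
          lam = 0.0
          for tc, st in changes:
              if tc <= s:
                  lam = lam_on * st
          return lam

      # 2) thin a rate-lam_on Poisson stream by the instantaneous intensity
      events, t = [], 0.0
      while t < t_end:
          t += rng.exponential(1.0 / lam_on)
          if t < t_end and rng.random() < intensity(t) / lam_on:
              events.append(t)

      print("number of events:", len(events))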

  14. Markovian limit for a reduced operation-valued stochastic process

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto

    1987-04-01

    Operation-valued stochastic processes give a formalization of the concept of continuous (in time) measurements in quantum mechanics. In this article, a first stage M of a measuring apparatus coupled to the system S is explicitly introduced, and continuous measurement of some observables of M is considered (one can speak of an indirect continuous measurement on S). When the degrees of freedom of the measuring apparatus M are eliminated and the weak coupling limit is taken, it is shown that an operation-valued stochastic process describing a direct continuous observation of the system S is obtained.

  15. Models for interrupted monitoring of a stochastic process

    NASA Technical Reports Server (NTRS)

    Palmer, E.

    1977-01-01

    As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.

  16. Anomalous diffusion and scaling in coupled stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bel, Golan; Nemenman, Ilya

    2009-01-01

    Inspired by problems in biochemical kinetics, we study statistical properties of an overdamped Langevin process whose friction coefficient depends on the state of a similar, unobserved process. Integrating out the latter, we derive the Fokker-Planck equation for the probability distribution of the former. This has the form of a diffusion equation with a time-dependent diffusion coefficient, resulting in anomalous diffusion. The diffusion exponent cannot be predicted using a simple scaling argument, and anomalous scaling appears as well. The diffusion exponent of the Weiss-Havlin comb model is derived as a special case, and the same exponent holds even for weakly coupled processes. We compare our theoretical predictions with numerical simulations and find excellent agreement. The findings caution against treating biochemical systems with unobserved dynamical degrees of freedom by means of a standard diffusive Langevin description.

  17. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  18. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.

  19. Bayesian Estimation and Inference Using Stochastic Electronics

    PubMed Central

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André

    2016-01-01

In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream. PMID:27047326

  20. Bayesian Estimation and Inference Using Stochastic Electronics.

    PubMed

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
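
    As a minimal software sketch (not the authors' hardware implementation), the Bayesian recursion that BEAST solves can be illustrated on a discrete one-dimensional position grid: predict with an assumed transition model, then update with the likelihood of a noisy observation. The grid size, transition model, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 21                                    # number of discrete positions (assumed grid size)
positions = np.arange(N)

# Assumed transition model: the target stays put or moves one step left/right.
T = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, i + 1):
        if 0 <= j < N:
            T[i, j] = 1.0
T /= T.sum(axis=1, keepdims=True)

sigma = 2.0                               # assumed sensor noise (grid units)
def likelihood(obs):
    return np.exp(-0.5 * ((positions - obs) / sigma) ** 2)

belief = np.full(N, 1.0 / N)              # uniform prior over positions
true_pos = 10
for _ in range(50):
    # simulate the hidden target and one noisy observation
    true_pos = int(np.clip(true_pos + rng.integers(-1, 2), 0, N - 1))
    obs = true_pos + rng.normal(0.0, sigma)
    # Bayesian recursive equation: predict with T, then update with the likelihood
    belief = T.T @ belief
    belief *= likelihood(obs)
    belief /= belief.sum()

print("true position:", true_pos, " MAP estimate:", int(np.argmax(belief)))
```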

  1. Recursively constructing analytic expressions for equilibrium distributions of stochastic biochemical reaction networks.

    PubMed

    Meng, X Flora; Baetica, Ania-Ariadna; Singhal, Vipul; Murray, Richard M

    2017-05-01

    Noise is often indispensable to key cellular activities, such as gene expression, necessitating the use of stochastic models to capture its dynamics. The chemical master equation (CME) is a commonly used stochastic model of Kolmogorov forward equations that describe how the probability distribution of a chemically reacting system varies with time. Finding analytic solutions to the CME can have benefits, such as expediting simulations of multiscale biochemical reaction networks and aiding the design of distributional responses. However, analytic solutions are rarely known. A recent method of computing analytic stationary solutions relies on gluing simple state spaces together recursively at one or two states. We explore the capabilities of this method and introduce algorithms to derive analytic stationary solutions to the CME. We first formally characterize state spaces that can be constructed by performing single-state gluing of paths, cycles or both sequentially. We then study stochastic biochemical reaction networks that consist of reversible, elementary reactions with two-dimensional state spaces. We also discuss extending the method to infinite state spaces and designing the stationary behaviour of stochastic biochemical reaction networks. Finally, we illustrate the aforementioned ideas using examples that include two interconnected transcriptional components and biochemical reactions with two-dimensional state spaces. © 2017 The Author(s).

  2. Stochastic simulation of biological reactions, and its applications for studying actin polymerization.

    PubMed

    Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru

    2010-11-30

Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
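
    A hedged sketch of this style of simulation: A and B molecules random-walk in a two-dimensional box and form C whenever a pair comes within a collision radius and a Bernoulli trial succeeds. The paper's formula converting the macroscopic rate constant k into the microscopic reaction probability is not reproduced here, so the collision radius and per-encounter probability below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, steps, dt, D = 10.0, 2000, 0.01, 1.0       # box size, steps, time step, diffusion coeff. (assumed)
r_collide, p_react = 0.2, 0.5                 # collision radius and per-encounter probability (assumed)

A = rng.uniform(0, L, size=(200, 2))          # positions of A molecules
B = rng.uniform(0, L, size=(200, 2))          # positions of B molecules
nC = 0

for _ in range(steps):
    # Brownian random walk with periodic boundaries (wraparound distances ignored for simplicity)
    A = (A + rng.normal(0.0, np.sqrt(2 * D * dt), A.shape)) % L
    B = (B + rng.normal(0.0, np.sqrt(2 * D * dt), B.shape)) % L
    if len(A) == 0 or len(B) == 0:
        break
    # find A-B pairs within the collision radius and let each molecule react at most once
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    hit_A, hit_B = set(), set()
    for i, j in zip(*np.where(d < r_collide)):
        if i in hit_A or j in hit_B:
            continue
        if rng.random() < p_react:
            hit_A.add(i)
            hit_B.add(j)
    if hit_A:
        nC += len(hit_A)
        A = np.delete(A, list(hit_A), axis=0)
        B = np.delete(B, list(hit_B), axis=0)

print(f"remaining A: {len(A)}, remaining B: {len(B)}, produced C: {nC}")
```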

  3. Recursively constructing analytic expressions for equilibrium distributions of stochastic biochemical reaction networks

    PubMed Central

    Baetica, Ania-Ariadna; Singhal, Vipul; Murray, Richard M.

    2017-01-01

    Noise is often indispensable to key cellular activities, such as gene expression, necessitating the use of stochastic models to capture its dynamics. The chemical master equation (CME) is a commonly used stochastic model of Kolmogorov forward equations that describe how the probability distribution of a chemically reacting system varies with time. Finding analytic solutions to the CME can have benefits, such as expediting simulations of multiscale biochemical reaction networks and aiding the design of distributional responses. However, analytic solutions are rarely known. A recent method of computing analytic stationary solutions relies on gluing simple state spaces together recursively at one or two states. We explore the capabilities of this method and introduce algorithms to derive analytic stationary solutions to the CME. We first formally characterize state spaces that can be constructed by performing single-state gluing of paths, cycles or both sequentially. We then study stochastic biochemical reaction networks that consist of reversible, elementary reactions with two-dimensional state spaces. We also discuss extending the method to infinite state spaces and designing the stationary behaviour of stochastic biochemical reaction networks. Finally, we illustrate the aforementioned ideas using examples that include two interconnected transcriptional components and biochemical reactions with two-dimensional state spaces. PMID:28566513

  4. Sparse Learning with Stochastic Composite Optimization.

    PubMed

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel powerful sparse online-to-batch conversion to the general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experiment results show that our methods outperform the existing methods in their ability to learn sparse solutions, while also improving the high probability bound to approximately O(log(log(T)/δ)/λT).
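
    The proposed scheme itself is not reproduced here, but a hedged sketch of the basic ingredient it builds on, proximal stochastic gradient descent for an l1-regularized least-squares problem, shows why a proximal (soft-thresholding) step is what yields exactly sparse iterates. The data, step sizes, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 50
w_true = np.zeros(d)
w_true[:5] = rng.normal(0.0, 1.0, 5)              # sparse ground truth (assumed)
X = rng.normal(0.0, 1.0, (n, d))
y = X @ w_true + 0.1 * rng.normal(0.0, 1.0, n)
lam = 0.5                                         # l1 regularization weight (assumed)

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 -- this is what sets coordinates exactly to zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(d)
for t in range(1, 5001):
    idx = rng.integers(0, n, size=10)             # minibatch stochastic gradient of the smooth part
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    eta = 0.1 / np.sqrt(t)                        # decaying step size (illustrative)
    w = soft_threshold(w - eta * grad, eta * lam) # gradient step followed by the prox of the l1 term

print("nonzero coordinates:", int(np.count_nonzero(w)), "out of", d)
```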

  5. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2014-03-04

We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  6. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, J.; Ristorcelli, J. R.

We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
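
    As a hedged one-dimensional illustration of the kind of process these constraints describe (not the constraints themselves): a neutral Wright-Fisher diffusion for a fraction X, with 1 - X as the second variable, integrated by Euler-Maruyama. The nonlinear diffusion coefficient sqrt(X(1-X)) vanishes at the boundaries, which is what keeps realizations inside the allowed sample space; the effective population size and time step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_eff, dt, steps = 100.0, 0.01, 5000      # effective population size and time step (assumed)
x = 0.5                                   # initial fraction; the complementary variable is 1 - x

for _ in range(steps):
    # neutral Wright-Fisher diffusion: dX = sqrt(X(1 - X) / N_eff) dW
    dw = rng.normal(0.0, np.sqrt(dt))
    x += np.sqrt(max(x * (1.0 - x), 0.0) / N_eff) * dw
    x = min(max(x, 0.0), 1.0)             # guard against discretization overshoot at the boundaries

print("final fractions:", round(x, 4), round(1.0 - x, 4), " sum:", round(x + (1.0 - x), 4))
```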

  7. Stochastic Models for Precipitable Water in Convection

    NASA Astrophysics Data System (ADS)

    Leung, Kimberly

    Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility made the first systematic and high-resolution observations of PWV at Darwin, Australia since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. This important observed data at high resolution can capture the fractal behavior of PWV and enables stochastic exploration into the next generation of climate models which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends. Results indicate that simulation of a highly volatile time series of observed PWV is certainly achievable and has potential to improve prediction capabilities in climate modeling. Further, this study demonstrates the feasibility of adding a mathematics- and statistics-based stochastic scheme to an existing deterministic parameterization to simulate observed fractal behavior.

  8. Multivariate moment closure techniques for stochastic kinetic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.

    2015-09-07

Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
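
    As a concrete example of what such a closure does (the multivariate gamma and lognormal closures used in the paper are analogous but algebraically more involved), the multivariate normal closure sets every third central moment to zero, which expresses third-order raw moments through means and covariances:

```latex
% Multivariate normal (Gaussian) moment closure: third central moments vanish, so
% third-order raw moments are written in terms of the first two moments.
\langle x_i x_j x_k \rangle \approx
      \langle x_i x_j \rangle \langle x_k \rangle
    + \langle x_i x_k \rangle \langle x_j \rangle
    + \langle x_j x_k \rangle \langle x_i \rangle
    - 2\,\langle x_i \rangle \langle x_j \rangle \langle x_k \rangle
```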

  9. Stochastic Models for Precipitable Water in Convection

    NASA Astrophysics Data System (ADS)

    Leung, Kimberly

    Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility made the first systematic and high-resolution observations of PWV at Darwin, Australia since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. This important observed data at high resolution can capture the fractal behavior of PWV and enables stochastic exploration into the next generation of climate models which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends.

  10. The critical domain size of stochastic population models.

    PubMed

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
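
    A hedged sketch of the branching-process idea: each individual leaves a Poisson number of offspring, only a fraction of which (increasing with domain size) settle back inside the domain, and persistence is estimated by Monte Carlo. The offspring mean and the form of the retention probability are illustrative assumptions, not the models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
R0 = 2.0               # mean offspring per individual before dispersal losses (assumed)
dispersal_scale = 3.0  # length scale controlling how retention grows with domain size (assumed)

def persistence_probability(L, generations=50, trials=2000):
    """Monte Carlo estimate that a lineage founded by one individual is still alive."""
    p_retain = L / (L + dispersal_scale)          # illustrative retention probability
    alive = 0
    for _ in range(trials):
        n = 1
        for _ in range(generations):
            if n == 0:
                break
            # Galton-Watson step: Poisson offspring thinned by retention inside the domain
            n = rng.poisson(R0 * p_retain * n)
        alive += (n > 0)
    return alive / trials

for L in (1.0, 3.0, 5.0, 10.0):
    # persistence switches on once the effective offspring mean R0 * p_retain exceeds 1 (here near L = 3)
    print(f"domain size {L:4.1f}: persistence probability ≈ {persistence_probability(L):.2f}")
```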

  11. Time-ordered product expansions for computational stochastic system biology.

    PubMed

    Mjolsness, Eric

    2013-06-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
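
    For reference, a compact sketch of the Gillespie SSA that the expansion rederives, applied to a simple birth-death network (production at rate k1, degradation at rate k2 per molecule); the rate constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
k1, k2 = 10.0, 0.5                    # birth rate and per-molecule death rate (assumed)
x, t, t_end = 0, 0.0, 50.0

while t < t_end:
    a = np.array([k1, k2 * x])        # propensities of birth and death
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)    # exponentially distributed waiting time to the next event
    if rng.random() < a[0] / a0:      # pick which reaction fires, proportionally to its propensity
        x += 1
    else:
        x -= 1

print("final copy number:", x, " (stationary mean k1/k2 =", k1 / k2, ")")
```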

  12. The Two-On-One Stochastic Duel

    DTIC Science & Technology

    1983-12-01

The Two-On-One Stochastic Duel. Final report TRASANA-TR-43-83 (ACN 67500), prepared by A.V. Gafarian and C.J. Ancker, Jr., December 1983. Subject terms: stochastic duels, stochastic processes, and attrition.

  13. Switchable genetic oscillator operating in quasi-stable mode

    PubMed Central

    Strelkowa, Natalja; Barahona, Mauricio

    2010-01-01

    Ring topologies of repressing genes have qualitatively different long-term dynamics if the number of genes is odd (they oscillate) or even (they exhibit bistability). However, these attractors may not fully explain the observed behaviour in transient and stochastic environments such as the cell. We show here that even repressilators possess quasi-stable, travelling wave periodic solutions that are reachable, long-lived and robust to parameter changes. These solutions underlie the sustained oscillations observed in even rings in the stochastic regime, even if these circuits are expected to behave as switches. The existence of such solutions can also be exploited for control purposes: operation of the system around the quasi-stable orbit allows us to turn on and off the oscillations reliably and on demand. We illustrate these ideas with a simple protocol based on optical interference that can induce oscillations robustly both in the stochastic and deterministic regimes. PMID:20097721

  14. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Mixed Poisson distributions in exact solutions of stochastic autoregulation models.

    PubMed

    Iyer-Biswas, Srividya; Jayaprakash, C

    2014-11-01

    In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
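
    A small numerical illustration of the mixed-Poisson idea (not the exact solutions of the paper): drawing the Poisson rate itself from a gamma distribution produces a negative-binomial copy-number distribution whose Fano factor exceeds the Poisson value of one. The gamma shape and scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
shape, scale = 2.0, 5.0                        # gamma mixing parameters (assumed)

lam = rng.gamma(shape, scale, size=200_000)    # cell-to-cell random transcription rate
n = rng.poisson(lam)                           # copy number drawn from Poisson(rate) in each cell

mean, var = n.mean(), n.var()
print(f"mean {mean:.2f}, variance {var:.2f}, Fano factor {var / mean:.2f}")
# A pure Poisson would give a Fano factor of 1; the gamma mixture gives 1 + scale = 6.
```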

  16. Evaluation of the Plant-Craig stochastic convection scheme (v2.0) in the ensemble forecasting system MOGREPS-R (24 km) based on the Unified Model (v7.3)

    NASA Astrophysics Data System (ADS)

    Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.

    2016-05-01

The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  17. Stochastic sensitivity measure for mistuned high-performance turbines

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Pierre, Christophe

    1992-01-01

A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.

  18. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp; Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610

Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.

  19. Interplay between social debate and propaganda in an opinion formation model

    NASA Astrophysics Data System (ADS)

    Gimenez, M. C.; Revelli, J. A.; Lama, M. S. de la; Lopez, J. M.; Wio, H. S.

    2013-01-01

We introduce a simple model of opinion dynamics in which a modified Sznajd model of two-state agents evolves under the simultaneous action of stochastic driving and a periodic signal. The stochastic effect mimics a social temperature, so the agents may adopt decisions in support of or against some opinion or position, according to a modified Sznajd rule with a varying probability. The external force represents a simplified picture by which society feels the influence of the external effects of propaganda. By means of Monte Carlo simulations we have shown the dynamical interplay between the social condition or mood and the external influence, finding a stochastic resonance-like phenomenon when the noise-to-signal ratio is plotted as a function of the social temperature. In addition, we have also studied the effects of the system size and the external signal strength on the opinion formation dynamics.

  20. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.

  1. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. To this end, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for other variability sources. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  2. Seasonally forced disease dynamics explored as switching between attractors

    NASA Astrophysics Data System (ADS)

    Keeling, Matt J.; Rohani, Pejman; Grenfell, Bryan T.

    2001-01-01

    Biological phenomena offer a rich diversity of problems that can be understood using mathematical techniques. Three key features common to many biological systems are temporal forcing, stochasticity and nonlinearity. Here, using simple disease models compared to data, we examine how these three factors interact to produce a range of complicated dynamics. The study of disease dynamics has been amongst the most theoretically developed areas of mathematical biology; simple models have been highly successful in explaining the dynamics of a wide variety of diseases. Models of childhood diseases incorporate seasonal variation in contact rates due to the increased mixing during school terms compared to school holidays. This ‘binary’ nature of the seasonal forcing results in dynamics that can be explained as switching between two nonlinear spiral sinks. Finally, we consider the stability of the attractors to understand the interaction between the deterministic dynamics and demographic and environmental stochasticity. Throughout attention is focused on the behaviour of measles, whooping cough and rubella.
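
    A hedged sketch of the 'binary' seasonal forcing described above: an SIR model whose contact rate switches between term-time and holiday values, integrated with a plain Euler scheme. All parameter values and the school calendar are illustrative assumptions, not fitted to measles or whooping cough data.

```python
import numpy as np

# illustrative parameters, in units of per year
beta_term, beta_holiday = 1250.0, 900.0   # contact rates during school term vs. holidays (assumed)
gamma = 73.0                              # recovery rate (about a 5-day infectious period)
mu = 0.02                                 # birth and death rate
dt = 1.0 / 3650.0                         # time step of roughly 0.1 day

def beta(t):
    # crude term-time switch: "holidays" for the last quarter of each year (assumed calendar)
    return beta_holiday if (t % 1.0) > 0.75 else beta_term

S, I = 0.06, 0.001                        # initial susceptible and infectious fractions
for step in range(int(40 / dt)):          # integrate for 40 years
    t = step * dt
    dS = mu - beta(t) * S * I - mu * S
    dI = beta(t) * S * I - gamma * I - mu * I
    S, I = S + dt * dS, I + dt * dI

print(f"susceptible fraction {S:.4f}, infectious fraction {I:.6f}")
```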

  3. Nonequilibrium Langevin dynamics: A demonstration study of shear flow fluctuations in a simple fluid

    NASA Astrophysics Data System (ADS)

    Belousov, Roman; Cohen, E. G. D.; Rondoni, Lamberto

    2017-08-01

    The present paper is based on a recent success of the second-order stochastic fluctuation theory in describing time autocorrelations of equilibrium and nonequilibrium physical systems. In particular, it was shown to yield values of the related deterministic parameters of the Langevin equation for a Couette flow in a microscopic molecular dynamics model of a simple fluid. In this paper we find all the remaining constants of the stochastic dynamics, which then is simulated numerically and compared directly with the original physical system. By using these data, we study in detail the accuracy and precision of a second-order Langevin model for nonequilibrium physical systems theoretically and computationally. We find an intriguing relation between an applied external force and cumulants of the resulting flow fluctuations. This is characterized by a linear dependence of an athermal cumulant ratio, an apposite quantity introduced here. In addition, we discuss how the order of a given Langevin dynamics can be raised systematically by introducing colored noise.

  4. Derivation of kinetic equations from non-Wiener stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Basharov, A. M.

    2013-12-01

Kinetic differential-difference equations containing terms with fractional derivatives and describing α-stable Lévy processes with 0 < α < 1 have been derived in a unified manner in terms of one-dimensional stochastic differential equations controlled solely by Poisson processes.

  5. Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Ross, Brian; Imada, Janine

    Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.

  6. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    PubMed

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.

  7. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

In the present work, an accurate method called the Riccati-Bernoulli Sub-ODE technique is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. The effect of the random input on the stability of the stochastic process solution is also studied.

  8. Photoresist and stochastic modeling

    NASA Astrophysics Data System (ADS)

    Hansen, Steven G.

    2018-01-01

    Analysis of physical modeling results can provide unique insights into extreme ultraviolet stochastic variation, which augment, and sometimes refute, conclusions based on physical intuition and even wafer experiments. Simulations verify the primacy of "imaging critical" counting statistics (photons, electrons, and net acids) and the image/blur-dependent dose sensitivity in describing the local edge or critical dimension variation. But the failure of simple counting when resist thickness is varied highlights a limitation of this exact analytical approach, so a calibratable empirical model offers useful simplicity and convenience. Results presented here show that a wide range of physical simulation results can be well matched by an empirical two-parameter model based on blurred image log-slope (ILS) for lines/spaces and normalized ILS for holes. These results are largely consistent with a wide range of published experimental results; however, there is some disagreement with the recently published dataset of De Bisschop. The present analysis suggests that the origin of this model failure is an unexpected blurred ILS:dose-sensitivity relationship failure in that resist process. It is shown that a photoresist mechanism based on high photodecomposable quencher loading and high quencher diffusivity can give rise to pitch-dependent blur, which may explain the discrepancy.

  9. Relaxation and coarsening of weakly-interacting breathers in a simplified DNLS chain

    NASA Astrophysics Data System (ADS)

    Iubini, Stefano; Politi, Antonio; Politi, Paolo

    2017-07-01

    The discrete nonlinear Schrödinger (DNLS) equation displays a parameter region characterized by the presence of localized excitations (breathers). While their formation is well understood and it is expected that the asymptotic configuration comprises a single breather on top of a background, it is not clear why the dynamics of a multi-breather configuration is essentially frozen. In order to investigate this question, we introduce simple stochastic models, characterized by suitable conservation laws. We focus on the role of the coupling strength between localized excitations and background. In the DNLS model, higher breathers interact more weakly, as a result of their faster rotation. In our stochastic models, the strength of the coupling is controlled directly by an amplitude-dependent parameter. In the case of a power-law decrease, the associated coarsening process undergoes a slowing down if the decay rate is larger than a critical value. In the case of an exponential decrease, a freezing effect is observed that is reminiscent of the scenario observed in the DNLS. This last regime arises spontaneously when direct energy diffusion between breathers and background is blocked below a certain threshold.

  10. Stochastic model for gene transcription on Drosophila melanogaster embryos

    NASA Astrophysics Data System (ADS)

    Prata, Guilherme N.; Hornos, José Eduardo M.; Ramos, Alexandre F.

    2016-02-01

We examine immunostaining experimental data for the formation of stripe 2 of even-skipped (eve) transcripts on D. melanogaster embryos. An estimate of the factor converting immunofluorescence intensity units into molecular numbers is given. The analysis of the eve dynamics at the region of stripe 2 suggests that the promoter site of the gene has two distinct regimes: an earlier phase when it is predominantly activated until a critical time when it becomes mainly repressed. This motivates a stochastic binary model for gene transcription on D. melanogaster embryos. Our model has two random variables: the transcript number and the state of the mRNA source, given as active or repressed. We are able to reproduce available experimental data for the average number of transcripts. An analysis of the random fluctuations in the number of eve transcripts and their consequences on the spatial precision of stripe 2 is presented. We show that the positions of the anterior and posterior borders fluctuate around their average positions by ~1% of the embryo length, which is similar to what is found experimentally. The fitting of data by such a simple model suggests that it can be useful to understand the functions of randomness during developmental processes.
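
    A sketch of a two-state ('telegraph') transcription model of this kind, simulated with the Gillespie algorithm: the promoter switches between active and repressed, transcribes only while active, and transcripts decay. All rates are illustrative assumptions, not values fitted to the eve stripe 2 data.

```python
import numpy as np

rng = np.random.default_rng(7)
k_on, k_off = 0.5, 0.2        # promoter activation / repression rates (assumed)
k_tx, k_deg = 20.0, 1.0       # transcription rate when active, transcript decay rate (assumed)

state, m = 0, 0               # state: 0 = repressed, 1 = active; m = transcript count
t, t_end, tm_accum = 0.0, 500.0, 0.0
while t < t_end:
    rates = np.array([
        k_on if state == 0 else k_off,   # promoter switching
        k_tx if state == 1 else 0.0,     # transcription (active state only)
        k_deg * m,                       # transcript degradation
    ])
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    tm_accum += m * dt                   # time-weighted transcript count
    t += dt
    r = rng.random() * total
    if r < rates[0]:
        state = 1 - state
    elif r < rates[0] + rates[1]:
        m += 1
    else:
        m -= 1

print("time-averaged transcript number ≈", round(tm_accum / t, 2),
      "  (telegraph-model mean k_tx/k_deg * k_on/(k_on+k_off) =",
      round(k_tx / k_deg * k_on / (k_on + k_off), 2), ")")
```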

  11. A solvable model of Vlasov-kinetic plasma turbulence in Fourier-Hermite phase space

    NASA Astrophysics Data System (ADS)

    Adkins, T.; Schekochihin, A. A.

    2018-02-01

A class of simple kinetic systems is considered, described by the one-dimensional Vlasov-Landau equation with Poisson or Boltzmann electrostatic response and an energy source. Assuming a stochastic electric field, a solvable model is constructed for the phase-space turbulence of the particle distribution. The model is a kinetic analogue of the Kraichnan-Batchelor model of chaotic advection. The solution of the model is found in Fourier-Hermite space and shows that the free-energy flux from low to high Hermite moments is suppressed, with phase mixing cancelled on average by anti-phase-mixing (stochastic plasma echo). This implies that Landau damping is an ineffective route to dissipation (i.e. to thermalisation of electric energy via velocity space). The full Fourier-Hermite spectrum is derived. Its asymptotics are m^(-3/2) at low wavenumbers and high Hermite moments, and m^(-1/2) k^(-2) at low Hermite moments and high wavenumbers. These conclusions hold at wavenumbers below a certain cutoff (analogue of Kolmogorov scale), which increases with the amplitude of the stochastic electric field and scales as inverse square of the collision rate. The energy distribution and flows in phase space are a simple and, therefore, useful example of competition between phase mixing and nonlinear dynamics in kinetic turbulence, reminiscent of more realistic but more complicated multi-dimensional systems that have not so far been amenable to complete analytical solution.

  12. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

Little is known about the factors affecting the relative influence of stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soils data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force in producing trends in phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  13. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem

    PubMed Central

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.

    2014-01-01

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501

  14. Stochastic evolutionary voluntary public goods game with punishment in a Quasi-birth-and-death process.

    PubMed

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2017-11-23

The traditional replicator dynamics model and the corresponding concept of an evolutionarily stable strategy (ESS) only take into account whether the system can return to the equilibrium after being subjected to a small disturbance. In the real world, due to continuous noise, the ESS of the system may not be stochastically stable. In this paper, a model of the voluntary public goods game with punishment is studied in a stochastic setting. Unlike the existing model, we describe the evolutionary process of strategies in the population as a generalized quasi-birth-and-death process, and we investigate the stochastic stable equilibrium (SSE) instead. Through numerical experiments, we obtain all possible SSEs of the system for any combination of parameters, and investigate the influence of the parameters on the probabilities with which the system selects different equilibria. We find that, in the stochastic setting, the introduction of the punishment and non-participation strategies can change the evolutionary dynamics of the system and the equilibrium of the game. There is a large range of parameters over which the system selects the cooperative states as its SSE with high probability. This result provides insight into, and a control method for, the evolution of cooperation in the public goods game in stochastic situations.
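
    A generic numerical sketch of the underlying computation, finding the limit (stationary) distribution of a small continuous-time Markov chain from its generator matrix; the 3-state generator below is an arbitrary illustration, not the quasi-birth-and-death process constructed in the paper.

```python
import numpy as np

# Arbitrary 3-state generator matrix Q (each row sums to zero); the states could stand for
# strategy compositions of a small population -- the numbers are purely illustrative.
Q = np.array([
    [-0.6,  0.4,  0.2],
    [ 0.3, -0.5,  0.2],
    [ 0.1,  0.6, -0.7],
])

# The stationary distribution pi solves pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary distribution:", np.round(pi, 4), " check pi @ Q:", np.round(pi @ Q, 6))
```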

  15. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities were assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333

  16. Stochastic dynamics and stable equilibrium of evolutionary optional public goods game in finite populations

    NASA Astrophysics Data System (ADS)

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2018-07-01

    Continuous noise caused by mutation is widely present in evolutionary systems. Considering the noise effects and under the optional participation mechanism, a stochastic model for evolutionary public goods game in a finite size population is established. The evolutionary process of strategies in the population is described as a multidimensional ergodic and continuous time Markov process. The stochastic stable state of the system is analyzed by the limit distribution of the stochastic process. By numerical experiments, the influences of the fixed income coefficient for non-participants and the investment income coefficient of the public goods on the stochastic stable equilibrium of the system are analyzed. Through the numerical calculation results, we found that the optional participation mechanism can change the evolutionary dynamics and the equilibrium of the public goods game, and there is a range of parameters which can effectively promote the evolution of cooperation. Further, we obtain the accurate quantitative relationship between the parameters and the probabilities for the system to choose different stable equilibriums, which can be used to realize the control of cooperation.

  17. q-Gaussian distributions and multiplicative stochastic processes for analysis of multiple financial time series

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2010-12-01

This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Using the proposed stochastic differential equation, a method to evaluate the default probability under a given risk buffer is presented.
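
    A hedged Euler-Maruyama sketch of a one-dimensional process of this type, with independent multiplicative and additive Gaussian noises; its stationary density develops power-law (q-Gaussian-like) tails, visible as a clearly positive excess kurtosis in the samples. Parameter values and run length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
gamma, D_mult, D_add = 1.0, 0.2, 0.5     # linear drift and noise intensities (assumed)
dt, burn_in, n_samples = 1e-3, 50_000, 500_000

x, xs = 0.0, np.empty(n_samples)
for i in range(burn_in + n_samples):
    # dx = -gamma x dt + sqrt(2 D_mult) x dW1 + sqrt(2 D_add) dW2  (independent Wiener increments)
    x += (-gamma * x * dt
          + np.sqrt(2.0 * D_mult * dt) * x * rng.normal()
          + np.sqrt(2.0 * D_add * dt) * rng.normal())
    if i >= burn_in:
        xs[i - burn_in] = x

var = xs.var()
excess_kurtosis = np.mean((xs - xs.mean()) ** 4) / var**2 - 3.0
print(f"variance {var:.3f}, excess kurtosis {excess_kurtosis:.2f} (a Gaussian would give 0)")
```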

  18. Modelling the cancer growth process by Stochastic Differential Equations with the effect of Chondroitin Sulfate (CS) as anticancer therapeutics

    NASA Astrophysics Data System (ADS)

    Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina

    2017-09-01

    A stochastic model is introduced to describe the growth of cancer under the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function, and the Euler-Maruyama method is employed to solve the model numerically. The adequacy of the stochastic model is assessed by comparing the simulated results with the experimental data.
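    The paper's exact growth equation is not given in the abstract; the sketch below shows the Euler-Maruyama scheme it mentions, applied to a hypothetical stochastic logistic growth model with a treatment-strength parameter c.

```python
import numpy as np

# Hypothetical stand-in model (not the paper's equation): stochastic logistic
# tumor growth with a treatment term of strength c,
#   dX = [r*X*(1 - X/K) - c*X] dt + sigma*X dW,
# integrated with the Euler-Maruyama scheme mentioned above.
def euler_maruyama(x0, r, K, c, sigma, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        drift = r * x[i] * (1.0 - x[i] / K) - c * x[i]
        x[i + 1] = max(x[i] + drift * dt + sigma * x[i] * dW, 0.0)  # keep size non-negative
    return x

rng = np.random.default_rng(1)
path = euler_maruyama(x0=10.0, r=0.3, K=100.0, c=0.1, sigma=0.05,
                      dt=0.01, n_steps=5000, rng=rng)
print(f"simulated tumor size after {5000 * 0.01:.0f} time units: {path[-1]:.1f}")
```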

  19. Research in Stochastic Processes.

    DTIC Science & Technology

    1982-10-31

    Office of Scientific Research Grant AFOSR F49620 82 C 0009 Period: 1 November 1981 through 31 October 1982 Title: Research in Stochastic Processes Co...STAMATIS CAMBANIS The work briefly described here was developed in connection with problems arising from and related to the statistical communication

  20. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) was initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics during the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but early-stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  1. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry, as well as clustering and heteroskedasticity effects, this paper replaces the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct the stochastic volatility tempered stable Lévy process (TSSV) model. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating the tempered stable process to the stochastic volatility process, capturing the leptokurtosis, fat-tailedness and asymmetry features of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.

  2. Kinetic theory of age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Greenman, Chris D.; Chou, Tom

    2016-01-01

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but are unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Stochastic theories that treat semi-Markov age-dependent processes using, e.g., the Bellman-Harris equation do not resolve a population's age structure and are unable to quantify population-size dependencies. Conversely, current theories that include size-dependent population dynamics (e.g., mathematical models that include carrying capacity such as the logistic equation) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new, fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a Bogoliubov--Born--Green--Kirkwood--Yvon-like hierarchy. Explicit solutions are derived in three limits: no birth, no death, and steady state. These are then compared with their corresponding mean-field results. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution.

  3. Research in Stochastic Processes

    DTIC Science & Technology

    1988-08-31

    stationary sequence, Stochastic Proc. Appl. 29, 1988, 155-169 T. Hsing, J. Husler and M.R. Leadbetter, On the exceedance point process for a stationary...Nandagopalan, On exceedance point processes for "regular" sample functions, Proc. Volume, Oberwolfach Conf. on Extreme Value Theory, J. Husler and R. Reiss...exceedance point processes for stationary sequences under mild oscillation restrictions, Apr. 88. Oberwolfach Conf. on Extreme Value Theory. Ed. J. Husler

  4. Analysis of isothermal and cooling-rate-dependent immersion freezing by a unifying stochastic ice nucleation model

    NASA Astrophysics Data System (ADS)

    Alpert, Peter A.; Knopf, Daniel A.

    2016-02-01

    Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
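    A stripped-down isothermal version of this stochastic picture can be written in a few lines: each droplet survives unfrozen with probability exp(-J_het*A_i*t), and ISA variability alone turns the single-exponential frozen fraction into the non-exponential behavior noted above. The values of J_het and the ISA distribution below are illustrative only.

```python
import numpy as np

# Simplified isothermal sketch: droplet i remains unfrozen with probability
# exp(-J_het * A_i * t), so the ensemble frozen fraction is
#   f(t) = 1 - <exp(-J_het * A_i * t)>.
rng = np.random.default_rng(2)
n_droplets = 1000
J_het = 1e2            # heterogeneous nucleation rate coefficient (cm^-2 s^-1, illustrative)
A_mean = 1e-3          # mean ice-nucleating surface area per droplet (cm^2, illustrative)

t = np.linspace(0.0, 200.0, 401)

# identical ISA in every droplet -> single-exponential frozen fraction
f_identical = 1.0 - np.exp(-J_het * A_mean * t)

# log-normally distributed ISA -> non-exponential ("stretched") behavior
A = rng.lognormal(mean=np.log(A_mean) - 0.5, sigma=1.0, size=n_droplets)
f_variable = 1.0 - np.exp(-J_het * np.outer(t, A)).mean(axis=1)

i50 = np.searchsorted(t, 50.0)
print(f"frozen fraction at t = 50 s: {f_identical[i50]:.3f} (identical ISA) "
      f"vs {f_variable[i50]:.3f} (variable ISA)")
```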

  5. Stochastic epidemic outbreaks: why epidemics are like lasers

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira B.; Billings, Lora

    2004-05-01

    Many diseases, such as childhood diseases, dengue fever, and West Nile virus, appear to oscillate randomly as a function of seasonal environmental or social changes. Such oscillations appear to have a chaotic bursting character, although it is still uncertain how much is due to random fluctuations. Such bursting in the presence of noise is also observed in driven lasers. In this talk, I will show how noise can excite random outbreaks in simple models of seasonally driven epidemics, as well as in lasers. The models for both systems will be shown to share the same class of underlying topology, which plays a major role in causing the observed stochastic bursting.

  6. Persistence length measurements from stochastic single-microtubule trajectories.

    PubMed

    van den Heuvel, M G L; Bolhuis, S; Dekker, C

    2007-10-01

    We present a simple method to determine the persistence length of short submicrometer microtubule ends from their stochastic trajectories on kinesin-coated surfaces. The tangent angle of a microtubule trajectory is similar to a random walk, which is solely determined by the stiffness of the leading tip and the velocity of the microtubule. We demonstrate that even a single-microtubule trajectory suffices to obtain a reliable value of the persistence length. We do this by calculating the variance in the tangent trajectory angle of an individual microtubule. By averaging over many individual microtubule trajectories, we find that the persistence length of microtubule tips is 0.24 +/- 0.03 mm.
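    The estimator described above can be mimicked on synthetic data: in two dimensions the tangent angle performs a random walk with Var[theta(L)] = L/Lp, so a least-squares fit of the angle variance against contour length recovers the persistence length. The speeds and time steps below are illustrative.

```python
import numpy as np

# In 2-D the trajectory tangent angle performs a random walk with
# Var[theta(L)] = L / Lp, where L = v*t is the contour length travelled.
# Simulate trajectories with a known Lp and recover it from the angle variance.
rng = np.random.default_rng(3)
Lp_true = 0.24e-3          # persistence length (m), cf. the 0.24 mm reported above
v, dt = 0.5e-6, 0.1        # gliding speed (m/s) and sampling interval (s), illustrative
n_steps, n_traj = 2000, 200

dL = v * dt
dtheta = rng.normal(0.0, np.sqrt(dL / Lp_true), size=(n_traj, n_steps))
theta = np.cumsum(dtheta, axis=1)                 # tangent angle along each trajectory

L = dL * np.arange(1, n_steps + 1)
var_theta = theta.var(axis=0)
slope = np.sum(L * var_theta) / np.sum(L * L)     # least-squares slope through the origin
print(f"recovered Lp = {1.0 / slope * 1e3:.3f} mm (true value {Lp_true * 1e3:.2f} mm)")
```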

  7. A fixed-memory moving, expanding window for obtaining scatter corrections in X-ray CT and other stochastic averages

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.; Pintar, Adam L.

    2015-11-01

    A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
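    The published algorithm is not reproduced here, but one simple way to realize the same idea, bins that double in size so that a roughly constant fraction of the most recent samples is retained with fixed memory, is sketched below.

```python
import numpy as np

# One simple realization of the idea above (not necessarily the published
# algorithm): keep only two accumulators, a completed bin and a growing bin
# whose capacity doubles each time it fills.  At any point the average covers
# roughly the most recent 50-75% of the samples, with fixed memory, so early
# (e.g. badly converged) samples are gradually forgotten.
class ExpandingWindowMean:
    def __init__(self, shape):
        self.cap = 1                          # capacity of the growing bin
        self.old_sum = np.zeros(shape); self.old_n = 0
        self.cur_sum = np.zeros(shape); self.cur_n = 0

    def push(self, x):
        self.cur_sum += x
        self.cur_n += 1
        if self.cur_n == self.cap:            # growing bin is full:
            self.old_sum, self.old_n = self.cur_sum, self.cur_n
            self.cur_sum = np.zeros_like(self.old_sum)
            self.cur_n = 0
            self.cap *= 2                     # next bin is twice as large

    def mean(self):
        n = self.old_n + self.cur_n
        return (self.old_sum + self.cur_sum) / max(n, 1)

# Usage: a noisy sequence of 1D arrays whose early samples are biased.
rng = np.random.default_rng(4)
w = ExpandingWindowMean(shape=8)
for i in range(10_000):
    bias = 5.0 if i < 500 else 0.0            # early transient
    w.push(1.0 + bias + rng.normal(0.0, 1.0, 8))
print("windowed mean ~", np.round(w.mean(), 2))   # close to 1; transient forgotten
```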

  8. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
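    The hybrid scheme builds on the exact stochastic simulation algorithm (SSA); for orientation, the sketch below shows a plain Gillespie direct-method simulation of a birth-death system, not the hybrid method itself. Rate constants are illustrative.

```python
import numpy as np

# Exact Gillespie (direct-method) SSA for a birth-death system:
#   0 -> X at rate k,   X -> 0 at rate g*X.
def gillespie_birth_death(k=10.0, g=0.1, x0=0, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k, g * x                    # propensities of birth and death
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)       # waiting time to the next reaction
        if rng.random() < a1 / a0:           # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print("mean copy number (expected ~ k/g = 100):", states[len(states) // 2:].mean())
```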

  9. Solution of Stochastic Capital Budgeting Problems in a Multidivisional Firm.

    DTIC Science & Technology

    1980-06-01

    linear programming with simple recourse (see, for example, Dantzig (9) or Ziemba (35)) and has been applied to capital budgeting problems with...New York, 1972 34. Weingartner, H.M., Mathematical Programming and Analysis of Capital Budgeting Problems, Markham Pub. Co., Chicago, 1967 35. Ziemba

  10. Partial Ordering and Stochastic Resonance in Discrete Memoryless Channels

    DTIC Science & Technology

    2012-05-01

    Methods for Underwater Wireless Sensor Networks", which is to analyze and develop noncoherent communication methods at the physical layer for target...Capacity Behavior for Simple Models of Optical Fiber Communication," 8th International Conf. on Communications, COMM 2010, Bucharest, pp. 1-6, July 2010

  11. Online POMDP Algorithms for Very Large Observation Spaces

    DTIC Science & Technology

    2017-06-06

    Luo, Yuanfu, Haoyu Bai...and Wee Sun Lee. "Adaptive stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593. 2015.

  12. An Analysis of Stochastic Duels Involving Fixed Rates of Fire

    DTIC Science & Technology

    The thesis presents an analysis of stochastic duels involving two opposing weapon systems with constant rates of fire. The duel was developed as a...process stochastic duels. The analysis was then extended to the two-versus-one duel, where the three weapon systems were assumed to have fixed rates of fire.

  13. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.

    PubMed

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K

    2011-04-15

    The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.

  14. A kinetic theory for age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Chou, Tom; Greenman, Chris

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but they are structurally unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Conversely, current theories that include size-dependent population dynamics (e.g., carrying capacity) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a BBGKY-like hierarchy. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution. NSF.

  15. Chaotic Expansions of Elements of the Universal Enveloping Superalgebra Associated with a Z2-graded Quantum Stochastic Calculus

    NASA Astrophysics Data System (ADS)

    Eyre, T. M. W.

    Given a polynomial function f of classical stochastic integrator processes whose differentials satisfy a closed Ito multiplication table, the stochastic derivative of f can be expressed by a chaotic expansion formula. We establish an analogue of this formula in the form of a chaotic decomposition for Z2-graded theories of quantum stochastic calculus based on the natural coalgebra structure of the universal enveloping superalgebra.

  16. Stochastic dynamics of melt ponds and sea ice-albedo climate feedback

    NASA Astrophysics Data System (ADS)

    Sudakov, Ivan

    The evolution of melt ponds on the Arctic sea surface is a complicated stochastic process. We suggest a low-order model with ice-albedo feedback which describes the stochastic dynamics of the geometrical characteristics of melt ponds. The model is a stochastic dynamical system model of energy balance in the climate system. We describe the equilibria in this model and conclude that the transition in the fractal dimension of melt ponds affects the shape of the sea-ice albedo curve.

  17. Effects of Stochastic Traffic Flow Model on Expected System Performance

    DTIC Science & Technology

    2012-12-01

    NSWC-PCD has made considerable improvements to their pedestrian flow modeling. In addition to the linear paths, the 2011 version now includes...using stochastic paths. 2.2 Linear Paths vs. Stochastic Paths 2.2.1 Linear Paths and Direct Maximum Pd Calculation Modeling pedestrian traffic flow...as a stochastic process begins with the linear path model. Let the detection area be R x C voxels. This creates C^2 total linear paths, path(Cs

  18. Stochastic description of quantum Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Yan, Yun-An; Shao, Jiushu

    2016-08-01

    Classical Brownian motion has been well investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution; rather, they are correlated due to the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, but not efficient for long-time dynamics, as the statistical errors grow rapidly. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach, in the form of a set of deterministic equations of motion, is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems such as the dynamical description of quantum phase transition (localization) and the numerical stability of the trace-conserving, nonlinear stochastic Liouville equation are outlined.

  19. Inter-species competition-facilitation in stochastic riparian vegetation dynamics.

    PubMed

    Tealdi, Stefano; Camporeale, Carlo; Ridolfi, Luca

    2013-02-07

    Riparian vegetation is a highly dynamic community that lives on river banks and which depends to a great extent on the fluvial hydrology. The stochasticity of the discharge and erosion/deposition processes in fact play a key role in determining the distribution of vegetation along a riparian transect. These abiotic processes interact with biotic competition/facilitation mechanisms, such as plant competition for light, water, and nutrients. In this work, we focus on the dynamics of plants characterized by three components: (1) stochastic forcing due to river discharges, (2) competition for resources, and (3) inter-species facilitation due to the interplay between vegetation and fluid dynamics processes. A minimalist stochastic bio-hydrological model is proposed for the dynamics of the biomass of two vegetation species: one species is assumed dominant and slow-growing, the other is subdominant, but fast-growing. The stochastic model is solved analytically and the probability density function of the plant biomasses is obtained as a function of both the hydrologic and biologic parameters. The impact of the competition/facilitation processes on the distribution of vegetation species along the riparian transect is investigated and remarkable effects are observed. Finally, a good qualitative agreement is found between the model results and field data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Ordering kinetics in the long-period superlattice alloy Cu0.79 Pd0.21

    NASA Astrophysics Data System (ADS)

    Wang, X.; Mainville, J.; Ludwig, K.; Flament, X.; Finel, A.; Caudron, R.

    2005-07-01

    The kinetics of long-period superlattice (LPS) formation from the disordered state has been examined in a Cu0.79Pd0.21 alloy that exhibits a one-dimensional LPS ordered state. Time-resolved x-ray scattering shows that, following a rapid temperature quench from the disordered state into the LPS region of the phase diagram, the satellite peaks initially grow more quickly than do the central integer-order superlattice peaks. During this process, the satellite peak position, which is inversely related to the average modulation wavelength 2M , initially decreases rapidly, then reaches a minimum and relaxes slowly back toward its new equilibrium position. In the later stages of the LPS formation process, the satellite and central integer-order superlattice peaks narrow in a manner consistent with t1/2 domain coarsening. A simple stochastic model of the partially ordered structure was developed to better understand the relationships between peak widths.

  1. Mutation-selection equilibrium in games with multiple strategies.

    PubMed

    Antal, Tibor; Traulsen, Arne; Ohtsuki, Hisashi; Tarnita, Corina E; Nowak, Martin A

    2009-06-21

    In evolutionary games the fitness of individuals is not constant but depends on the relative abundance of the various strategies in the population. Here we study general games among n strategies in populations of large but finite size. We explore stochastic evolutionary dynamics under weak selection, but for any mutation rate. We analyze the frequency-dependent Moran process in well-mixed populations, but almost identical results are found for the Wright-Fisher and Pairwise Comparison processes. Surprisingly simple conditions specify whether a strategy is more abundant on average than 1/n, or than another strategy, in the mutation-selection equilibrium. We find one condition that holds for low mutation rate and another condition that holds for high mutation rate. A linear combination of these two conditions holds for any mutation rate. Our results allow a complete characterization of n×n games in the limit of weak selection.
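    The analysis itself is exact; the sketch below only illustrates the kind of process being analyzed, a frequency-dependent Moran process with mutation in a finite population, and estimates average strategy abundances by simulation. The payoff matrix, population size, selection intensity and mutation rate are arbitrary choices.

```python
import numpy as np

# Frequency-dependent Moran process with mutation (illustrative parameters).
rng = np.random.default_rng(5)
A = np.array([[3.0, 0.0],          # payoff matrix of a 2-strategy game
              [5.0, 1.0]])
N, delta, mu, steps = 100, 0.01, 0.01, 50_000    # population, selection, mutation, steps

pop = rng.integers(0, 2, N)                      # strategy carried by each individual
abundance = np.zeros(2)
for _ in range(steps):
    counts = np.bincount(pop, minlength=2)
    # expected payoff of each strategy against the rest of the population
    payoff = (A @ counts - np.diag(A)) / (N - 1)
    fitness = 1.0 + delta * payoff               # weak selection
    # birth proportional to fitness, death uniformly at random
    parent = rng.choice(N, p=fitness[pop] / fitness[pop].sum())
    offspring = pop[parent]
    if rng.random() < mu:                        # mutation to a random strategy
        offspring = rng.integers(2)
    pop[rng.integers(N)] = offspring
    abundance += np.bincount(pop, minlength=2)

print("time-averaged strategy frequencies:", np.round(abundance / (steps * N), 3))
```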

  2. Tissue fusion over nonadhering surfaces

    PubMed Central

    Nier, Vincent; Deforet, Maxime; Duclos, Guillaume; Yevick, Hannah G.; Cochet-Escartin, Olivier; Marcq, Philippe; Silberzan, Pascal

    2015-01-01

    Tissue fusion eliminates physical voids in a tissue to form a continuous structure and is central to many processes in development and repair. Fusion events in vivo, particularly in embryonic development, often involve the purse-string contraction of a pluricellular actomyosin cable at the free edge. However, in vitro, adhesion of the cells to their substrate favors a closure mechanism mediated by lamellipodial protrusions, which has prevented a systematic study of the purse-string mechanism. Here, we show that monolayers can cover well-controlled mesoscopic nonadherent areas much larger than a cell size by purse-string closure and that active epithelial fluctuations are required for this process. We have formulated a simple stochastic model that includes purse-string contractility, tissue fluctuations, and effective friction to qualitatively and quantitatively account for the dynamics of closure. Our data suggest that, in vivo, tissue fusion adapts to the local environment by coordinating lamellipodial protrusions and purse-string contractions. PMID:26199417

  3. Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses

    PubMed Central

    Das, Jayajit

    2016-01-01

    Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. PMID:26958894

  4. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process.

    PubMed

    Jahn, Patrick; Berg, Rune W; Hounsgaard, Jørn; Ditlevsen, Susanne

    2011-11-01

    Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein-Uhlenbeck process is especially popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models, such as the square-root model or models with a non-linear drift, are also applied. Data that can be described by such models have to be stationary, and thus the simple models can only be applied over short time windows. However, experimental data show varying time constants, state-dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein-Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity, which the neuron is exposed to, can be reasonably estimated to be a threshold version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
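    Two ingredients of the model, a square-root diffusion for the membrane potential and spikes generated with an intensity depending exponentially on the potential, can be sketched as follows. The parameters are illustrative, and the jump component and time-inhomogeneity of the full model are omitted.

```python
import numpy as np

# Square-root (state-dependent) diffusion for the membrane potential plus an
# exponential escape-rate firing intensity; all parameters are illustrative.
rng = np.random.default_rng(6)
dt, n_steps = 1e-4, 200_000                  # 20 s at 0.1 ms resolution
tau, v_rest, v_min = 0.02, -65.0, -80.0      # s, mV, mV
sigma = 5.0                                  # diffusion scale
lam0, v_theta, dv = 10.0, -58.0, 2.0         # firing intensity parameters (Hz, mV, mV)

v = v_rest
spike_times = []
for i in range(n_steps):
    diff = sigma * np.sqrt(max(v - v_min, 0.0))       # square-root diffusion term
    v += (v_rest - v) / tau * dt + diff * np.sqrt(dt) * rng.normal()
    rate = lam0 * np.exp((v - v_theta) / dv)          # intensity exponential in v
    if rng.random() < rate * dt:                      # thin the Poisson process
        spike_times.append(i * dt)

duration = n_steps * dt
print(f"{len(spike_times)} spikes in {duration:.0f} s "
      f"(~{len(spike_times) / duration:.2f} Hz)")
```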

  5. A Pumping Algorithm for Ergodic Stochastic Mean Payoff Games with Perfect Information

    NASA Astrophysics Data System (ADS)

    Boros, Endre; Elbassioni, Khaled; Gurvich, Vladimir; Makino, Kazuhisa

    In this paper, we consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V, E) with V = V_B ∪ V_W ∪ V_R, local rewards r: E → R, and three types of vertices: black V_B, white V_W, and random V_R. The game is played by two players, White and Black: When the play is at a white (black) vertex v, White (Black) selects an outgoing arc (v,u). When the play is at a random vertex v, a vertex u is picked with the given probability p(v,u). In all cases, Black pays White the value r(v,u). The play continues forever, and White aims to maximize (Black aims to minimize) the limiting mean (that is, average) payoff. It was recently shown in [7] that BWR-games are polynomially equivalent with the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games (SSG's), stochastic parity games, and Markov decision processes. In this paper, we give a new algorithm for solving BWR-games in the ergodic case, that is when the optimal values do not depend on the initial position. Our algorithm solves a BWR-game by reducing it, using a potential transformation, to a canonical form in which the optimal strategies of both players and the value for every initial position are obvious, since a locally optimal move in it is optimal in the whole game. We show that this algorithm is pseudo-polynomial when the number of random nodes is constant. We also provide an almost matching lower bound on its running time, and show that this bound holds for a wider class of algorithms. Let us add that the general (non-ergodic) case is at least as hard as SSG's, for which no pseudo-polynomial algorithm is known.

  6. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Husler and...J. Husler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes...stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Husler and M.R. Leadbetter, Compound

  7. Mechanical properties of transcription

    NASA Astrophysics Data System (ADS)

    Sevier, Stuart; Levine, Herbert

    Over the last several decades it has been increasingly recognized that both stochastic and mechanical processes play a central role in transcription. Though many aspects have been explained, a number of fundamental properties remain undeveloped. Recent results have pointed to mechanical feedback as the source of transcriptional bursting and DNA supercoiling, but a reconciliation of this perspective with preexisting views of transcription is lacking. In this work we present a simple model of transcription in which RNA elongation, RNA polymerase rotation and DNA supercoiling are coupled. The mechanical properties of each object form a foundational framework for understanding the physical nature of transcription. The resulting model can explain several important aspects of chromatin structure and generates a number of predictions for the mechanical properties of transcription.

  8. Diffusion in randomly perturbed dissipative dynamics

    NASA Astrophysics Data System (ADS)

    Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer

    2014-11-01

    Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.

  9. Accuracy of binding mode prediction with a cascadic stochastic tunneling method.

    PubMed

    Fischer, Bernhard; Basili, Serena; Merlitz, Holger; Wenzel, Wolfgang

    2007-07-01

    We investigate the accuracy of the binding modes predicted for 83 complexes of the high-resolution subset of the ASTEX/CCDC receptor-ligand database using the atomistic FlexScreen approach with a simple forcefield-based scoring function. The median RMS deviation between experimental and predicted binding mode was just 0.83 A. Over 80% of the ligands dock within 2 A of the experimental binding mode, for 60 complexes the docking protocol locates the correct binding mode in all of ten independent simulations. Most docking failures arise because (a) the experimental structure clashed in our forcefield and is thus unattainable in the docking process or (b) because the ligand is stabilized by crystal water. 2007 Wiley-Liss, Inc.

  10. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.

  11. Modeling stochasticity and robustness in gene regulatory networks.

    PubMed

    Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis

    2009-06-15

    Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.

  12. On the Derivation of the Schroedinger Equation from Stochastic Mechanics.

    NASA Astrophysics Data System (ADS)

    Wallstrom, Timothy Clarke

    The thesis is divided into four largely independent chapters. The first three chapters treat mathematical problems in the theory of stochastic mechanics. The fourth chapter deals with stochastic mechanics as a physical theory and shows that the Schrodinger equation cannot be derived from existing formulations of stochastic mechanics, as had previously been believed. Since the drift coefficients of stochastic mechanical diffusions are undefined on the nodes, or zeros of the density, an important problem has been to show that the sample paths stay away from the nodes. In Chapter 1, it is shown that for a smooth wavefunction, the closest approach to the nodes can be bounded solely in terms of the time-integrated energy. The ergodic properties of stochastic mechanical diffusions are greatly complicated by the tendency of the particles to avoid the nodes. In Chapter 2, it is shown that a sufficient condition for a stationary process to be ergodic is that there exist positive t and c such that for all x and y, p^t(x,y) > c p(y), and this result is applied to show that the set of spin-1/2 diffusions is uniformly ergodic. In stochastic mechanics, the Bopp-Haag-Dankel diffusions on R^3 × SO(3) are used to represent particles with spin. Nelson has conjectured that in the limit as the particle's moment of inertia I goes to zero, the projections of the Bopp-Haag-Dankel diffusions onto R^3 converge to a Markovian limit process. This conjecture is proved for the spin-1/2 case in Chapter 3, and the limit process is identified as the diffusion naturally associated with the solution to the regular Pauli equation. In Chapter 4 it is shown that the general solution of the stochastic Newton equation does not correspond to a solution of the Schrodinger equation, and that there are solutions to the Schrodinger equation which do not satisfy the Guerra-Morato Lagrangian variational principle. These observations are shown to apply equally to other existing formulations of stochastic mechanics, and it is argued that these difficulties represent fundamental inadequacies in the physical foundation of stochastic mechanics.

  13. Biological signatures of dynamic river networks from a coupled landscape evolution and neutral community model

    NASA Astrophysics Data System (ADS)

    Stokes, M.; Perron, J. T.

    2017-12-01

    Freshwater systems host exceptionally species-rich communities whose spatial structure is dictated by the topology of the river networks they inhabit. Over geologic time, river networks are dynamic; drainage basins shrink and grow, and river capture establishes new connections between previously separated regions. It has been hypothesized that these changes in river network structure influence the evolution of life by exchanging and isolating species, perhaps boosting biodiversity in the process. However, no general model exists to predict the evolutionary consequences of landscape change. We couple a neutral community model of freshwater organisms to a landscape evolution model in which the river network undergoes drainage divide migration and repeated river capture. Neutral community models are macro-ecological models that include stochastic speciation and dispersal to produce realistic patterns of biodiversity. We explore the consequences of three modes of speciation - point mutation, time-protracted, and vicariant (geographic) speciation - by tracking patterns of diversity in time and comparing the final result to an equilibrium solution of the neutral model on the final landscape. Under point mutation, a simple model of stochastic and instantaneous speciation, the results are identical to the equilibrium solution and indicate the dominance of the species-area relationship in forming patterns of diversity. The number of species in a basin is proportional to its area, and regional species richness reaches its maximum when drainage area is evenly distributed among sub-basins. Time-protracted speciation is also modeled as a stochastic process, but in order to produce more realistic rates of diversification, speciation is not assumed to be instantaneous. Rather, each new species must persist for a certain amount of time before it is considered to be established. When vicariance (geographic speciation) is included, there is a transient signature of increased regional diversity after river capture. The results indicate that the mode of speciation and the rate of speciation relative to the rate of divide migration determine the evolutionary signature of river capture.

  14. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being studied extensively using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses, and two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between these two approaches are their impact on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor best fits within each.

  15. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in the financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener process and Poisson process). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
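    The optimization itself (the HJB equation and efficient frontier) is analytical in the paper; the sketch below only simulates the kind of jump-diffusion (Wiener plus compound Poisson) dynamics such models are built on, with illustrative parameter values.

```python
import numpy as np

# Merton-style jump-diffusion for the log-price: Wiener diffusion plus a
# compound Poisson jump component.  All parameter values are illustrative.
rng = np.random.default_rng(7)
mu, sigma = 0.08, 0.20                     # drift and diffusion of the log-price
lam, jump_mu, jump_sig = 0.5, -0.05, 0.10  # jump intensity and log-jump size
S0, T, n_steps, n_paths = 1.0, 1.0, 252, 10_000

dt = T / n_steps
Z = rng.normal(size=(n_paths, n_steps))
N = rng.poisson(lam * dt, size=(n_paths, n_steps))                  # jumps per step
J = jump_mu * N + jump_sig * np.sqrt(N) * rng.normal(size=N.shape)  # summed log-jumps
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z + J
S_T = S0 * np.exp(log_returns.sum(axis=1))

print(f"mean terminal price: {S_T.mean():.4f}, "
      f"5% quantile: {np.quantile(S_T, 0.05):.4f}")
```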

  16. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
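    For orientation, discrete-time forms of the two stochastic models mentioned above, a first-order Gauss-Markov process and a random walk, are sketched below. The time constants and noise levels are illustrative and are not the values used with GEODYNII.

```python
import numpy as np

# Discrete-time sketches (values are illustrative, not those used with GEODYNII):
#   first-order Gauss-Markov:  x_{k+1} = exp(-dt/tau) * x_k + w_k
#   random walk:               y_{k+1} = y_k + v_k
rng = np.random.default_rng(8)
dt, n = 300.0, 2000              # 300 s epochs
tau, var_gm = 3600.0, 1e-3       # correlation time (s), steady-state variance
var_rw = 1e-8                    # random-walk process noise variance per epoch

phi = np.exp(-dt / tau)
w_std = np.sqrt(var_gm * (1.0 - phi**2))   # keeps the Gauss-Markov variance at var_gm

x = np.zeros(n)                  # e.g. solar radiation pressure scale correction
y = np.zeros(n)                  # e.g. tropospheric refraction correction
for k in range(n - 1):
    x[k + 1] = phi * x[k] + rng.normal(0.0, w_std)
    y[k + 1] = y[k] + rng.normal(0.0, np.sqrt(var_rw))

print(f"Gauss-Markov sample std: {x.std():.4f} (steady state ~ {np.sqrt(var_gm):.4f})")
print(f"random-walk drift after {n} epochs: {y[-1]:.2e}")
```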

  17. Conserving the linear momentum in stochastic dynamics: Dissipative particle dynamics as a general strategy to achieve local thermostatization in molecular dynamics simulations.

    PubMed

    Passler, Peter P; Hofer, Thomas S

    2017-02-15

    Stochastic dynamics is a widely employed strategy to achieve local thermostatization in molecular dynamics simulation studies; however, it suffers from an inherent violation of momentum conservation. Although this shortcoming has little impact on structural and short-time dynamic properties, it can be shown that dynamics in the long-time limit such as diffusion is strongly dependent on the respective thermostat setting. Application of the methodically similar dissipative particle dynamics (DPD) provides a simple, effective strategy to ensure the advantages of local, stochastic thermostatization while at the same time the linear momentum of the system remains conserved. In this work, the key parameters to employ the DPD thermostats in the framework of periodic boundary conditions are investigated, in particular the dependence of the system properties on the size of the DPD region as well as the treatment of forces near the cutoff. Structural and dynamical data for light and heavy water as well as a Lennard-Jones fluid have been compared to simulations executed via stochastic dynamics as well as via use of the widely employed Nose-Hoover chain and Berendsen thermostats. It is demonstrated that a small size of the DPD region is sufficient to achieve local thermalization, while at the same time artifacts in the self-diffusion characteristic for stochastic dynamics are eliminated. © 2016 Wiley Periodicals, Inc.

  18. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ_c^2(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than a sample mean (the decay exponent is typically greater than unity), we will introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
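    The LLN analogy can be checked numerically in a few lines: the variance of a sample mean of N independent draws decays as 1/N, mirroring the approximate t^(-1) decay of the scalar variance quoted above.

```python
import numpy as np

# Variance of a sample mean of N independent draws decays as 1/N, the analogue
# of the t^(-1) decay of the scalar variance in the mixing model.
rng = np.random.default_rng(9)
n_realizations = 5000
for N in (10, 100, 1000):
    means = rng.normal(0.0, 1.0, size=(n_realizations, N)).mean(axis=1)
    print(f"N = {N:5d}: Var[sample mean] = {means.var():.5f} (1/N = {1.0 / N:.5f})")
```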

  19. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 104-106 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  20. Tuning stochastic transition rates in a bistable genetic network.

    NASA Astrophysics Data System (ADS)

    Chickarmane, Vijay; Peterson, Carsten

    2009-03-01

    We investigate the stochastic dynamics of a simple genetic network, a toggle switch, in which the system makes transitions between the two alternative states. Our interest is in exploring whether such stochastic transitions, which occur due to intrinsic noise such as transcriptional and degradation events, can be slowed down or speeded up without changing the mean expression levels of the two genes which comprise the toggle network. Such tuning is achieved by linking a signaling network to the toggle switch. The signaling network comprises a protein, which can exist either in an active (phosphorylated) or inactive (dephosphorylated) form, and whose state is determined by one of the genetic network components. The active form of the protein in turn feeds back on the dynamics of the genetic network. We find that the rate of stochastic transitions from one state to the other is determined essentially by the speed of phosphorylation, and hence the rate can be modulated by varying the phosphatase levels. We hypothesize that such a network architecture can be implemented as a general mechanism for controlling transition rates and discuss applications in population studies of two differentiated cell lineages, e.g., the myeloid/erythroid lineage in hematopoiesis.

  1. A Mapping Model for Magnetic Fields with q-profile Variations Typical of Internal Transport Barrier Experiments

    NASA Astrophysics Data System (ADS)

    Rapoport, B. I.; Pavlenko, I.; Weyssow, B.; Carati, D.

    2002-11-01

    Recent studies of ion and electron transport indicate that the safety factor profile, q(r), affects internal transport barrier (ITB) formation in magnetic confinement devices [1, 2]. These studies are consistent with experimental observations that low shear suppresses magnetic island interaction and associated stochasticity when the ITB is formed [3]. In this sense the position and quality of the ITB depend on the stochasticity of the magnetic field, and can be controlled by q(r). This study explores effects of the q-profile on magnetic field stochasticity using two-dimensional mapping techniques. Q-profiles typical of ITB experiments are incorporated into Hamiltonian maps to investigate the relation between magnetic field stochasticity and ITB parameters predicted by other models. It is shown that the mapping technique generates results consistent with these predictions, and suggested that Hamiltonian mappings can be useful as simple and computationally inexpensive approximation methods for describing the magnetic field in ITB experiments. 1. I. Voitsekhovitch et al. 29th EPS Conference on Plasma Physics and Controlled Fusion (2002). O-4.04. 2. G.M.D. Hogeweij et al. Nucl. Fusion. 38 (1998): 1881. 3. K.A. Razumova et al. Plasma Phys. Contr. Fusion. 42 (2000): 973.
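    The q-profile maps used in the study are not reproduced in the abstract; the Chirikov standard map below is the textbook two-dimensional Hamiltonian mapping of the same family and shows how increasing a perturbation parameter K drives orbits from regular to stochastic behavior.

```python
import numpy as np

# Chirikov standard map:  p_{n+1}     = p_n + K*sin(theta_n)        (mod 2*pi)
#                         theta_{n+1} = theta_n + p_{n+1}           (mod 2*pi)
def standard_map(theta0, p0, K, n_iter):
    theta, p = theta0, p0
    orbit = np.empty((n_iter, 2))
    for i in range(n_iter):
        p = (p + K * np.sin(theta)) % (2.0 * np.pi)
        theta = (theta + p) % (2.0 * np.pi)
        orbit[i] = theta, p
    return orbit

for K in (0.5, 1.5):                 # below / above the global stochasticity threshold
    orbit = standard_map(theta0=0.1, p0=0.5, K=K, n_iter=5000)
    print(f"K = {K}: spread of the momentum coordinate = {orbit[:, 1].std():.3f}")
```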

  2. Characteristic effects of stochastic oscillatory forcing on neural firing: analytical theory and comparison to paddlefish electroreceptor data.

    PubMed

    Bauermeister, Christoph; Schwalger, Tilo; Russell, David F; Neiman, Alexander B; Lindner, Benjamin

    2013-01-01

    Stochastic signals with pronounced oscillatory components are frequently encountered in neural systems. Input currents to a neuron in the form of stochastic oscillations could be of exogenous origin, e.g. sensory input or synaptic input from a network rhythm. They shape spike firing statistics in a characteristic way, which we explore theoretically in this report. We consider a perfect integrate-and-fire neuron that is stimulated by a constant base current (to drive regular spontaneous firing), along with Gaussian narrow-band noise (a simple example of stochastic oscillations), and a broadband noise. We derive expressions for the nth-order interval distribution, its variance, and the serial correlation coefficients of the interspike intervals (ISIs) and confirm these analytical results by computer simulations. The theory is then applied to experimental data from electroreceptors of paddlefish, which have two distinct types of internal noisy oscillators, one forcing the other. The theory provides an analytical description of their afferent spiking statistics during spontaneous firing, and replicates a pronounced dependence of ISI serial correlation coefficients on the relative frequency of the driving oscillations, and furthermore allows extraction of certain parameters of the intrinsic oscillators embedded in these electroreceptors.
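    The following sketch illustrates a simulation counterpart of this setup: a perfect integrate-and-fire neuron driven by a constant base current, broadband white noise, and narrow-band Gaussian noise generated here as a stochastically driven damped harmonic oscillator. The parameter values and the particular way the narrow-band noise is produced are illustrative assumptions, and the paper's analytical expressions are not reproduced; the script only estimates interspike-interval statistics (CV and serial correlation coefficients) numerically.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, T = 1e-4, 100.0
        mu, v_th = 20.0, 1.0          # base current and threshold: ~20 spikes/s (illustrative)
        D_white = 0.05                # broadband (white) noise intensity

        # Narrow-band Gaussian noise: damped harmonic oscillator driven by white noise,
        # scaled so its stationary standard deviation is sigma_nb.
        f0, gamma_nb, sigma_nb = 5.0, 1.0, 5.0
        omega0 = 2 * np.pi * f0
        D_nb = sigma_nb**2 * 2 * gamma_nb * omega0**2

        v, s, sdot = 0.0, 0.0, 0.0
        spike_times = []
        for step in range(int(T / dt)):
            sddot = -2 * gamma_nb * sdot - omega0**2 * s + np.sqrt(2 * D_nb / dt) * rng.normal()
            sdot += sddot * dt
            s += sdot * dt
            v += (mu + s) * dt + np.sqrt(2 * D_white * dt) * rng.normal()
            if v >= v_th:             # perfect integrator: subtract threshold, no refractoriness
                spike_times.append(step * dt)
                v -= v_th

        isi = np.diff(spike_times)

        def rho(x, k):                # serial correlation coefficient of the ISIs at lag k
            a, b = x[:-k] - x.mean(), x[k:] - x.mean()
            return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

        print("mean rate (Hz):", len(spike_times) / T)
        print("CV of ISIs    :", isi.std() / isi.mean())
        print("rho_1, rho_2  :", rho(isi, 1), rho(isi, 2))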

  3. Image analysis methods for assessing levels of image plane nonuniformity and stochastic noise in a magnetic resonance image of a homogeneous phantom.

    PubMed

    Magnusson, P; Olsson, L E

    2000-08-01

    Magnetic resonance image plane nonuniformity and stochastic noise are properties that greatly influence the outcome of quantitative magnetic resonance imaging (MRI) evaluations such as gel dosimetry measurements using MRI. To study these properties, robust and accurate image analysis methods are required. New nonuniformity level assessment methods were designed, since previous methods were found to be insufficiently robust and accurate. The new and previously reported nonuniformity level assessment methods were analyzed with respect to, for example, insensitivity to stochastic noise; and previously reported stochastic noise level assessment methods with respect to insensitivity to nonuniformity. Using the same image data, different methods were found to assess significantly different levels of nonuniformity. Nonuniformity levels obtained using methods that count pixels in an intensity interval, and obtained using methods that use only intensity values, were found not to be comparable. The latter were found preferable, since they assess the quantity intrinsically sought. A new method which calculates a deviation image, with every pixel representing the deviation from a reference intensity, was least sensitive to stochastic noise. Furthermore, unlike any other analyzed method, it includes all intensity variations across the phantom area and allows for studies of nonuniformity shapes. This new method was designed for accurate studies of nonuniformities in gel dosimetry measurements, but could also be used beneficially in quality assurance and acceptance testing of MRI, scintillation camera, and computed tomography systems. The stochastic noise level was found to be greatly method dependent. Two methods were found to be insensitive to nonuniformity and also simple to use in practice. One method assesses the stochastic noise level as the average of the levels at five different positions within the phantom area, and the other assesses the stochastic noise in a region outside the phantom area.
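    A minimal numpy sketch of the deviation-image idea on a synthetic phantom is given below. The reference intensity is assumed here to be the mean over a small central region, and the noise level is estimated from a background region outside the phantom; these choices, the synthetic image, and all numbers are placeholders rather than the paper's actual definitions.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic "homogeneous phantom" image: smooth bias field plus stochastic noise.
        ny, nx = 128, 128
        yy, xx = np.mgrid[0:ny, 0:nx]
        inside = (yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2
        bias = 1.0 + 0.05 * (xx - 64) / 64            # 5% linear nonuniformity
        phantom = np.zeros((ny, nx))
        phantom[inside] = (100.0 * bias)[inside]
        image = phantom + rng.normal(0.0, 2.0, phantom.shape)

        # Deviation image: pixelwise deviation from a reference intensity
        # (assumed here to be the mean over a small central region).
        reference = image[60:68, 60:68].mean()
        deviation = np.where(inside, 100.0 * (image - reference) / reference, np.nan)
        print("nonuniformity range (%):", np.nanmin(deviation), np.nanmax(deviation))

        # Stochastic noise estimate from a region outside the phantom.
        background = image[~inside]
        print("background noise std:", background.std())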

  4. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2017-04-01

    In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10.1175/JCLI-D-15-0746.1
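    The following toy script sketches the general idea of such a perturbation: zero-mean red (AR(1)) noise with a prescribed amplitude and decorrelation time added to a temperature tendency. The single-column "ocean" and all parameter values are placeholders; this is not the model code used in the study.

        import numpy as np

        rng = np.random.default_rng(3)

        dt = 86400.0            # time step: one day, in seconds
        n_steps = 3650          # ten model years
        tau = 30 * 86400.0      # noise decorrelation time (placeholder: 30 days)
        sigma = 0.1 / 86400.0   # noise amplitude (placeholder: 0.1 K/day as a tendency)

        # AR(1) (red noise) perturbation with zero mean, std sigma, decorrelation time tau.
        phi = np.exp(-dt / tau)
        eps = np.zeros(n_steps)
        for i in range(1, n_steps):
            eps[i] = phi * eps[i - 1] + sigma * np.sqrt(1 - phi**2) * rng.normal()

        # Toy "ocean temperature" relaxing to a target, with the stochastic tendency added.
        T, T_target, relax = 10.0, 10.0, 1.0 / (365 * 86400.0)
        history = []
        for i in range(n_steps):
            dTdt = -relax * (T - T_target) + eps[i]   # deterministic tendency + noise
            T += dTdt * dt
            history.append(T)

        print("mean/std of perturbed temperature:", np.mean(history), np.std(history))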

  5. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks

    PubMed Central

    Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.

    2015-01-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406

  6. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.

    PubMed

    Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M

    2015-09-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.

  7. Stochastic flow shop scheduling of overlapping jobs on tandem machines in application to optimizing the US Army's deliberate nuclear, biological, and chemical decontamination process, (final report). Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, V.

    1991-05-01

    The U.S. Army's detailed equipment decontamination process is a stochastic flow shop which has N independent, non-identical jobs (vehicles) with overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling, the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system. Keywords: Decontamination, Stochastic flow shop, Scheduling, Stochastic scheduling, Minimization of the makespan, SEPT-LEPT sequences, Flow shop line balancing, ANBACIS.

  8. Unified picture of strong-coupling stochastic thermodynamics and time reversals

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-04-01

    Strong-coupling statistical thermodynamics is formulated as the Hamiltonian dynamics of an observed system interacting with another unobserved system (a bath). It is shown that the entropy production functional of stochastic thermodynamics, defined as the log ratio of forward and backward system path probabilities, is in a one-to-one relation with the log ratios of the joint initial conditions of the system and the bath. A version of strong-coupling statistical thermodynamics where the system-bath interaction vanishes at the beginning and at the end of a process is, as is also weak-coupling stochastic thermodynamics, related to the bath initially in equilibrium by itself. The heat is then the change of bath energy over the process, and it is discussed when this heat is a functional of the system history alone. The version of strong-coupling statistical thermodynamics introduced by Seifert and Jarzynski is related to the bath initially in conditional equilibrium with respect to the system. This leads to heat as another functional of the system history which needs to be determined by thermodynamic integration. The log ratio of forward and backward system path probabilities in a stochastic process is finally related to log ratios of the initial conditions of a combined system and bath. It is shown that the entropy production formulas of stochastic processes under a general class of time reversals are given by the differences of bath energies in a larger underlying Hamiltonian system. The paper highlights the centrality of time reversal in stochastic thermodynamics, also in the case of strong coupling.
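    For reference, the central object discussed here, the entropy production functional defined as a log ratio of forward and backward system path probabilities, can be written in the standard notation of stochastic thermodynamics. The weak-coupling identification in the second line is the familiar special case and is included only to fix ideas; it does not reproduce the paper's strong-coupling derivation.

        % x(.) is a system path, \tilde{x}(.) its time reverse, and P_F, P_R the
        % forward and backward path probabilities.
        \Delta S_{\mathrm{tot}}[x(\cdot)] \;=\; \ln\frac{P_F[x(\cdot)]}{P_R[\tilde{x}(\cdot)]}
        % In the weak-coupling limit this reduces to the familiar decomposition into
        % the system entropy change and the heat Q delivered to a bath at inverse
        % temperature \beta:
        \Delta S_{\mathrm{tot}} \;=\; \Delta S_{\mathrm{sys}} + \beta\, Q .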

  9. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner’s generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  10. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
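    To make the contrast between moment computations and Monte Carlo concrete on the simplest possible example (not the SHS framework of the paper itself), the sketch below integrates the exact moment equations of a birth-death process and checks them against Gillespie simulations; the rates are arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)
        k, g = 10.0, 1.0      # birth rate and per-molecule degradation rate (arbitrary)
        t_end = 5.0

        # Exact moment ODEs for the birth-death process, integrated by forward Euler:
        #   d<n>/dt   = k - g<n>
        #   d<n^2>/dt = k(2<n> + 1) + g(<n> - 2<n^2>)
        dt, m1, m2 = 1e-3, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            dm1 = k - g * m1
            dm2 = k * (2 * m1 + 1) + g * (m1 - 2 * m2)
            m1, m2 = m1 + dm1 * dt, m2 + dm2 * dt
        print("moment ODEs :  mean = %.3f  var = %.3f" % (m1, m2 - m1**2))

        # Gillespie simulation of the same process, averaged over many runs.
        def run():
            t, n = 0.0, 0
            while True:
                rate = k + g * n
                t += rng.exponential(1.0 / rate)
                if t > t_end:
                    return n
                n += 1 if rng.random() < k / rate else -1

        samples = np.array([run() for _ in range(2000)])
        print("Monte Carlo :  mean = %.3f  var = %.3f" % (samples.mean(), samples.var()))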

  11. Stochastic reaction-diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
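    A stripped-down illustration of the crowding effect is sketched below: random walkers on a periodic cubic lattice with a fraction of blocked sites, moves into obstacles rejected, and the mean squared displacement compared with the free-diffusion value. Neither the stochastic simulation algorithm nor the Spatiocyte implementation used in the paper is reproduced, and the obstacle fraction and lattice size are arbitrary.

        import numpy as np

        rng = np.random.default_rng(5)
        L, phi = 64, 0.3          # lattice size and obstacle volume fraction (illustrative)
        n_walkers, n_steps = 500, 2000

        blocked = rng.random((L, L, L)) < phi
        moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])

        # Start every walker on a free site.
        free = np.argwhere(~blocked)
        start = free[rng.choice(len(free), size=n_walkers)]
        pos = start.copy()                 # unwrapped positions, for the MSD
        lattice_pos = start.copy()         # positions modulo L, for collision checks

        msd = np.zeros(n_steps)
        for t in range(n_steps):
            step = moves[rng.integers(0, 6, size=n_walkers)]
            trial = (lattice_pos + step) % L
            ok = ~blocked[trial[:, 0], trial[:, 1], trial[:, 2]]   # reject moves into obstacles
            lattice_pos[ok] = trial[ok]
            pos[ok] += step[ok]
            msd[t] = np.mean(np.sum((pos - start) ** 2, axis=1))

        print("MSD after %d steps: %.1f (vs %d for free diffusion)" % (n_steps, msd[-1], n_steps))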

  12. Valuation of Capabilities and System Architecture Options to Meet Affordability Requirement

    DTIC Science & Technology

    2014-04-30

    is an extension of the historic volatility and trend of the stock using Brownian motion. In finance, the Black-Scholes equation is used to value ... the underlying asset whose value is modeled as a stochastic process. In finance, the underlying asset is a tradeable stock and the stochastic process
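    The record above is only a fragment, but the construction it alludes to is standard: the underlying asset is modeled as geometric Brownian motion, and a European call can be valued either by Monte Carlo simulation of that process or by the closed-form Black-Scholes formula. The sketch below does both with illustrative market parameters.

        import math
        import numpy as np

        rng = np.random.default_rng(6)
        S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative market parameters

        # Monte Carlo: simulate the risk-neutral geometric Brownian motion
        #   S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z),  Z ~ N(0, 1)
        z = rng.standard_normal(200_000)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        mc_price = math.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

        # Closed-form Black-Scholes price for the same call option.
        def norm_cdf(x):
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        bs_price = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

        print("Monte Carlo: %.3f   Black-Scholes: %.3f" % (mc_price, bs_price))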

  13. On a Result for Finite Markov Chains

    ERIC Educational Resources Information Center

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
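    The note itself is truncated, but the property it concerns, whether (and in how many steps) one state of a finite Markov chain is accessible from another, is easy to check numerically: with M states, accessibility is decided by inspecting the positive entries of the first M powers of the transition matrix. The chain below is an arbitrary example.

        import numpy as np

        # Transition matrix of a small example chain (rows sum to 1).
        P = np.array([
            [0.5, 0.5, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.2, 0.8],
            [0.0, 0.0, 0.0, 1.0],   # absorbing state
        ])
        M = P.shape[0]

        # j is accessible from i iff (P^k)[i, j] > 0 for some 1 <= k <= M.
        reach = np.zeros_like(P, dtype=bool)
        Pk = np.eye(M)
        for k in range(1, M + 1):
            Pk = Pk @ P
            reach |= Pk > 0

        for i in range(M):
            print("states accessible from", i, ":", np.flatnonzero(reach[i]).tolist())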

  14. Stochastic resonance effects reveal the neural mechanisms of transcranial magnetic stimulation

    PubMed Central

    Schwarzkopf, Dietrich Samuel; Silvanto, Juha; Rees, Geraint

    2011-01-01

    Transcranial magnetic stimulation (TMS) is a popular method for studying causal relationships between neural activity and behavior. However, its mode of action remains controversial, and so far there is no framework to explain its wide range of facilitatory and inhibitory behavioral effects. While some theoretical accounts suggest that TMS suppresses neuronal processing, other competing accounts propose that the effects of TMS result from the addition of noise to neuronal processing. Here we exploited the stochastic resonance phenomenon to distinguish these theoretical accounts and determine how TMS affects neuronal processing. Specifically, we showed that online TMS can induce stochastic resonance in the human brain. At low intensity, TMS facilitated the detection of weak motion signals, but with higher TMS intensities and stronger motion signals we found only impairment in detection. These findings suggest that TMS acts by adding noise to neuronal processing, at least in an online TMS protocol. Importantly, such stochastic resonance effects may also explain why TMS parameters that under normal circumstances impair behavior can induce behavioral facilitation when the stimulated area is in an adapted or suppressed state. PMID:21368025

  15. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  16. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  17. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE PAGES

    Wang, Yong; Zhang, Guang J.

    2016-09-29

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m^2 in the standard CAM5 to -48.86 W/m^2, close to -47.16 W/m^2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  18. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yong; Zhang, Guang J.

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m^2 in the standard CAM5 to -48.86 W/m^2, close to -47.16 W/m^2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  19. Role of Demographic Dynamics and Conflict in the Population-Area Relationship for Human Languages

    PubMed Central

    Manrubia, Susanna C.; Axelsen, Jacob B.; Zanette, Damián H.

    2012-01-01

    Many patterns displayed by the distribution of human linguistic groups are similar to the ecological organization described for biological species. It remains a challenge to identify simple and meaningful processes that describe these patterns. The population size distribution of human linguistic groups, for example, is well fitted by a log-normal distribution that may arise from stochastic demographic processes. As we show in this contribution, the distribution of the area size of home ranges of those groups also agrees with a log-normal function. Further, size and area are significantly correlated: the number of speakers N and the area A spanned by linguistic groups follow an allometric relation of the form A ∝ N^γ, with an exponent γ varying across different world regions. The empirical evidence presented leads to the hypothesis that the distributions of N and A, and their mutual dependence, rely on demographic dynamics and on the result of conflicts over territory due to group growth. To substantiate this point, we introduce a two-variable stochastic multiplicative model whose analytical solution recovers the empirical observations. Applied to different world regions, the model reveals that the retreat in home range is sublinear with respect to the decrease in population size, and that the population-area exponent grows with the typical strength of conflicts. While the shape of the population size and area distributions, and their allometric relation, seem unavoidable outcomes of demography and inter-group contact, the precise value of γ could give insight into the cultural organization of those human groups over the last thousand years. PMID:22815726

  20. Finite-Size Scaling Analysis of Binary Stochastic Processes and Universality Classes of Information Cascade Phase Transition

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hisakado, Masato

    2015-05-01

    We propose a finite-size scaling analysis method for binary stochastic processes X(t) in {0,1} based on the second moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν∥ = 2.
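    A rough numerical counterpart is sketched below: a binary sequence is generated by a simplified stand-in for the mixture model (with an explicit mixing probability p, which the record does not specify), and the autocorrelation function and a second-moment correlation length are estimated from it. Both the generator and the particular second-moment definition of ξ are assumptions made for illustration, not the paper's exact setup.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate(n, r=10, q=0.5, p=0.3, digital=False):
            """Binary sequence: with probability p an independent Bernoulli(q) draw,
            otherwise a draw that depends on the fraction z of ones among the last r values."""
            x = list(rng.integers(0, 2, size=r))
            for _ in range(n):
                z = sum(x[-r:]) / r
                prob = q if rng.random() < p else (float(z > 0.5) if digital else z)
                x.append(int(rng.random() < prob))
            return np.array(x[r:])

        def correlation_length(x, t_max=200):
            x = x - x.mean()
            c = np.array([np.mean(x[:-t] * x[t:]) if t else np.mean(x * x)
                          for t in range(t_max)])
            c = c / c[0]
            # One common second-moment definition: xi^2 = sum(t^2 C(t)) / (2 sum(C(t))).
            t = np.arange(t_max)
            return np.sqrt(np.sum(t**2 * c) / (2.0 * np.sum(c)))

        x = simulate(200_000)
        print("estimated correlation length xi:", correlation_length(x))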

  1. Influence of stochastic geometric imperfections on the load-carrying behaviour of thin-walled structures using constrained random fields

    NASA Astrophysics Data System (ADS)

    Lauterbach, S.; Fina, M.; Wagner, W.

    2018-04-01

    Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields for e.g. flange-web-intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
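    A minimal one-dimensional illustration of the Karhunen-Loève construction used for such imperfection fields is given below: a zero-mean Gaussian random field with an exponential covariance and correlation length l_c is discretized on a line of nodes, the eigenpairs are obtained numerically from the covariance matrix, and the expansion is truncated at a fixed fraction of the variance. The constraint handling and the shell structures of the paper are not included, and all numbers are placeholders.

        import numpy as np

        rng = np.random.default_rng(8)

        n, length, l_c, sigma = 200, 1.0, 0.2, 1.0   # nodes, domain size, correlation length, std
        x = np.linspace(0.0, length, n)

        # Exponential covariance matrix C_ij = sigma^2 exp(-|x_i - x_j| / l_c).
        C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)

        # Karhunen-Loeve expansion: field = sum_k sqrt(lambda_k) * phi_k * xi_k, xi_k ~ N(0, 1).
        lam, phi = np.linalg.eigh(C)
        order = np.argsort(lam)[::-1]
        lam, phi = lam[order], phi[:, order]

        m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1   # modes for 95% of the variance
        xi = rng.standard_normal(m)
        field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)

        print("KL modes kept:", m)
        print("sample field std (target %.2f): %.2f" % (sigma, field.std()))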

  2. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time conditioning them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  3. Partition-free approach to open quantum systems in harmonic environments: An exact stochastic Liouville equation

    NASA Astrophysics Data System (ADS)

    McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.

    2017-03-01

    We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.

  4. Mesoscopic description of random walks on combs

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner

    2015-12-01

    Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.

  5. Scaling view by the Virtual Nature Systems

    NASA Astrophysics Data System (ADS)

    Klenov, Valeriy

    2010-05-01

    The Actual Nature Systems (ANS) of the Earth are continually subject to spatially and temporally varying governing influences from other systems (Meteorology and Geophysics). These influences impose their own spatial-temporal patterns on the Earth's nature systems, which in turn reshape the influences according to their own behavior and scales. All three systems belong to the class of Open Non-Equilibrium Nature Systems (ONES). The Geophysics and Meteorology systems both govern the ANS on the Earth's surface: they exert continual energetic pressure and impacts as well as direct extremes (earthquakes, storms, and others). The geodynamics of the ANS therefore reflects a mixture of influences from both governing systems, acting on their own scales and through the dynamics of their spatial-temporal structures, together with the intrinsic properties of the ANS as an ONES. Separating the influences of the external systems on the Earth systems has always been among the major tasks of geomorphology. The mixing of system scales and dynamics gives the memory of the Earth system specific properties; this memory has practical value for multi-purpose management, and knowledge of these properties is the key to studying the spatial-temporal geodynamics and trends of the Earth's nature systems. Separating the influences in time and space requires a special tool, the Virtual Nature Systems (VNS): animated computer doubles built for analyzing the geodynamics of the ANS. Experience with the VNS makes it possible to assess the influence of each external factor, and of both together, on the ANS, and is a source of knowledge about regional tectonic and climatic oscillations, trends, and threats. VNS studies of the spatial-temporal dynamics and structure of the stochastic regimes of the governing systems and processes lead to a stochastic geodynamics of environmental processes and to the formation of false trends and gaps in natural records. This 'wild dance' of 2D stochastic patterns, and their interactions with one another, generates the acting structures of river networks and river basins within the multi-layer, multi-scale, multi-driver structure of surface processes. It results in an Information Loss Law (ILL) for the observed memory of the VNS (and of the external drivers), which gradually cuts off and distorts their own past. This view of geodynamics emerged after many years of field measurements of thousands of terrace levels, hundreds of terrace ranks, and many terrace complexes in river basins of all scales, undertaken in order to recognize their deformation by climatic and tectonic spatial-temporal influences. The method of tracing terrace levels along valleys has long been used in geomorphology and geology, linking fragments of a level into 'cycles' and gradually correlating them by height above the riverbed. The realization that this is a logical mistake came, as an insight, while observing a valley from upstream: all the fragmentary levels downstream were clearly visible, with no possibility of correlating them 'by height' or 'by number'. Instead of linking fragments, the stochastic geodynamics of river valleys is explained by the property of the ONES (I. Prigogine et al., 1984) to generate oscillations. This was only a first view, but it later turned into the simple mechanics of the Information Loss Law in the geoinformatics of nature systems (Klenov, 1980, et al.). Information loss distorts and destroys natural records (the sources of data on past exogenous and endogenous rivers). This simple relation was obtained from multiple measurements of terrace ranks and of other natural records.
    It explains the origin of false trends in natural records: the stochastic dynamics of the ONES destroys most of their own history and prevents the restoration of natural records as a memory of the past. Only a small interval between the past and the future remains undisturbed, appearing as a peak between two nonlinear losses. The history of the past (of the ANS and of the external drivers) is destroyed by the ANS itself, and the future remains undetermined because the 2D data on future external influences are unknown. Nevertheless, the effect permits reliable outstripping (early-warning) monitoring of impending disasters and other processes with satisfactory accuracy, as proved by direct validation against observed records. The conclusions are as follows: the ILL is the mechanism by which the past of nature is dissipated and its future left indeterminate, while moving back along the phase trajectory of the VNS changes the view of natural records and offers a chance to restore the history of the ANS and of its external drivers.

  6. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium

    NASA Astrophysics Data System (ADS)

    Horowitz, Jordan M.

    2015-07-01

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.

  7. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium.

    PubMed

    Horowitz, Jordan M

    2015-07-28

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.

  8. Asymmetric and Stochastic Behavior in Magnetic Vortices Studied by Soft X-ray Microscopy

    NASA Astrophysics Data System (ADS)

    Im, Mi-Young

    Asymmetry and stochasticity in spin processes are not only long-standing fundamental issues but also highly relevant to technological applications of nanomagnetic structures in memory and storage nanodevices. These nontrivial phenomena have been studied by direct imaging of spin structures in magnetic vortices utilizing magnetic transmission soft x-ray microscopy (BL6.1.2 at the ALS). Magnetic vortices have attracted enormous scientific interest due to their fascinating spin structures, consisting of a circularity rotating clockwise (c = +1) or counter-clockwise (c = -1) and a polarity pointing either up (p = +1) or down (p = -1). We observed a symmetry breaking in the formation process of vortex structures in circular permalloy (Ni80Fe20) disks. The generation rates of the two vortex groups with the signatures cp = +1 and cp = -1 are completely asymmetric. The asymmetric nature was interpreted as being triggered by an "intrinsic" Dzyaloshinskii-Moriya interaction (DMI), arising from the spin-orbit coupling due to the lack of inversion symmetry near the disk surface, and by "extrinsic" factors such as roughness and defects. We also investigated the stochastic behavior of vortex creation in arrays of asymmetric disks. The stochasticity was found to be very sensitive to the geometry of the disk arrays, particularly the interdisk distance. The experimentally observed phenomenon could not be explained by the thermal fluctuation effect, which has been considered a main reason for stochastic behavior in spin processes. We demonstrated for the first time that the ultrafast dynamics at the early stage of vortex creation, which has the character of classical chaos, significantly affects the stochastic nature observed at the steady state in asymmetric disks. This work provides a new perspective on dynamics as a critical factor contributing to the stochasticity in spin processes, and also points to the possibility of controlling the intrinsic stochastic nature by optimizing the design of asymmetric disk arrays. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the Leading Foreign Research Institute Recruitment Program through the NRF.

  9. Stochastic scheduling on a repairable manufacturing system

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Jinhua

    1995-08-01

    In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighed sum of the completion times; (2) the weighed number of late jobs having constant due dates; (3) the weighted number of late jobs having random due dates exponentially distributed, which generalize some previous results.

  10. Conference on Stochastic Processes and Their Applications (12th) held at Ithaca, New York on 11-15 Jul 83,

    DTIC Science & Technology

    1983-07-15

    AD-A136 626: Conference on Stochastic Processes and Their Applications (12th), July 11-15, 1983, Ithaca, New York. Cornell Univ., Ithaca, NY, 15 Jul 83. OCR fragments of the program include: "... oscillator phase instability," 2:53 - 3:15 p.m.; M.N. Gopalan, Indian Institute of Technology, Bombay, "Cost benefit analysis of systems subject to inspection ..."; W. Kliemann, Univ. Bremen, Fed. Rep. Germany, "Controllability of stochastic systems"; 8:00 - 10:00 p.m., reception, Johnson Art Museum.

  11. Variational processes and stochastic versions of mechanics

    NASA Astrophysics Data System (ADS)

    Zambrini, J. C.

    1986-09-01

    The dynamical structure of any reasonable stochastic version of classical mechanics is investigated, including the version created by Nelson [E. Nelson, Quantum Fluctuations (Princeton U.P., Princeton, NJ, 1985); Phys. Rev. 150, 1079 (1966)] for the description of quantum phenomena. Two different theories result from this common structure. One of them is the imaginary time version of Nelson's theory, whose existence was unknown, and yields a radically new probabilistic interpretation of the heat equation. The existence and uniqueness of all the involved stochastic processes are shown under conditions suggested by the variational approach of Yasue [K. Yasue, J. Math. Phys. 22, 1010 (1981)].

  12. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  13. Beyond the spectral theorem: Spectrally decomposing arbitrary functions of nondiagonalizable operators

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-06-01

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the familiar linear operator techniques that one would then hope to use often fail since the operators cannot be diagonalized. The curse of nondiagonalizability also plays an important role even in finite-dimensional linear operators, leading to analytical impediments that occur across many scientific domains. We show how to circumvent it via two tracks. First, using the well-known holomorphic functional calculus, we develop new practical results about spectral projection operators and the relationship between left and right generalized eigenvectors. Second, we generalize the holomorphic calculus to a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. This simultaneously simplifies and generalizes functional calculus so that it is readily applicable to analyzing complex physical systems. Together, these results extend the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics arise, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator, highlighting the special role of the zero eigenvalue. Furthermore, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus and we give a new general method to construct it. We provide new formulae for constructing spectral projection operators and delineate the relations among projection operators, eigenvectors, and left and right generalized eigenvectors. By way of illustrating its application, we explore several, rather distinct examples. First, we analyze stochastic transition operators in discrete and continuous time. Second, we show that nondiagonalizability can be a robust feature of a stochastic process, induced even by simple counting. As a result, we directly derive distributions of the time-dependent Poisson process and point out that nondiagonalizability is intrinsic to it and the broad class of hidden semi-Markov processes. Third, we show that the Drazin inverse arises naturally in stochastic thermodynamics and that applying the meromorphic functional calculus provides closed-form solutions for the dynamics of key thermodynamic observables. Finally, we draw connections to the Ruelle-Frobenius-Perron and Koopman operators for chaotic dynamical systems and propose how to extract eigenvalues from a time-series.

  14. A Simple Mechanism for Cooperation in the Well-Mixed Prisoner's Dilemma Game

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2008-11-01

    I show that the addition of Gaussian noise to the payoffs is able to stabilize cooperation in well-mixed populations, where individuals play the prisoner's dilemma game. The impact of stochasticity on the evolutionary dynamics can be expressed deterministically via a simple small-noise expansion of multiplicative noisy terms. In particular, cooperation emerges as a stable noise-induced steady state in the replicator dynamics. Due to the generality of the employed theoretical framework, presented results should prove valuable in various scientific disciplines, ranging from economy to ecology.
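    The sketch below sets up the model described here, a well-mixed prisoner's dilemma whose replicator dynamics is driven by Gaussian noise added to the payoffs, as a simple Euler-Maruyama simulation. The payoff matrix, noise strength, and step size are arbitrary, and the paper's small-noise expansion and its noise-induced cooperative steady state are not reproduced; with generic parameters the simulated population may still converge to defection, so this illustrates the setup rather than the result.

        import numpy as np

        rng = np.random.default_rng(9)

        # Prisoner's dilemma payoffs (row player): T > R > P > S.
        R, S, T, P = 3.0, 0.0, 5.0, 1.0
        sigma, dt, n_steps = 1.5, 0.01, 200_000   # noise strength and step size (arbitrary)

        x = 0.5              # fraction of cooperators in the well-mixed population
        history = np.empty(n_steps)
        for i in range(n_steps):
            f_c = R * x + S * (1 - x)             # mean payoff of a cooperator
            f_d = T * x + P * (1 - x)             # mean payoff of a defector
            # Euler-Maruyama step of the replicator equation with Gaussian payoff noise.
            x += x * (1 - x) * ((f_c - f_d) * dt + sigma * np.sqrt(dt) * rng.normal())
            x = min(max(x, 0.0), 1.0)
            history[i] = x

        print("time-averaged cooperator fraction:", history[n_steps // 2:].mean())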

  15. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

    The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of the numerical simulations and the closed-form expressions obtained using the perturbation approach are compared. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed a good agreement between the two methods over a wide range of log k variability with three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.

  16. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.

  17. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  18. Stochastic resonant damping in a noisy monostable system: theory and experiment.

    PubMed

    Volpe, Giovanni; Perrone, Sandro; Rubi, J Miguel; Petrov, Dmitri

    2008-05-01

    Usually, in the presence of background noise, an increased effort put into controlling a system stabilizes its behavior. It is rarely considered that increased control of the system can lead to a looser response and, therefore, to poorer performance. Strikingly, there are many systems that show this counterintuitive behavior; examples can be drawn from physical, biological, and social systems. Until now no simple and general mechanism underlying such behaviors has been identified. Here we show that such a mechanism, named stochastic resonant damping, can be provided by the interplay between the background noise and the control exerted on the system. We experimentally verify our prediction on a physical model system based on a colloidal particle held in an oscillating optical potential. Our result adds a tool for the study of intrinsically noisy phenomena, joining the many constructive facets of noise identified in past decades, for example stochastic resonance, noise-induced activation, and Brownian ratchets.

  19. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
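    The aerospace objective functions mentioned in this record are complex CFD/MDO codes, but the basic simulated annealing loop they wrap is generic: random perturbation of the design variables, Metropolis acceptance, and a cooling schedule. The sketch below runs that loop on a multimodal test function as a stand-in for a rough design objective; it is not the modified SA variant developed in the project.

        import math
        import random

        random.seed(0)

        def objective(x):
            # Multimodal stand-in for a rough design objective (Rastrigin-like in 2D).
            return sum(xi**2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

        def simulated_annealing(x, temp=50.0, cooling=0.995, step=0.5, n_iter=20_000):
            f = objective(x)
            best, best_f = list(x), f
            for _ in range(n_iter):
                cand = [xi + random.gauss(0.0, step) for xi in x]
                f_cand = objective(cand)
                # Metropolis criterion: always accept improvements, sometimes accept worse.
                if f_cand < f or random.random() < math.exp(-(f_cand - f) / temp):
                    x, f = cand, f_cand
                    if f < best_f:
                        best, best_f = list(x), f
                temp *= cooling     # geometric cooling schedule
            return best, best_f

        x0 = [random.uniform(-5.0, 5.0) for _ in range(2)]
        best, best_f = simulated_annealing(x0)
        print("best design found:", [round(v, 3) for v in best], "objective:", round(best_f, 4))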

  20. Sensory Optimization by Stochastic Tuning

    PubMed Central

    Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees

    2013-01-01

    Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system’s preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit, and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PMID:24219849

  1. Active Brownian Particles. From Individual to Collective Stochastic Dynamics

    NASA Astrophysics Data System (ADS)

    Romanczuk, P.; Bär, M.; Ebeling, W.; Lindner, B.; Schimansky-Geier, L.

    2012-03-01

    We review theoretical models of individual motility as well as collective dynamics and pattern formation of active particles. We focus on simple models of active dynamics, with a particular emphasis on the nonlinear and stochastic dynamics of such self-propelled entities in the framework of statistical mechanics. Examples of such active units in complex physico-chemical and biological systems are chemically powered nano-rods, localized patterns in reaction-diffusion systems, motile cells, and macroscopic animals. Based on the description of the individual motion of point-like active particles by stochastic differential equations, we discuss different velocity-dependent friction functions and the impact of various types of fluctuations, and calculate characteristic observables such as stationary velocity distributions or diffusion coefficients. Finally, we consider not only the free and confined individual active dynamics but also different types of interaction between active particles. The resulting collective dynamical behavior of large assemblies and aggregates of active units is discussed, and an overview of some recent results on spatiotemporal pattern formation in such systems is given.
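
    As a minimal numerical companion to the models reviewed here, the sketch below integrates a one-dimensional self-propelled particle with a Rayleigh-Helmholtz-type velocity-dependent friction by the Euler-Maruyama scheme; all parameter values are arbitrary assumptions chosen only for illustration.

        import numpy as np

        # Euler-Maruyama integration of an active particle velocity in 1-D with a
        # velocity-dependent friction of Rayleigh-Helmholtz type: dv = (alpha - beta*v^2)*v dt + sqrt(2D) dW.
        rng = np.random.default_rng(0)
        alpha, beta, D = 1.0, 1.0, 0.1      # assumed parameters, not from the review
        dt, n_steps = 1e-3, 100_000

        v = np.empty(n_steps)
        v[0] = 0.0
        for k in range(n_steps - 1):
            drift = (alpha - beta * v[k] ** 2) * v[k]
            v[k + 1] = v[k] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

        # The stationary speed distribution concentrates near |v| = sqrt(alpha/beta).
        print("mean |v| ~", np.abs(v[n_steps // 2:]).mean(), "expected ~", np.sqrt(alpha / beta))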

  2. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    PubMed

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is a major topic in ecology, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To better understand the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes, related to soil pH, conductivity, and organic carbon content, in structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw.

  3. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China

    PubMed Central

    Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C.; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is a major topic in ecology, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To better understand the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes, related to soil pH, conductivity, and organic carbon content, in structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw. PMID:26699734

  4. General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems

    DTIC Science & Technology

    1988-01-01

    P. J. McLane, "Optimal Stochastic Control of Linear Systems with State- and Control-Dependent Disturbances," IEEE Trans. Auto. Contr., Vol. 16, No. ... Vol. 45, No. 1, pp. 359-362, 1987. [9] R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," IEEE Trans. Syst. ... J. of Math. Anal. Appl., Vol. 47, pp. 156-161, 1974. [14] E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," IEEE Trans. ...

  5. Effective stochastic generator with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Khamehchi, Masoumeh; Jafarpour, Farhad H.

    2017-11-01

    It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions, as in an exclusion process, or there can be an arbitrary number of particles at a single site, as in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.

  6. Study on Stationarity of Random Load Spectrum Based on the Special Road

    NASA Astrophysics Data System (ADS)

    Yan, Huawen; Zhang, Weigong; Wang, Dong

    2017-09-01

    Among methods for assessing the quality of special roads, one uses a wheel force sensor; its essence is to collect the load spectrum of the vehicle as a measure of road quality. By the definition of a stochastic process, the load spectrum is itself a stochastic process. However, different classes of random process call for very different analysis methods and have different ranges of application, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the numerical characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for the follow-up modeling and feature extraction for the special road.

  7. A Stochastic Detection and Retrieval Model for the Study of Metacognition

    ERIC Educational Resources Information Center

    Jang, Yoonhee; Wallsten, Thomas S.; Huber, David E.

    2012-01-01

    We present a signal detection-like model termed the stochastic detection and retrieval model (SDRM) for use in studying metacognition. Focusing on paradigms that relate retrieval (e.g., recall or recognition) and confidence judgments, the SDRM measures (1) variance in the retrieval process, (2) variance in the confidence process, (3) the extent to…

  8. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.

  9. Stochastic Multiscale Analysis and Design of Engine Disks

    DTIC Science & Technology

    2010-07-28

    Shown recently to fail when used with data-driven non-linear stochastic input models (KPCA, IsoMap, etc.). Need for scalable exascale computing algorithms. Materials Process Design and Control Laboratory, Cornell University.

  10. Transcriptional dynamics with time-dependent reaction rates

    NASA Astrophysics Data System (ADS)

    Nandi, Shubhendu; Ghosh, Anandamohan

    2015-02-01

    Transcription is the first step in the process of gene regulation that controls cell response to varying environmental conditions. Transcription is a stochastic process, involving synthesis and degradation of mRNAs, that can be modeled as a birth-death process. We consider a generic stochastic model where the fluctuating environment is encoded in the time-dependent reaction rates. We obtain an exact analytical expression for the mRNA probability distribution and are able to analyze the response for arbitrary time-dependent protocols. Our analytical results and stochastic simulations confirm that the transcriptional machinery primarily acts as a low-pass filter. We also show that, depending on the system parameters, the mRNA levels in a cell population can show synchronous or asynchronous fluctuations and can deviate from Poisson statistics.
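
    A hedged sketch of the kind of model described above: a birth-death process for mRNA copy number with a time-dependent synthesis rate, simulated exactly by thinning. The sinusoidal rate protocol and all parameter values are assumptions for illustration; the Fano factor near 1 reflects the Poissonian character noted in the abstract.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative rate protocol (assumed, not from the paper): sinusoidally modulated
        # synthesis rate k(t) and constant degradation rate gamma.
        k0, a, omega, gamma = 5.0, 0.5, 0.5, 1.0
        k = lambda t: k0 * (1.0 + a * np.sin(omega * t))
        k_max = k0 * (1.0 + a)  # global bound on the synthesis rate, used for thinning

        def ssa_thinning(t_end, n0=0):
            """Exact SSA for a birth-death process with time-dependent birth rate (thinning)."""
            t, n = 0.0, n0
            while True:
                bound = k_max + gamma * n          # valid until the next event (n is constant)
                t += rng.exponential(1.0 / bound)  # candidate event time
                if t >= t_end:
                    return n
                total = k(t) + gamma * n
                if rng.random() < total / bound:   # accept candidate with probability total/bound
                    n += 1 if rng.random() < k(t) / total else -1

        samples = np.array([ssa_thinning(t_end=20.0) for _ in range(2000)])
        # For a Poissonian mRNA distribution the Fano factor (variance/mean) should be close to 1.
        print("mean", samples.mean(), "Fano factor", samples.var() / samples.mean())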

  11. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis

    NASA Astrophysics Data System (ADS)

    Jia, Ningning; Lam, Edmund Y.

    2010-04-01

    Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
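
    The following sketch illustrates the general idea of treating focus as a random variable and descending a noisy gradient of the expected cost. The forward model, cost, and parameters are toy assumptions, not the authors' lithography model.

        import numpy as np

        rng = np.random.default_rng(2)

        # Stand-in forward model: the "printed image" depends on the mask m and a random
        # focus value f. Everything here is a toy assumption used only to illustrate SGD.
        target = np.array([0.0, 1.0, 1.0, 0.0])

        def cost_and_grad(mask, focus):
            blur = 1.0 + 0.1 * focus ** 2           # toy focus-dependent blur factor
            printed = mask / blur
            residual = printed - target
            return np.sum(residual ** 2), 2.0 * residual / blur  # analytic gradient of the toy cost

        mask = np.zeros_like(target)
        lr = 0.1
        for step in range(500):
            focus = rng.normal(0.0, 1.0)            # focus variation treated as a stochastic variable
            _, g = cost_and_grad(mask, focus)
            mask -= lr * g                          # stochastic gradient descent update

        print("trained mask:", np.round(mask, 3))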

  12. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivative-free filter such as the unscented Kalman filter or the divided difference filter, the filtering of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivative-free filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.
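
    For reference, a first-order Gauss-Markov process of the kind used here as a baseline stochastic model can be simulated in a few lines; the correlation time and stationary standard deviation below are assumed, illustrative values.

        import numpy as np

        rng = np.random.default_rng(3)

        # Discrete-time first-order Gauss-Markov (GM) process, often used to model
        # slowly varying inertial sensor biases. tau and sigma are assumed values.
        tau, sigma, dt, n = 100.0, 0.05, 1.0, 5000
        phi = np.exp(-dt / tau)                   # state transition over one sample
        q = sigma ** 2 * (1.0 - phi ** 2)         # driving-noise variance preserving stationarity

        x = np.empty(n)
        x[0] = 0.0
        for k in range(n - 1):
            x[k + 1] = phi * x[k] + np.sqrt(q) * rng.standard_normal()

        print("sample std:", x.std(), "(stationary std is sigma =", sigma, ")")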

  13. Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes

    NASA Astrophysics Data System (ADS)

    González Arenas, Zochil; Barci, Daniel G.

    2012-12-01

    Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.

  14. How does the past of a soccer match influence its future? Concepts and statistical analysis.

    PubMed

    Heuer, Andreas; Rubner, Oliver

    2012-01-01

    Scoring goals in a soccer match can be interpreted as a stochastic process. In the simplest description of a soccer match, one assumes that scoring goals follows from independent rate processes of both teams. This would imply simple Poissonian and Markovian behavior. Deviations from this behavior would imply that the previous course of the match has an impact on the present match behavior. Here a general framework for the identification of deviations from this behavior is presented. For this endeavor it is essential to formulate an a priori estimate of the expected number of goals per team in a specific match. This can be done based on our previous work on the estimation of team strengths. Furthermore, the well-known general increase of the number of goals in the course of a soccer match has to be removed by appropriate normalization. In general, three different types of deviation from a simple rate process can exist. First, the goal rate may depend on the exact time of the previous goals. Second, it may be influenced by the time passed since the previous goal and, third, it may reflect the present score. We show that the Poissonian scenario is fulfilled quite well for the German Bundesliga. However, a detailed analysis reveals significant deviations for the second and third aspects. Dramatic effects are observed if the away team leads by one or two goals in the final part of the match. This analysis allows one to identify generic features of soccer matches and to learn about the hidden complexities behind scoring goals. Among other things, it identifies the reason why the number of draws is larger than statistically expected.
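
    Under the simple independent-rate (Poissonian) picture described above, match outcomes can be simulated directly; the team scoring rates below are hypothetical values, not Bundesliga estimates.

        import numpy as np

        rng = np.random.default_rng(4)

        # Baseline model: goals of the home and away team are independent Poisson counts
        # whose means encode the a priori team strengths (values here are hypothetical).
        lam_home, lam_away, n_matches = 1.6, 1.1, 100_000

        home = rng.poisson(lam_home, n_matches)
        away = rng.poisson(lam_away, n_matches)

        print("P(home win) =", np.mean(home > away))
        print("P(draw)     =", np.mean(home == away))
        print("P(away win) =", np.mean(home < away))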

  15. How Does the Past of a Soccer Match Influence Its Future? Concepts and Statistical Analysis

    PubMed Central

    Heuer, Andreas; Rubner, Oliver

    2012-01-01

    Scoring goals in a soccer match can be interpreted as a stochastic process. In the simplest description of a soccer match, one assumes that scoring goals follows from independent rate processes of both teams. This would imply simple Poissonian and Markovian behavior. Deviations from this behavior would imply that the previous course of the match has an impact on the present match behavior. Here a general framework for the identification of deviations from this behavior is presented. For this endeavor it is essential to formulate an a priori estimate of the expected number of goals per team in a specific match. This can be done based on our previous work on the estimation of team strengths. Furthermore, the well-known general increase of the number of goals in the course of a soccer match has to be removed by appropriate normalization. In general, three different types of deviation from a simple rate process can exist. First, the goal rate may depend on the exact time of the previous goals. Second, it may be influenced by the time passed since the previous goal and, third, it may reflect the present score. We show that the Poissonian scenario is fulfilled quite well for the German Bundesliga. However, a detailed analysis reveals significant deviations for the second and third aspects. Dramatic effects are observed if the away team leads by one or two goals in the final part of the match. This analysis allows one to identify generic features of soccer matches and to learn about the hidden complexities behind scoring goals. Among other things, it identifies the reason why the number of draws is larger than statistically expected. PMID:23226200

  16. SMSIM--Fortran programs for simulating ground motions from earthquakes: Version 2.0.--a revision of OFR 96-80-A

    USGS Publications Warehouse

    Boore, David M.

    2000-01-01

    A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
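
    The core idea of the stochastic method, a deterministic amplitude spectrum combined with a random phase spectrum, can be illustrated with a toy spectrum; the spectral shape and taper below are arbitrary stand-ins, not the SMSIM source, path, and site models.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy illustration of the stochastic method: impose a target Fourier amplitude
        # spectrum, randomize the phases, and invert to the time domain. The spectral
        # shape below is an arbitrary band-limited stand-in, not an SMSIM model.
        n, dt = 4096, 0.01
        freqs = np.fft.rfftfreq(n, d=dt)
        amplitude = freqs / (1.0 + (freqs / 5.0) ** 4)        # crude band-pass shaped spectrum
        phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
        spectrum = amplitude * np.exp(1j * phases)
        spectrum[0] = 0.0                                     # enforce zero mean

        accel = np.fft.irfft(spectrum, n)
        # Taper with a simple window so the motion has a finite duration, as in the method.
        accel *= np.hanning(n)
        print("peak simulated acceleration (arbitrary units):", np.abs(accel).max())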

  17. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  18. Control strategies for a stochastic model of host-parasite interaction in a seasonal environment.

    PubMed

    Gómez-Corral, A; López García, M

    2014-08-07

    We examine a nonlinear stochastic model for the parasite load of a single host over a predetermined time interval. We use nonhomogeneous Poisson processes to model the acquisition of parasites, the parasite-induced host mortality, the natural (non-parasite-induced) host mortality, and the reproduction and death of parasites within the host. Algebraic results are first obtained on the age-dependent distribution of the number of parasites infesting the host at an arbitrary time t. The interest is in control strategies based on isolation of the host and the use of an anthelmintic at a certain intervention instant t0. This means that the host is free living in a seasonal environment and is transferred to an uninfected area at age t0. In the uninfected area, the host does not acquire new parasites, undergoes a treatment to decrease the parasite load, and its natural and parasite-induced mortality are altered. For a suitable selection of t0, we present two control criteria that appropriately balance effectiveness and cost of intervention. Our approach is based on simple probabilistic principles, and it allows us to examine seasonal fluctuations of gastrointestinal nematode burden in growing lambs.

  19. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems.

    PubMed

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-21

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
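
    For orientation, the fully discrete end of the modelling spectrum discussed here is the exact stochastic simulation of the CME; the sketch below runs a minimal Gillespie simulation of a self-repressing gene product, with illustrative rates that are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        # Minimal Gillespie SSA for a self-repressing gene product P:
        #   production with propensity k / (1 + P/K)   (negative self-regulation)
        #   degradation with propensity gamma * P
        # Rates are illustrative assumptions, not taken from the paper.
        k, K, gamma = 20.0, 10.0, 1.0

        def gillespie(t_end, p0=0):
            t, p = 0.0, p0
            while True:
                a_prod = k / (1.0 + p / K)
                a_deg = gamma * p
                a_tot = a_prod + a_deg
                t += rng.exponential(1.0 / a_tot)
                if t >= t_end:
                    return p
                p += 1 if rng.random() < a_prod / a_tot else -1

        samples = np.array([gillespie(50.0) for _ in range(1000)])
        print("stationary mean copy number ~", samples.mean())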

  20. Evolution of a plastic quantitative trait in an age-structured population in a fluctuating environment.

    PubMed

    Engen, Steinar; Lande, Russell; Saether, Bernt-Erik

    2011-10-01

    We analyze weak fluctuating selection on a quantitative character in an age-structured population not subject to density regulation. We assume that early in the first year of life before selection, during a critical state of development, environments exert a plastic effect on the phenotype, which remains constant throughout the life of an individual. Age-specific selection on the character affects survival and fecundity, which have intermediate optima subject to temporal environmental fluctuations with directional selection in some age classes as special cases. Weighting individuals by their reproductive value, as suggested by Fisher, we show that the expected response per year in the weighted mean character has the same form as for models with no age structure. Environmental stochasticity generates stochastic fluctuations in the weighted mean character following a first-order autoregressive model with a temporally autocorrelated noise term and stationary variance depending on the amount of phenotypic plasticity. The parameters of the process are simple weighted averages of parameters used to describe age-specific survival and fecundity. The "age-specific selective weights" are related to the stable distribution of reproductive values among age classes. This allows partitioning of the change in the weighted mean character into age-specific components.

  1. A computational theory for the classification of natural biosonar targets based on a spike code.

    PubMed

    Müller, Rolf

    2003-08-01

    A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.

  2. Characterizing time series via complexity-entropy curves

    NASA Astrophysics Data System (ADS)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q entropy) and after considering the proper generalization of the statistical complexity (q complexity), we build up a parametric curve (the q -complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.

  3. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems

    NASA Astrophysics Data System (ADS)

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-01

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also check in how far the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. Especially, we reveal the cause of error in the case of small volume approximations.

  4. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  5. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.

  6. Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.

    PubMed

    Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A

    2007-12-01

    By recourse to appropriate information theory quantifiers (normalized Shannon entropy and the Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear gaps in the quantifiers are found in the transition between the continuous processes and their associated noises.
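
    The normalized permutation (Bandt-Pompe) entropy used as one of the quantifiers can be computed as in the generic sketch below; the embedding dimension and test series are illustrative choices.

        import itertools
        import math

        import numpy as np

        def permutation_entropy(x, d=4):
            """Normalized Bandt-Pompe (permutation) entropy of series x with embedding dimension d."""
            patterns = {p: 0 for p in itertools.permutations(range(d))}
            for i in range(len(x) - d + 1):
                # Ordinal pattern: the ranking of the d consecutive values.
                patterns[tuple(np.argsort(x[i:i + d]))] += 1
            counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
            probs = counts / counts.sum()
            return -np.sum(probs * np.log(probs)) / math.log(math.factorial(d))

        rng = np.random.default_rng(7)
        white = rng.standard_normal(10_000)
        walk = np.cumsum(white)              # a strongly correlated (non-stationary) process
        print("white noise:", permutation_entropy(white))   # close to 1
        print("random walk:", permutation_entropy(walk))    # lower than for white noise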

  7. Laplace transform analysis of a multiplicative asset transfer model

    NASA Astrophysics Data System (ADS)

    Sokolov, Andrey; Melatos, Andrew; Kieu, Tien

    2010-07-01

    We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver's wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f ∈ (0, 1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
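
    For comparison with the Laplace transform treatment, the agent-based version of the transfer rule (a randomly chosen giver hands a fixed fraction f of its wealth to a randomly chosen receiver) is easy to simulate; the sketch below is illustrative, with an assumed f and population size.

        import numpy as np

        rng = np.random.default_rng(8)

        # Agent-based version of the multiplicative asset transfer rule: at every step a
        # randomly chosen giver hands a fraction f of its wealth to a randomly chosen receiver.
        n_agents, f, n_steps = 1000, 0.25, 200_000
        wealth = np.ones(n_agents)

        for _ in range(n_steps):
            giver, receiver = rng.choice(n_agents, size=2, replace=False)
            amount = f * wealth[giver]
            wealth[giver] -= amount
            wealth[receiver] += amount

        # Gini coefficient as a simple inequality measure of the steady-state distribution.
        sorted_w = np.sort(wealth)
        cum = np.cumsum(sorted_w)
        gini = 1.0 - 2.0 * np.sum(cum) / (n_agents * cum[-1]) + 1.0 / n_agents
        print("Gini coefficient:", round(gini, 3))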

  8. Complementary mode analyses between sub- and superdiffusion

    NASA Astrophysics Data System (ADS)

    Saito, Takuya; Sakaue, Takahiro

    2017-04-01

    Several subdiffusive stochastic processes in nature, e.g., the motion of a tagged monomer in polymers, the height fluctuation of interfaces, particle dynamics in single-file diffusion, etc., can be described rigorously or approximately by the superposition of various modes whose relaxation times are broadly distributed. In this paper, we propose a mode analysis generating superdiffusion, which is paired with or complementary to subdiffusion. The key point in our discussion lies in the identification of a pair of conjugated variables, which undergo sub- and superdiffusion, respectively. We provide a simple interpretation for the sub- and superdiffusion duality for these variables using the language of polymer physics. The analysis also suggests the usefulness of looking at the force fluctuation in experiments, where a polymer is driven by a constant velocity.

  9. Zero-crossing statistics for non-Markovian time series.

    PubMed

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and the time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula, which gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
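
    Rice's classical result can be checked numerically. The sketch below builds an approximately Gaussian stationary process from many random-phase sinusoids, counts its zero crossings, and compares the rate with the spectral-moment form of Rice's formula; the frequency band and amplitudes are assumed.

        import numpy as np

        rng = np.random.default_rng(9)

        # Build an (approximately) Gaussian stationary process as a sum of many random-phase
        # sinusoids, count its zero crossings, and compare with Rice's formula
        #   rate = (1/pi) * sqrt( sum(w^2 a^2) / sum(a^2) ).
        n_modes = 400
        omegas = rng.uniform(0.5, 5.0, n_modes)        # angular frequencies (assumed band)
        amps = rng.uniform(0.5, 1.0, n_modes)
        phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)

        t = np.arange(0.0, 1000.0, 0.01)
        x = np.zeros_like(t)
        for a, w, p in zip(amps, omegas, phases):
            x += a * np.cos(w * t + p)

        counted_rate = np.sum(np.diff(np.sign(x)) != 0) / (t[-1] - t[0])
        rice_rate = np.sqrt(np.sum(omegas ** 2 * amps ** 2) / np.sum(amps ** 2)) / np.pi
        print("counted:", round(counted_rate, 3), " Rice formula:", round(rice_rate, 3))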

  10. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

    We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches that use point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much simpler than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.

  11. Theory of remote entanglement via quantum-limited phase-preserving amplification

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Zalys-Geller, Evan; Hatridge, Michael; Leghtas, Zaki; Devoret, Michel H.; Girvin, S. M.

    2016-06-01

    We show that a quantum-limited phase-preserving amplifier can act as a which-path information eraser when followed by heterodyne detection. This "beam splitter with gain" implements a continuous joint measurement on the signal sources. As an application, we propose heralded concurrent remote entanglement generation between two qubits coupled dispersively to separate cavities. Dissimilar qubit-cavity pairs can be made indistinguishable by simple engineering of the cavity driving fields providing further experimental flexibility and the prospect for scalability. Additionally, we find an analytic solution for the stochastic master equation, a quantum filter, yielding a thorough physical understanding of the nonlinear measurement process leading to an entangled state of the qubits. We determine the concurrence of the entangled states and analyze its dependence on losses and measurement inefficiencies.

  12. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
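
    The setup can be reproduced by Monte Carlo: draw a uniform starting time for the counting window, integrate the decaying intensity over the window, and draw a Poisson count with that mean. In the sketch below the pulse parameters and the range of the uniform starting time are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(10)

        # Monte Carlo version of the setup: a pulse with exponentially decaying intensity
        # I(t) = I0 * exp(-t / tau_d), a counting window of length T whose start time t0 is
        # uniformly distributed (here over [0, 3*tau_d], an assumption for the illustration),
        # and Poisson photon statistics given the integrated intensity.
        I0, tau_d, T, n_trials = 10.0, 1.0, 0.5, 200_000

        t0 = rng.uniform(0.0, 3.0 * tau_d, n_trials)
        mean_counts = I0 * tau_d * (np.exp(-t0 / tau_d) - np.exp(-(t0 + T) / tau_d))
        counts = rng.poisson(mean_counts)

        print("count mean:", counts.mean(), " count variance:", counts.var())
        # The variance exceeds the mean because the Poisson mean is itself random
        # (a doubly stochastic, or Cox, process).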

  13. Zero-crossing statistics for non-Markovian time series

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and the time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula, which gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.

  14. The role of stochastic storms on hillslope runoff generation and connectivity in a dryland basin

    NASA Astrophysics Data System (ADS)

    Michaelides, K.; Singer, M. B.; Mudd, S. M.

    2016-12-01

    Despite low annual rainfall, dryland basins can generate significant surface runoff during certain rainstorms, which can cause flash flooding and high rates of erosion. However, it remains challenging to anticipate the nature and frequency of runoff generation in hydrological systems which are driven by spatially and temporally stochastic rainstorms. In particular, the stochasticity of rainfall presents challenges to simulating the hydrological response of dryland basins and understanding flow connectivity from hillslopes to the channel. Here we simulate hillslope runoff generation using rainfall characteristics produced by a simple stochastic rainfall generator, which is based on a rich rainfall dataset from the Walnut Gulch Experimental Watershed (WGEW) in Arizona, USA. We assess hillslope runoff generation using the hydrological model, COUP2D, driven by a subset of characteristic output from multiple ensembles of decadal monsoonal rainfall from the stochastic rainfall generator. The rainfall generator operates across WGEW by simulating storms with areas smaller than the basin and enables explicit characterization of rainfall characteristics at any location. We combine the characteristics of rainfall intensity and duration with data on rainstorm area and location to model the surface runoff properties (depth, velocity, duration, distance downslope) on a range of hillslopes within the basin derived from LiDAR analysis. We also analyze connectivity of flow from hillslopes to the channel for various combinations of hillslopes and storms. This approach provides a framework for understanding spatial and temporal dynamics of runoff generation and connectivity that is faithful to the hydrological characteristics of dryland environments.

  15. A stochastic model for the probability of malaria extinction by mass drug administration.

    PubMed

    Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A

    2017-09-18

    Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage, and the reproduction number under control, R_c. A simple compartmental model is developed and used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate that several conditions must hold for MDA to succeed. Firstly, R_c must be sustained at R_c < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations of fewer than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
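
    The probability-generating-function part of the argument can be sketched in isolation: the extinction probability of an infection line is the smallest fixed point of the offspring PGF. The sketch below assumes Poisson-distributed secondary infections with mean R_c, an illustrative simplification rather than the paper's full compartmental model.

        import math

        def extinction_probability(r_c, n_initial=1, tol=1e-12):
            """Smallest fixed point of the Poisson offspring PGF G(s) = exp(r_c * (s - 1))."""
            q = 0.0
            while True:
                q_next = math.exp(r_c * (q - 1.0))
                if abs(q_next - q) < tol:
                    return q_next ** n_initial  # all initial infection lines must die out
                q = q_next

        for r_c in (0.8, 1.1, 1.2):
            print(f"R_c = {r_c}: extinction probability from 10 infections =",
                  round(extinction_probability(r_c, n_initial=10), 4))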

  16. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  17. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  18. Non-linear stochastic growth rates and redshift space distortions

    DOE PAGES

    Jennings, Elise; Jennings, David

    2015-04-09

    The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇ ∙ v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean together with the fluctuations of θ around this mean. We also measure these stochastic components using N-body simulations and find that they are non-negative and increase with decreasing scale, from ~10 per cent at k < 0.2 h Mpc^-1 to 25 per cent at k ~ 0.45 h Mpc^-1 at z = 0. Both the stochastic relation and the non-linearity are more pronounced for haloes, M ≤ 5 × 10^12 M_⊙ h^-1, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean away from the linear theory prediction -f_LT δ, where f_LT is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc^-1. The stochasticity in the θ-δ relation is not so simply described by 2LPT, and we discuss its impact on measurements of f_LT from two-point statistics in redshift space. Given that the relationship between δ and θ is stochastic and non-linear, this has implications for the interpretation and precision of f_LT extracted using models which assume a linear, deterministic expression.

  19. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    NASA Astrophysics Data System (ADS)

    Zhu, Z. W.; Zhang, W. D.; Xu, J.

    2014-03-01

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to describe the hysteretic phenomena of the GMF, and a non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, with stochastic Hopf bifurcation and chaos appearing in the process; the area of the safe basin decreases as the noise intensifies, and its boundary becomes fractal; and the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful in the engineering applications of GMF.

  20. Analysis of isothermal and cooling-rate-dependent immersion freezing by a unifying stochastic ice nucleation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpert, Peter A.; Knopf, Daniel A.

    Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, N_tot, and the heterogeneous ice nucleation rate coefficient, J_het(T). This model is applied to address whether (i) a time- and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture, with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods, such as droplets on a cold stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model when varying ISA is considered. An apparent cooling-rate dependence of J_het is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes, as expected from classical nucleation theory. Finally, the model simulations allow for a quantitative experimental uncertainty analysis for the parameters N_tot, T, RH, and the ISA variability. We discuss the implications of our results for experimental analysis and interpretation of the immersion freezing process.
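
    The heart of the stochastic description, a per-droplet survival probability exp(-J_het * ISA * t), is easy to simulate. The sketch below contrasts identical and lognormally distributed ISA to show the non-exponential frozen fraction; J_het and the ISA spread are assumed values, not fits to any data set.

        import numpy as np

        rng = np.random.default_rng(11)

        # Isothermal frozen fraction for droplets whose unfrozen survival probability is
        # exp(-J_het * A * t). J_het and the lognormal ISA spread are assumed values chosen
        # only to illustrate the non-exponential behaviour produced by ISA variability.
        j_het = 1e4                    # heterogeneous nucleation rate coefficient, cm^-2 s^-1
        n_drops = 10_000
        isa_uniform = np.full(n_drops, 1e-5)                                      # identical ISA, cm^2
        isa_variable = rng.lognormal(mean=np.log(1e-5), sigma=1.5, size=n_drops)  # variable ISA

        t = np.linspace(0.0, 100.0, 6)
        for label, isa in (("identical ISA", isa_uniform), ("variable ISA", isa_variable)):
            frozen_fraction = 1.0 - np.exp(-j_het * isa[:, None] * t[None, :]).mean(axis=0)
            print(label, np.round(frozen_fraction, 3))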
