Oscillatory regulation of Hes1: Discrete stochastic delay modelling and simulation.
Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai
2006-09-08
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
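The delayed SSA described above can be sketched in a few lines: when a delayed reaction initiates, its product update is queued for time t + τ rather than applied immediately. This is a minimal illustration with a single hypothetical zeroth-order birth channel; the rate and delay values are made up, and it is not the authors' full Hes1 model.

```python
import heapq
import random

def delayed_ssa_birth(rate, delay, t_end, seed=1):
    """Gillespie-style simulation of one birth channel 0 -> X whose product
    appears only `delay` time units after initiation (mimicking the lag of
    transcription/translation)."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    pending = []                        # min-heap of scheduled completion times
    while t < t_end:
        dt = rng.expovariate(rate)      # time to next initiation (zeroth order)
        # apply any delayed products that complete before the next initiation
        while pending and pending[0] <= min(t + dt, t_end):
            x += 1
            heapq.heappop(pending)
        t += dt
        if t < t_end:
            heapq.heappush(pending, t + delay)  # product materialises later
    return x

copies = delayed_ssa_birth(rate=2.0, delay=1.0, t_end=50.0)
```

For first-order channels the propensities would have to be re-evaluated after each queued completion; the zeroth-order channel here sidesteps that for brevity.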
Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.
Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young
2017-03-14
Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss three methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of discretization size when the discretization size is small enough. The three proposed methods can correctly (to a given precision) simulate Hill function dynamics in the microscopic RDME system.
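For reference, the switch-like Hill activation term whose discretization behaviour is analysed above can be written down directly; the coefficient and threshold values below are illustrative, not taken from the paper.

```python
def hill_activation(x, k, n):
    """Hill activation term x^n / (k^n + x^n): near 0 well below the
    threshold k, 1/2 at x = k, near 1 well above it (steeper for larger n)."""
    return x ** n / (k ** n + x ** n)

# Switch-like behaviour around the half-maximal point x = k
h_lo = hill_activation(1.0, 2.0, 4)   # below threshold
h_mid = hill_activation(2.0, 2.0, 4)  # at threshold
h_hi = hill_activation(4.0, 2.0, 4)   # above threshold
```

In an RDME setting, the copy number x in each subvolume scales with the discretization size, which is what erodes this switch-like shape as the mesh is refined.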
Parallel Stochastic discrete event simulation of calcium dynamics in neuron.
Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W
2017-09-26
The intra-cellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g., spines) are so small, and calcium concentrations so low, that a single extra molecule diffusing in by chance can make a nontrivial percentage difference in concentration. These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer model and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment characterized by search spaces of unknown topology and stochastic measures, these approaches often prove inadequate. In this paper, we explore the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
Discrete stochastic simulation methods for chemically reacting systems.
Cao, Yang; Samuels, David C
2009-01-01
Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there are increasing studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
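Gillespie's direct method, the first of the two algorithms reviewed in this chapter, fits in a few lines: draw the next-reaction time from the total propensity, then pick the firing channel proportionally to its share. The one-channel isomerisation A → B and its rate constant below are illustrative choices, not from the source.

```python
import random

def ssa_direct(x0, c, stoich, prop_fns, t_end, seed=0):
    """Gillespie's direct method for a well-mixed system with integer
    populations: exact sampling of one reaction event per step."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while t < t_end:
        a = [f(x, c) for f in prop_fns]
        a0 = sum(a)
        if a0 == 0.0:
            break                        # no reaction can fire any more
        t += rng.expovariate(a0)         # exponential waiting time
        r, acc, j = rng.random() * a0, 0.0, 0
        while acc + a[j] < r:            # roulette-wheel channel selection
            acc += a[j]
            j += 1
        for i, v in enumerate(stoich[j]):
            x[i] += v
    return x

# Illustrative isomerisation A -> B with a hypothetical rate constant of 1.0
final = ssa_direct(x0=[100, 0], c=[1.0],
                   stoich=[[-1, +1]],
                   prop_fns=[lambda x, c: c[0] * x[0]],
                   t_end=1e9)
```

Tau-leaping accelerates this by firing many events per step; the SSA remains the exact reference against which leaping methods are checked.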
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches.
Pahle, Jürgen
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem.
Koh, Wonryull; Blackwell, Kim T
2011-04-21
Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
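The Poisson tau-leaping idea that this work extends can be sketched in its simplest nonspatial form for a single decay channel. The clamp to the current population below is a crude stand-in for the proper non-negative step-size control, and all constants are invented.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    if lam == 0.0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_decay(x0, c, tau, t_end, seed=3):
    """Poisson tau-leaping for X -> 0: fire a Poisson number of decay
    events per leap of fixed length tau."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        n = poisson(rng, c * x * tau)   # expected events in this leap
        x -= min(n, x)                  # real methods shrink tau instead of clamping
        t += tau
    return x

remaining = tau_leap_decay(x0=1000, c=1.0, tau=0.01, t_end=5.0)
```

The spatial method in the abstract chooses a single leap time jointly for reaction and diffusion events, subject to the same leap condition on the propensities.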
Optimal generalized multistep integration formulae for real-time digital simulation
NASA Technical Reports Server (NTRS)
Moerder, D. D.; Halyo, N.
1985-01-01
The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
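An explicit linear multistep integration formula of the family generalised above, here the classical two-step Adams-Bashforth rule for a scalar ODE, can be sketched as follows; the decay test problem is illustrative.

```python
def ab2(f, x0, dt, steps):
    """Two-step Adams-Bashforth: x_{n+1} = x_n + dt*(3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with one explicit Euler step to build the history."""
    xs = [x0]
    f_prev = f(x0)
    x = x0 + dt * f_prev            # Euler bootstrap
    xs.append(x)
    for _ in range(steps - 1):
        f_curr = f(x)
        x = x + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        xs.append(x)
    return xs

# dx/dt = -x, x(0) = 1; after 10 steps of dt = 0.1 the exact value is e^-1
xs = ab2(f=lambda x: -x, x0=1.0, dt=0.1, steps=10)
```

The paper's optimization then tunes the coefficients of such formulas (rather than using the classical 3/2, -1/2 pair) to minimize a statistical fidelity measure for real-time simulation.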
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
NASA Astrophysics Data System (ADS)
Wei, Xinjiang; Sun, Shixiang
2018-03-01
An elegant anti-disturbance control (EADC) strategy is proposed in this paper for a class of discrete-time stochastic systems with both nonlinearity and multiple disturbances, which include a disturbance with partially known information and a sequence of random vectors. A stochastic disturbance observer is constructed to estimate the disturbance with partially known information; based on this observer, an EADC scheme is developed by combining pole placement and linear matrix inequality methods. It is proved that the two different disturbances can be rejected and attenuated, and that the corresponding desired performances can be guaranteed for discrete-time stochastic systems with known and unknown nonlinear dynamics, respectively. Simulation examples demonstrate the effectiveness of the proposed schemes compared with existing results.
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial stochastic simulation algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact model descriptions and makes it easy to adapt the spatial resolution of a model.
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
The kinetic rate of gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot handle the noise in gene expression that may arise from the discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors, and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods give robust estimates of kinetic rates with good accuracy.
Hybrid stochastic simplifications for multiscale gene networks.
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-09-07
Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
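The continuous half of such a hybrid simplification is a chemical Langevin approximation of the fast components, which is what the partial Kramers-Moyal expansion produces. A minimal Euler-Maruyama sketch for a single birth-death species, with invented rates, looks like this.

```python
import math
import random

def cle_birth_death(x0, b, d, dt, steps, seed=7):
    """Euler-Maruyama integration of the chemical Langevin equation for a
    birth-death process 0 -> X (rate b), X -> 0 (rate d*x):
        dX = (b - d X) dt + sqrt(b + d X) dW
    In a hybrid scheme this continuous part is coupled to discrete jumps
    for the slow, low-copy-number species."""
    rng = random.Random(seed)
    x = float(x0)
    for _ in range(steps):
        drift = b - d * x
        diffusion = math.sqrt(max(b + d * x, 0.0))
        x += drift * dt + diffusion * rng.gauss(0.0, math.sqrt(dt))
        x = max(x, 0.0)                 # keep the sketch non-negative
    return x

# Stationary mean is b/d = 50 for these hypothetical rates
level = cle_birth_death(x0=0, b=50.0, d=1.0, dt=0.01, steps=5000)
```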
Adalsteinsson, David; McMillen, David; Elston, Timothy C
2004-03-08
Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
NASA Astrophysics Data System (ADS)
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes, and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended to include stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations accounts for variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty in the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al.; in fact, some of them exhibit the oscillatory dynamic behavior generated by stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models show changes in the behavior of the recently developed stochastic model of Hazra et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp; Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led in recent years to the development of parallel methods that can take advantage of the power of modern supercomputers. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations, from simple diffusion models to realistic biological models, and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
Cross-Paradigm Simulation Modeling: Challenges and Successes
2011-12-01
is also highlighted. 2.1 Discrete-Event Simulation Discrete-event simulation (DES) is a modeling method for stochastic, dynamic models where...which almost anything can be coded; models can be incredibly detailed. Most commercial DES software has a graphical interface which allows the user to...results. Although the above definition is the commonly accepted definition of DES, there are two different worldviews that dominate DES modeling today: a
Finite Element Aircraft Simulation of Turbulence
DOT National Transportation Integrated Search
1997-02-01
A Simulation of Rotor Blade Element Turbulence (SORBET) model has been developed for real-time aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to c...
Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.
Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H
2006-11-15
The stochastic kinetics of a well-mixed chemical system, governed by the chemical master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODEs), and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach improves on previous partitioning schemes by repartitioning the system of reactions dynamically, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
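The dynamic partitioning step can be illustrated with a bare threshold rule over the current propensities; the actual criterion in the paper (a threshold relative to the propensity distribution of the discrete subset, re-evaluated as the state evolves) is richer than this fixed cutoff, and the numbers below are invented.

```python
def partition_reactions(propensities, threshold):
    """Split reaction channels into a 'fast' subset (to be approximated
    continuously, e.g. by ODEs) and a 'slow' subset (kept as discrete
    stochastic events), by comparing each propensity to a threshold."""
    fast = [j for j, a in enumerate(propensities) if a >= threshold]
    slow = [j for j, a in enumerate(propensities) if a < threshold]
    return fast, slow

# Hypothetical propensities for four channels at the current state
fast, slow = partition_reactions([1200.0, 0.3, 45.0, 0.02], threshold=10.0)
```

In the hybrid loop this classification is redone whenever propensities drift across the threshold, so a channel can migrate between the ODE and Monte Carlo treatments during a run.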
Practical Unitary Simulator for Non-Markovian Complex Processes
NASA Astrophysics Data System (ADS)
Binder, Felix C.; Thompson, Jayne; Gu, Mile
2018-06-01
Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.
Strehl, Robert; Ilie, Silvana
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
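The slow/fast split by propensity, and a tau-leap over the fast subset, can be sketched as follows. The threshold rule, the Poisson sampler, and the two-channel model are all invented for the sketch and are simpler than the paper's actual blending strategy:

```python
import math
import random

def partition_channels(propensities, threshold):
    """Classify each reaction/diffusion channel as slow or fast by
    comparing its propensity to a threshold (a hypothetical criterion)."""
    slow = [j for j, a in enumerate(propensities) if a < threshold]
    fast = [j for j, a in enumerate(propensities) if a >= threshold]
    return slow, fast

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_fast(x, propensity_fns, stoich, fast, tau, rng):
    """Advance only the fast subset by one tau-leap: channel j fires a
    Poisson(a_j * tau) number of times, with a_j frozen at the start."""
    a = [propensity_fns[j](x) for j in fast]
    x = list(x)
    for a_j, j in zip(a, fast):
        n_fire = poisson(a_j * tau, rng)
        for s, v in enumerate(stoich[j]):
            x[s] += v * n_fire
    return x

# Hypothetical two-channel system: fast conversion A -> B (rate 5*A),
# slow degradation B -> 0 (rate 0.01*B).
rng = random.Random(0)
props = [lambda x: 5.0 * x[0], lambda x: 0.01 * x[1]]
stoich = [[-1, +1], [0, -1]]
slow, fast = partition_channels([f([100, 0]) for f in props], threshold=1.0)
x1 = tau_leap_fast([100, 0], props, stoich, fast, tau=0.01, rng=rng)
```

The slow channels would then be handled event-by-event (SSA-style) between leaps; only the classification and the leap itself are shown here.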
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; notably, these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be used effectively in problems with a mix of continuous, discrete, and integer design variables.
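Simulated annealing, as described above, fits in a few lines. The Rastrigin-type objective and every parameter below are illustrative choices, not taken from the paper:

```python
import math
import random

def simulated_annealing(f, x0, step, temp, cooling, n_iter, seed=0):
    """Metropolis-style annealing: always accept improvements, accept
    uphill moves with probability exp(-delta / T), and cool T
    geometrically. The occasional uphill acceptance is what lets the
    search escape local minima in a multimodal landscape."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = temp
    for _ in range(n_iter):
        xn = x + rng.uniform(-step, step)   # random neighbour move
        fn = f(xn)
        if fn <= fx or rng.random() < math.exp(-(fn - fx) / T):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                        # geometric cooling schedule
    return best, fbest

# Multimodal 1-D Rastrigin-type function; the global minimum is f(0) = 0.
f = lambda x: x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0
best, fbest = simulated_annealing(f, x0=4.3, step=1.5, temp=5.0,
                                  cooling=0.999, n_iter=5000)
```

A greedy descent started at x0 = 4.3 would typically stall in one of the local basins near an integer; the temperature schedule is what gives the method its higher probability of reaching the global basin.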
Sliding mode control-based linear functional observers for discrete-time stochastic systems
NASA Astrophysics Data System (ADS)
Singh, Satnesh; Janardhanan, Sivaramakrishnan
2017-11-01
Sliding mode control (SMC) is one of the most popular techniques for stabilising linear discrete-time stochastic systems. However, applying SMC becomes difficult when the system states are not available for feedback. This paper presents a new approach to designing an SMC-based functional observer for discrete-time stochastic systems. The functional observer is based on the Kronecker product approach. Existence conditions and a stability analysis of the proposed observer are given. The control input is estimated by a novel linear functional observer. This approach leads to a non-switching type of control, thereby eliminating the fundamental cause of chatter. Furthermore, the functional observer is designed in such a way that the effect of process and measurement noise is minimised. A simulation example is given to illustrate and validate the proposed design method.
Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.
Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L
2017-07-01
Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction-diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is optimistic and thread-based, and attempts to capitalize on the multi-core architectures used in high-performance machines. It makes use of a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages to disperse contention and decrease the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes to eliminate the overhead of synchronizing threads. Memory usage is managed in order to avoid locking and unlocking when allocating and de-allocating memory and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to the performance of a process-based optimistic simulator and a threaded simulator which uses a single priority queue for each thread. Our multi-threaded simulator is shown to achieve superior performance to these simulators. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.
Roh, Min K; Gillespie, Dan T; Petzold, Linda R
2010-11-07
The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R
2009-03-07
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
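The idea of replacing individual diffusive jump events with binomially sampled net transfers can be illustrated on a 1-D row of subvolumes. This is a simplified stand-in for the MSA's conditioned binomial scheme; the per-molecule jump probability is hypothetical:

```python
import random

def diffusion_step(counts, p, rng):
    """One aggregated diffusive transfer on a 1-D row of subvolumes:
    each molecule independently jumps left or right with probability p
    (reflecting walls at the ends), so the number moved per direction
    is a binomial draw and the total molecule count is conserved
    exactly -- unlike a naive deterministic flux update."""
    n = len(counts)
    new = [0] * n
    for i, c in enumerate(counts):
        left = right = stay = 0
        for _ in range(c):
            u = rng.random()
            if u < p and i > 0:
                left += 1
            elif u < 2.0 * p and i < n - 1:
                right += 1
            else:
                stay += 1
        new[i] += stay
        if i > 0:
            new[i - 1] += left
        if i < n - 1:
            new[i + 1] += right
    return new

rng = random.Random(42)
state = [1000, 0, 0, 0]        # all molecules start in the first subvolume
for _ in range(50):
    state = diffusion_step(state, p=0.1, rng=rng)
```

One such step replaces on the order of `p * total` individual ISSA jump events, which is the source of the speedup claimed when diffusive transfers dominate.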
An advanced environment for hybrid modeling of biological systems based on modelica.
Pross, Sabrina; Bachmann, Bernhard
2011-01-20
Biological systems are often very complex so that an appropriate formalism is needed for modeling their behavior. Hybrid Petri Nets, consisting of time-discrete Petri Net elements as well as continuous ones, have proven to be ideal for this task. Therefore, a new Petri Net library was implemented based on the object-oriented modeling language Modelica which allows the modeling of discrete, stochastic and continuous Petri Net elements by differential, algebraic and discrete equations. An appropriate Modelica-tool performs the hybrid simulation with discrete events and the solution of continuous differential equations. A special sub-library contains so-called wrappers for specific reactions to simplify the modeling process. The Modelica-models can be connected to Simulink-models for parameter optimization, sensitivity analysis and stochastic simulation in Matlab. The present paper illustrates the implementation of the Petri Net component models, their usage within the modeling process and the coupling between the Modelica-tool Dymola and Matlab/Simulink. The application is demonstrated by modeling the metabolism of Chinese Hamster Ovary Cells.
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehl, Robert; Ilie, Silvana, E-mail: silvana@ryerson.ca
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
Stochastic flux analysis of chemical reaction networks
2013-01-01
Background: Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, the discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. Results: We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux-behavior, and compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of the same network, that is, Rho GTP-binding proteins, and on other models from biology and ecology. Conclusions: We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied, not only at steady state, but at arbitrary time intervals, and used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network.
Stochastic flux analysis of chemical reaction networks.
Kahramanoğulları, Ozan; Lynch, James F
2013-12-07
Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, the discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux-behavior, and compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of the same network, that is, Rho GTP-binding proteins, and on other models from biology and ecology. We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied, not only at steady state, but at arbitrary time intervals, and used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network.
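The simplest form of the flux statistic described above is a per-channel firing count accumulated during an SSA run. The sketch below adds such counters to the direct method; the reversible isomerization model and its rates are invented for illustration:

```python
import random

def ssa_with_flux(x0, propensity_fns, stoich, t_end, seed=0):
    """Direct-method SSA that additionally tallies how many times each
    channel fires: a per-reaction 'flux' count over [0, t_end]."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    firings = [0] * len(propensity_fns)
    while True:
        a = [f(x) for f in propensity_fns]
        a0 = sum(a)
        if a0 == 0.0:
            break
        tau = rng.expovariate(a0)
        if t + tau > t_end:          # next event would fall past the horizon
            break
        t += tau
        r, cum, j = rng.random() * a0, 0.0, 0
        for j, aj in enumerate(a):
            cum += aj
            if r < cum:
                break
        firings[j] += 1              # record the flux through channel j
        for s, v in enumerate(stoich[j]):
            x[s] += v
    return x, firings

# Toy reversible isomerization A <-> B; the net flux into B is the
# forward firing count minus the reverse firing count.
x, firings = ssa_with_flux([100, 0],
                           [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]],
                           [[-1, +1], [+1, -1]], t_end=20.0)
net_flux = firings[0] - firings[1]
```

The paper's method goes much further (causal interdependence, species-to-species flows over arbitrary intervals), but the invariant shown here, that the net firing count equals the net change in B, is the basic bookkeeping it builds on.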
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
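The continuous-Markov approximation of the fast reactions referred to above is the chemical Langevin equation. A minimal Euler-Maruyama sketch for a birth-death system follows; the model, rates, and step size are illustrative and the non-negativity clamp is a simplification:

```python
import math
import random

def cle_birth_death(x0, k1, k2, dt, n_steps, seed=0):
    """Euler-Maruyama discretization of the chemical Langevin equation
    for 0 -> S (propensity k1) and S -> 0 (propensity k2*x):
        dX = (k1 - k2 X) dt + sqrt(k1) dW1 - sqrt(k2 X) dW2.
    Each reaction channel contributes its own independent noise term.
    Returns the time-average over the second half of the run."""
    rng = random.Random(seed)
    x = float(x0)
    tail = []
    for step in range(n_steps):
        a1, a2 = k1, k2 * max(x, 0.0)
        x += (a1 - a2) * dt                          # drift
        x += math.sqrt(a1 * dt) * rng.gauss(0.0, 1.0)  # birth noise
        x -= math.sqrt(a2 * dt) * rng.gauss(0.0, 1.0)  # death noise
        x = max(x, 0.0)        # crude clamp to keep the count physical
        if step >= n_steps // 2:
            tail.append(x)
    return sum(tail) / len(tail)

# Stationary mean should hover near k1/k2 = 100 for these toy rates.
avg = cle_birth_death(x0=100, k1=10.0, k2=0.1, dt=0.01, n_steps=20000)
```

In a hybrid scheme like the one described, an integrator of this kind advances the fast subset while the slow, discrete reactions are monitored for their next firing time.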
Discrete stochastic analogs of Erlang epidemic models.
Getz, Wayne M; Dougherty, Eric R
2018-12-01
Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE[m]I[n]R models with m concatenated E compartments and n concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE[m]I[n]R models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges but sacrifices best-fit likelihood scores by at least 7%.
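A deterministic discrete-time SE[m]I[n]R chain can be sketched directly: splitting E and I into substages, each passing a fixed fraction onward per step, yields Erlang-like stage durations. All parameter names and values below are illustrative and simpler than the paper's time-to-go formulation:

```python
import math

def discrete_erlang_seir(beta, m, n, rho_e, rho_i, s0, e0, i0, t_max):
    """Discrete-time SE[m]I[n]R sketch: E and I are chains of m and n
    substages; fractions rho_e, rho_i of each substage advance per step,
    so the total stage durations are Erlang-like rather than geometric."""
    N = float(s0 + e0 + i0)
    S, R = float(s0), 0.0
    E = [e0 / m] * m
    I = [i0 / n] * n
    out = []
    for _ in range(t_max):
        # new infections from force of infection beta * I / N
        new_inf = S * (1.0 - math.exp(-beta * sum(I) / N))
        S -= new_inf
        flow_e = [rho_e * e for e in E]
        flow_i = [rho_i * i for i in I]
        E = [e - f for e, f in zip(E, flow_e)]
        I = [i - f for i, f in zip(I, flow_i)]
        E[0] += new_inf
        for k in range(1, m):
            E[k] += flow_e[k - 1]    # shift through the E chain
        I[0] += flow_e[-1]           # last E substage feeds I
        for k in range(1, n):
            I[k] += flow_i[k - 1]    # shift through the I chain
        R += flow_i[-1]              # last I substage feeds R
        out.append((S, sum(E), sum(I), R))
    return out

traj = discrete_erlang_seir(beta=0.4, m=3, n=2, rho_e=0.3, rho_i=0.2,
                            s0=990, e0=0, i0=10, t_max=200)
```

Setting m = n = 1 recovers the plain discrete-time SEIR model; the stochastic extension discussed in the abstract would replace the deterministic flows with random draws.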
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
A stochastic hybrid systems based framework for modeling dependent failure processes.
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
Simulation of stochastic diffusion via first exit times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lötstedt, Per, E-mail: perl@it.uu.se; Meinecke, Lina, E-mail: lina.meinecke@it.uu.se
2015-11-01
In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions.
Simulation of stochastic diffusion via first exit times
Lötstedt, Per; Meinecke, Lina
2015-01-01
In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions.
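The quantity underlying the method above, the mean first exit time of a diffusing molecule from a subvolume, has a closed form in one dimension that a short Monte Carlo check can reproduce. This sketch is a 1-D illustration only, not the paper's unstructured-mesh construction; its inverse would give the (always positive) total jump propensity out of the subvolume:

```python
import math
import random

def mean_exit_time_mc(h, D, dt, n_paths, seed=0):
    """Monte Carlo estimate of the mean first exit time of Brownian
    motion with diffusion coefficient D from the interval [0, h],
    started at the centre x = h/2. Each step is a Gaussian increment
    with variance 2*D*dt."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    total = 0.0
    for _ in range(n_paths):
        x, t = h / 2.0, 0.0
        while 0.0 < x < h:
            x += sigma * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths

h, D = 1.0, 0.5
tau_mc = mean_exit_time_mc(h, D, dt=1e-4, n_paths=200)
# Exact mean exit time from [0, h] starting at x: x*(h - x)/(2*D),
# which at x = h/2 gives h**2 / (8*D).
tau_exact = h * h / (8.0 * D)
```

Calibrating jump rates so that 1/tau matches this exit time is what keeps the coefficients positive regardless of mesh quality.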
The subtle business of model reduction for stochastic chemical kinetics
NASA Astrophysics Data System (ADS)
Gillespie, Dan T.; Cao, Yang; Sanft, Kevin R.; Petzold, Linda R.
2009-02-01
This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.
The subtle business of model reduction for stochastic chemical kinetics.
Gillespie, Dan T; Cao, Yang; Sanft, Kevin R; Petzold, Linda R
2009-02-14
This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. Contact: petzold@engineering.ucsb.edu.
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.
2016-01-01
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Drawert, Brian; Hellander, Andreas; Bales, Ben; ...
2016-12-08
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks.
Arunkumar, A; Sakthivel, R; Mathiyalagan, K; Park, Ju H
2014-07-01
This paper focuses on the issue of robust stochastic stability for a class of uncertain fuzzy Markovian jumping discrete-time neural networks (FMJDNNs) with various activation functions and mixed time delay. By employing the Lyapunov technique and the linear matrix inequality (LMI) approach, a new set of delay-dependent sufficient conditions is established for the robust stochastic stability of uncertain FMJDNNs. More precisely, the parameter uncertainties are assumed to be time-varying, unknown and norm-bounded. The obtained stability conditions are established in terms of LMIs, which can be easily checked by using the efficient MATLAB LMI toolbox. Finally, numerical examples with simulation results are provided to illustrate the effectiveness and reduced conservatism of the obtained results.
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. A discrete-time Markov chain based simulator is implemented in Matlab. The simulator, capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
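A single step of such a discrete-time Markov chain epidemic model can be sketched with a reduced SIR form (conceptually, the SEQIJR chain with the extra transition probabilities set to zero, as the abstract describes). The transition probabilities below are illustrative:

```python
import random

def dtmc_sir_step(s, i, r, p_inf, p_rec, rng):
    """One step of a discrete-time Markov chain SIR model: each
    susceptible independently becomes infected with probability
    p_inf * i / n, and each infected recovers with probability p_rec.
    The new-infection and recovery counts are therefore binomial."""
    n = s + i + r
    p = p_inf * i / n if n else 0.0
    new_inf = sum(rng.random() < p for _ in range(s))
    new_rec = sum(rng.random() < p_rec for _ in range(i))
    return s - new_inf, i + new_inf - new_rec, r + new_rec

rng = random.Random(1)
state = (990, 10, 0)
for _ in range(100):
    state = dtmc_sir_step(*state, p_inf=0.3, p_rec=0.1, rng=rng)
```

Extending the chain with additional compartments (E, Q, J) is a matter of adding further binomial transition draws between states, which is the editing-the-source-code extensibility the abstract refers to.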
Discreteness-induced concentration inversion in mesoscopic chemical systems.
Ramaswamy, Rajesh; González-Segredo, Nélido; Sbalzarini, Ivo F; Grima, Ramon
2012-04-10
Molecular discreteness is apparent in small-volume chemical systems, such as biological cells, leading to stochastic kinetics. Here we present a theoretical framework to understand the effects of discreteness on the steady state of a monostable chemical reaction network. We consider independent realizations of the same chemical system in compartments of different volumes. Rate equations ignore molecular discreteness and predict the same average steady-state concentrations in all compartments. However, our theory predicts that the average steady state of the system varies with volume: if a species is more abundant than another for large volumes, then the reverse occurs for volumes below a critical value, leading to a concentration inversion effect. The addition of extrinsic noise increases the size of the critical volume. We theoretically predict the critical volumes and verify, by exact stochastic simulations, that rate equations are qualitatively incorrect in sub-critical volumes.
NASA Astrophysics Data System (ADS)
Havaej, Mohsen; Coggan, John; Stead, Doug; Elmo, Davide
2016-04-01
Rock slope geometry and discontinuity properties are among the most important factors in realistic rock slope analysis, yet they are often oversimplified in numerical simulations. This is primarily due to the difficulties in obtaining accurate structural and geometrical data, as well as the stochastic representation of discontinuities. Recent improvements in both digital data acquisition and the incorporation of discrete fracture network data into numerical modelling software have provided better tools to capture rock mass characteristics, slope geometries and digital terrain models, allowing more effective modelling of rock slopes. The advantages of improved data acquisition technology, including safer and faster data collection, greater areal coverage, and accurate data geo-referencing, far exceed the limitations due to orientation bias and occlusion. A key benefit of a detailed point cloud dataset is the ability to measure and evaluate discontinuity characteristics such as orientation, spacing/intensity and persistence. These data can be used to develop a discrete fracture network, which can be imported into numerical simulations to study the influence of the stochastic nature of the discontinuities on the failure mechanism. We demonstrate the application of digital terrestrial photogrammetry in discontinuity characterization and distinct element simulations within a slate quarry. An accurately geo-referenced photogrammetry model is used to derive the slope geometry and to characterize geological structures. We first show how a discontinuity dataset obtained from a photogrammetry model can be used to characterize discontinuities and to develop discrete fracture networks. A deterministic three-dimensional distinct element model is then used to investigate the effect of some key input parameters (friction angle, spacing and persistence) on the stability of the quarry slope model. Finally, adopting a stochastic approach, discrete fracture networks are used as input for 3D distinct element simulations to better understand the stochastic nature of the geological structure and its effect on the quarry slope failure mechanism. The numerical modelling results highlight the influence of discontinuity characteristics and kinematics on the slope failure mechanism and the variability in the size and shape of the failed blocks.
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
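The common random number (CRN) trick described above is simple to demonstrate: run the SSA at nominal and perturbed parameter values with the same seeds, and take a finite difference. The birth-death model and all parameter values below are invented for the sketch:

```python
import random

def ssa_count(k, t_end, seed):
    """Birth-death SSA (0 -> S at rate k, S -> 0 at rate 0.1*S);
    the seed pins down the random number stream for the whole path."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        a1, a2 = k, 0.1 * x
        a0 = a1 + a2
        tau = rng.expovariate(a0)
        if t + tau > t_end:
            return x
        t += tau
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1

def fd_sensitivity(k, dk, t_end, n_runs, common):
    """Finite-difference estimate of d E[X(t_end)] / dk. With
    common=True the nominal and perturbed runs reuse the same seeds
    (common random numbers), which correlates the paired paths and
    sharply reduces the variance of the differenced estimator."""
    total = 0.0
    for i in range(n_runs):
        s_pert = i if common else n_runs + i
        total += (ssa_count(k + dk, t_end, s_pert)
                  - ssa_count(k, t_end, i)) / dk
    return total / n_runs

est = fd_sensitivity(k=10.0, dk=1.0, t_end=5.0, n_runs=100, common=True)
# For this linear model the exact sensitivity is
# (1 - exp(-0.1 * t_end)) / 0.1, about 3.93 at t_end = 5.
```

The CRP method in the abstract couples the paths more tightly still, by sharing random streams per reaction channel in the random time change representation, rather than per run as done here.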
Stochastic series expansion simulation of the t -V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. As an important state variable, reservoir storage volume has a pronounced effect on the computational effort through its discretization. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretized space. This advantage of the fuzzy SDP model is believed to stem from the smooth transitions between storage intervals, which benefit from soft boundaries.
Hybrid stochastic simulations of intracellular reaction-diffusion systems.
Kalantzis, Georgios
2009-06-01
With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency with the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca²⁺ and NMDA receptors.
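The partitioning idea can be sketched on an invented two-reaction system (this is an illustration of the general hybrid approach, not the paper's adaptive algorithm): a high-frequency birth/death of X is integrated deterministically as an ODE, while a low-frequency conversion X -> Y fires as a discrete event whenever its time-integrated propensity reaches an Exp(1) threshold.

```python
import numpy as np

def hybrid_sim(k1, k2, k3, t_end, dt=1e-3, seed=1):
    """Hybrid partitioning sketch.  The fast subsystem (production/decay
    of X) is integrated deterministically: dx/dt = k1 - k2*x.  The slow
    reaction X -> Y (propensity k3*x) fires stochastically when the
    integrated propensity reaches an Exp(1) threshold."""
    rng = np.random.default_rng(seed)
    x, y, t = 0.0, 0, 0.0
    target = rng.exponential(1.0)   # next integrated-propensity threshold
    acc = 0.0
    while t < t_end:
        x += (k1 - k2 * x) * dt     # deterministic (Euler) fast update
        acc += k3 * x * dt          # accumulate slow propensity
        if acc >= target:           # slow reaction fires as a discrete event
            x = max(x - 1.0, 0.0)
            y += 1
            acc = 0.0
            target = rng.exponential(1.0)
        t += dt
    return x, y

x_final, y_final = hybrid_sim(k1=100.0, k2=1.0, k3=0.1, t_end=10.0)
# x hovers near its quasi-steady value k1/(k2 + k3) ~ 91; y counts slow events.
```

The adaptive element of the paper's method, reclassifying reactions as their frequencies change during the run, is omitted here for brevity.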
Allore, H G; Schruben, L W; Erb, H N; Oltenacu, P A
1998-03-01
A dynamic stochastic simulation model for discrete events, SIMMAST, was developed to simulate the effect of mastitis on the composition of the bulk tank milk of dairy herds. Intramammary infections caused by Streptococcus agalactiae, Streptococcus spp. other than Strep. agalactiae, Staphylococcus aureus, and coagulase-negative staphylococci were modeled as were the milk, fat, and protein test day solutions for individual cows, which accounted for the fixed effects of days in milk, age at calving, season of calving, somatic cell count (SCC), and random effects of test day, cow yield differences from herdmates, and autocorrelated errors. Probabilities for the transitions among various states of udder health (uninfected or subclinically or clinically infected) were calculated to account for exposure, heifer infection, spontaneous recovery, lactation cure, infection or cure during the dry period, month of lactation, parity, within-herd yields, and the number of quarters with clinical intramammary infection in the previous and current lactations. The stochastic simulation model was constructed using estimates from the literature and also using data from 164 herds enrolled with Quality Milk Promotion Services that each had bulk tank SCC between 500,000 and 750,000/ml. Model parameters and outputs were validated against a separate data file of 69 herds from the Northeast Dairy Herd Improvement Association, each with a bulk tank SCC that was > or = 500,000/ml. Sensitivity analysis was performed on all input parameters for control herds. Using the validated stochastic simulation model, the control herds had a stable time average bulk tank SCC between 500,000 and 750,000/ml.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
The adaptation rate of a quantitative trait in an environmental gradient
NASA Astrophysics Data System (ADS)
Hermsen, R.
2016-12-01
The spatial range of a species habitat is generally determined by the ability of the species to cope with biotic and abiotic variables that vary in space. Therefore, the species range is itself an evolvable property. Indeed, environmental gradients permit a mode of evolution in which range expansion and adaptation go hand in hand. This process can contribute to rapid evolution of drug resistant bacteria and viruses, because drug concentrations in humans and livestock treated with antibiotics are far from uniform. Here, we use a minimal stochastic model of discrete, interacting organisms evolving in continuous space to study how the rate of adaptation of a quantitative trait depends on the steepness of the gradient and various population parameters. We discuss analytical results for the mean-field limit as well as extensive stochastic simulations. These simulations were performed using an exact, event-driven simulation scheme that can deal with continuous time-, density- and coordinate-dependent reaction rates and could be used for a wide variety of stochastic systems. The results reveal two qualitative regimes. If the gradient is shallow, the rate of adaptation is limited by dispersion and increases linearly with the gradient slope. If the gradient is steep, the adaptation rate is limited by mutation. In this regime, the mean-field result is highly misleading: it predicts that the adaptation rate continues to increase with the gradient slope, whereas stochastic simulations show that it in fact decreases with the square root of the slope. This discrepancy underscores the importance of discreteness and stochasticity even at high population densities; mean-field results, including those routinely used in quantitative genetics, should be interpreted with care.
On the origins of approximations for stochastic chemical kinetics.
Haseltine, Eric L; Rawlings, James B
2005-10-22
This paper considers the derivation of approximations for stochastic chemical kinetics governed by the discrete master equation. Here, the concepts of (1) partitioning on the basis of fast and slow reactions as opposed to fast and slow species and (2) conditional probability densities are used to derive approximate, partitioned master equations, which are Markovian in nature, from the original master equation. Under different conditions dictated by relaxation time arguments, such approximations give rise to both the equilibrium and hybrid (deterministic or Langevin equations coupled with discrete stochastic simulation) approximations previously reported. In addition, the derivation points out several weaknesses in previous justifications of both the hybrid and equilibrium systems and demonstrates the connection between the original and approximate master equations. Two simple examples illustrate situations in which these two approximate methods are applicable and demonstrate the two methods' efficiencies.
Appropriate Domain Size for Groundwater Flow Modeling with a Discrete Fracture Network Model.
Ji, Sung-Hoon; Koh, Yong-Kwon
2017-01-01
When a discrete fracture network (DFN) is constructed from statistical conceptualization, uncertainty in simulating the hydraulic characteristics of a fracture network can arise due to the domain size. In this study, the appropriate domain size, where less significant uncertainty in the stochastic DFN model is expected, was suggested for the Korea Atomic Energy Research Institute Underground Research Tunnel (KURT) site. The stochastic DFN model for the site was established, and the appropriate domain size was determined with the density of the percolating cluster and the percolation probability using the stochastically generated DFNs for various domain sizes. The applicability of the appropriate domain size to our study site was evaluated by comparing the statistical properties of stochastically generated fractures of varying domain sizes and estimating the uncertainty in the equivalent permeability of the generated DFNs. Our results show that the uncertainty of the stochastic DFN model is acceptable when the modeling domain is larger than the determined appropriate domain size, and the appropriate domain size concept is applicable to our study site. © 2016, National Ground Water Association.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
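For reference, the per-thread workload being parallelized is an ordinary SSA realization. A minimal direct-method SSA for a reversible isomerization A <-> B, run many times sequentially (the loop that the paper maps onto GPU threads), might look like the following; the reaction system and parameters are illustrative.

```python
import numpy as np

def ssa_isomerization(k_f, k_r, a0, b0, t_end, rng):
    """Direct-method SSA for the reversible isomerization A <-> B."""
    a, b, t = a0, b0, 0.0
    while True:
        p_fwd, p_rev = k_f * a, k_r * b
        p_total = p_fwd + p_rev
        if p_total == 0.0:          # no molecules left to react
            return a, b
        t += rng.exponential(1.0 / p_total)
        if t > t_end:
            return a, b
        if rng.random() * p_total < p_fwd:
            a, b = a - 1, b + 1     # A -> B
        else:
            a, b = a + 1, b - 1     # B -> A

# Many independent realizations: the embarrassingly parallel workload
# that a GPU would run concurrently, executed sequentially here.
rng = np.random.default_rng(42)
finals = np.array([ssa_isomerization(1.0, 1.0, 100, 0, 10.0, rng)[0]
                   for _ in range(500)])
mean_a = float(finals.mean())
# With symmetric rates the stationary distribution is Binomial(100, 1/2),
# so E[A] = 50.
```

The paper's contribution is in how the ensemble of such runs, and the per-step time-course recording, is laid out in GPU memory; none of that is reproduced here.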
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.
2014-01-01
In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides an apparently novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
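In practice, the stochastic QSSA amounts to plugging a nonelementary rate directly into an SSA propensity. A toy sketch (a negatively autoregulated gene with an assumed repressive Hill propensity; the model and all parameters are illustrative, not taken from the paper):

```python
import numpy as np

def ssa_hill_gene(v_max, K, n, gamma, t_end, rng, x0=0):
    """SSA in which production uses a nonelementary Hill propensity, as in
    the stochastic QSSA: production v_max*K^n/(K^n + x^n) (negative
    autoregulation), degradation gamma*x.  The fast binding/unbinding
    reactions behind the Hill function are never simulated explicitly."""
    x, t = x0, 0.0
    while True:
        a_prod = v_max * K**n / (K**n + x**n)
        a_deg = gamma * x
        a_total = a_prod + a_deg
        t += rng.exponential(1.0 / a_total)
        if t > t_end:
            return x
        if rng.random() * a_total < a_prod:
            x += 1
        else:
            x -= 1

rng = np.random.default_rng(7)
samples = [ssa_hill_gene(v_max=50.0, K=20.0, n=4, gamma=1.0, t_end=20.0, rng=rng)
           for _ in range(300)]
mean_x = float(np.mean(samples))
# Samples fluctuate around the deterministic fixed point (about 21 here).
```

Whether such a reduced simulation matches the full elementary model is exactly the validity question the paper addresses; the sketch shows only the mechanics of the reduced simulation.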
Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.
Shafiey, Hassan; Gan, Xinjun; Waxman, David
2017-11-01
To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
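The "standard" resetting approach criticized here is easy to state in code. The sketch below applies it to an illustrative mean-reverting process with a boundary at zero; the authors' corrected scheme is not reproduced, only the naive scheme whose spurious boundary force the paper analyzes.

```python
import numpy as np

def euler_reset(x0, theta, mu, sigma, dt, n_steps, rng):
    """'Standard' Euler-Maruyama scheme for dX = theta*(mu - X) dt + sigma dW
    with a boundary at X = 0: any step that lands in the forbidden region
    X < 0 is simply reset to the boundary.  The paper shows that this
    resetting injects a spurious force near X = 0, which their corrected
    scheme (not shown) removes."""
    x = np.full(n_steps + 1, float(x0))
    for i in range(n_steps):
        step = theta * (mu - x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
        x[i + 1] = max(x[i] + step, 0.0)   # naive reset to the boundary
    return x

rng = np.random.default_rng(3)
path = euler_reset(x0=0.1, theta=1.0, mu=1.0, sigma=1.0, dt=0.01,
                   n_steps=20000, rng=rng)
# The path never goes negative, but its statistics near zero are biased.
```

The bias is largest for trajectories that spend much time near the boundary, which is why the paper's test cases focus on that regime.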
Stochastic maps, continuous approximation, and stable distribution
NASA Astrophysics Data System (ADS)
Kessler, David A.; Burov, Stanislav
2017-10-01
A continuous approximation framework for general nonlinear stochastic as well as deterministic discrete maps is developed. For a stochastic map with uncorrelated Gaussian noise, by successively applying the Itô lemma we obtain a Langevin-type equation. Specifically, we show how nonlinear maps give rise to a Langevin description that involves multiplicative noise. The multiplicative nature of the noise induces an additional effective force, not present in the absence of noise. We further exploit the continuum description and provide an explicit formula for the stable distribution of the stochastic map and conditions for its existence. Our results are in good agreement with numerical simulations of several maps.
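A minimal stochastic map of the kind considered (a nonlinear deterministic map perturbed by uncorrelated Gaussian noise) is easy to probe numerically. The specific cubic form and all parameters below are invented for illustration; with a confining nonlinearity the iterates settle into a stable distribution whose moments can be sampled.

```python
import numpy as np

def simulate_map(a, eps, n_steps, x0, rng):
    """Iterate the stochastic map x' = x - a*x^3 + eps*xi, xi ~ N(0,1).
    The cubic term confines the iterates, so a stable (stationary)
    distribution exists; its continuum limit is a Langevin equation with
    quartic potential."""
    x = x0
    out = np.empty(n_steps)
    for i in range(n_steps):
        x = x - a * x**3 + eps * rng.normal()
        out[i] = x
    return out

rng = np.random.default_rng(5)
out = simulate_map(a=0.1, eps=0.1, n_steps=20000, x0=0.0, rng=rng)
# The empirical distribution is symmetric about zero with a finite width.
```

Note that this toy uses additive noise; the paper's main point, the extra effective force generated when the Langevin description acquires multiplicative noise, only appears for more general maps.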
Hierarchy of forward-backward stochastic Schrödinger equation
NASA Astrophysics Data System (ADS)
Ke, Yaling; Zhao, Yi
2016-07-01
Driven by the impetus to simulate quantum dynamics in photosynthetic complexes or even larger molecular aggregates, we have established a hierarchy of forward-backward stochastic Schrödinger equations in light of the stochastic unravelling of the symmetric part of the influence functional in the path-integral formalism of the reduced density operator. The method is numerically exact and is suited for the Debye-Drude spectral density, the Ohmic spectral density with an algebraic or exponential cutoff, as well as discrete vibrational modes. The power of this method is verified by performing calculations of time-dependent population differences in the spin-boson model from zero to high temperatures. By simulating excitation energy transfer dynamics of the realistic full FMO trimer, some important features are revealed.
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
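One member of this family of splitting schemes is the widely used "BAOAB" ordering (half kick, half drift, exact Ornstein-Uhlenbeck velocity update, half drift, half kick), which is closely related to the velocity Verlet discretization mentioned above. The sketch below is a representative example of such a splitting, not necessarily the specific scheme the authors single out; the harmonic-oscillator check verifies the configurational variance against its equilibrium value.

```python
import numpy as np

def baoab_step(x, v, force, dt, gamma, kT, mass, rng):
    """One step of the 'BAOAB' splitting for Langevin dynamics:
    B (half kick), A (half drift), O (exact OU damping plus noise),
    A (half drift), B (half kick)."""
    v += 0.5 * dt * force(x) / mass
    x += 0.5 * dt * v
    c = np.exp(-gamma * dt)
    v = c * v + np.sqrt((1.0 - c * c) * kT / mass) * rng.normal()
    x += 0.5 * dt * v
    v += 0.5 * dt * force(x) / mass
    return x, v

# Harmonic-oscillator sanity check: at equilibrium <x^2> = kT/k.
rng = np.random.default_rng(0)
k, kT, mass, dt, gamma = 1.0, 1.0, 1.0, 0.05, 1.0
force = lambda q: -k * q
x, v = 0.0, 0.0
xs = []
for i in range(200000):
    x, v = baoab_step(x, v, force, dt, gamma, kT, mass, rng)
    if i >= 1000:               # discard equilibration
        xs.append(x)
var_x = float(np.mean(np.square(xs)))
```

The paper's desiderata (time-step-dependent sampling bias, behavior out of equilibrium, compatibility with fluctuation theorems) are exactly the properties by which such steps are compared.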
NASA Astrophysics Data System (ADS)
Gottwald, Georg; Melbourne, Ian
2013-04-01
Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results on the convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore, we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous-time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, in the case of maps the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Itô type. It is shown that the limit system of a numerical discretization differs from that of the associated continuous-time system. This has important consequences when interpreting the statistics of long-time simulations of multi-scale systems - they may be very different from those of the original continuous-time system that we set out to study.
Yifat, Jonathan; Gannot, Israel
2015-03-01
Early detection of malignant tumors plays a crucial role in a patient's chances of survival. Therefore, new and innovative tumor detection methods are constantly searched for. Tumor-specific magnetic-core nanoparticles can be used with an alternating magnetic field to detect and treat tumors by hyperthermia. For the analysis of the method's effectiveness, the bio-heat transfer between the nanoparticles and the tissue must be carefully studied. Heat diffusion in biological tissue is usually analyzed using the Pennes bio-heat equation, where blood perfusion plays an important role. Malignant tumors are known to initiate an angiogenesis process, in which endothelial cell migration from neighboring vasculature eventually leads to the formation of a thick blood capillary network around them. This process allows the tumor to meet its extensive nutrition demands and evolve into a more progressive and potentially fatal tumor. In order to assess the effect of angiogenesis on the bio-heat transfer problem, we have developed a discrete stochastic 3D model and simulation of tumor-induced angiogenesis. The model elaborates on other angiogenesis models by providing high-resolution 3D stochastic simulation, capture of fine angiogenesis morphological features, effects of dynamic sprout thickness functions, and a stochastic parent vessel generator. We show that the angiogenesis realizations produced are well suited for numerical bio-heat transfer analysis. A statistical study of the angiogenesis characteristics was derived using Monte Carlo simulations. Based on the statistical analysis, we provide an analytical expression for the blood perfusion coefficient in the Pennes equation as a function of several parameters. This updated form of the Pennes equation could be used for numerical and analytical analyses of the proposed detection and treatment method. Copyright © 2014 Elsevier Inc. All rights reserved.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining such rare-event probabilities using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
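The subset-simulation idea can be sketched on a scalar toy problem with a known answer: P(U > 4) for a standard normal U, which is about 3.2e-5. Intermediate levels are set at empirical quantiles, and conditional samples come from a plain Metropolis random walk restricted to the current subset (a simplified stand-in for the modified Metropolis-Hastings algorithm used in the paper; the SSA-to-normal mapping is not reproduced).

```python
import numpy as np

def subset_simulation(g, n=2000, p0=0.1, seed=0):
    """Subset simulation for P(g(U) > 0), U ~ standard normal.  The rare
    event probability is built up as a product of conditional probabilities
    P(F_i | F_{i-1}); each intermediate level is the empirical (1 - p0)
    quantile, and conditional samples are produced by a Metropolis random
    walk restricted to the current subset."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=n)
    prob = 1.0
    for _ in range(50):                      # safety cap on the number of levels
        gu = g(u)
        level = np.quantile(gu, 1.0 - p0)
        if level >= 0.0:                     # failure region reached
            return prob * float(np.mean(gu > 0.0))
        prob *= p0
        seeds = u[gu > level]                # samples already in the new subset
        per_chain = int(np.ceil(n / len(seeds)))
        chain = []
        for s in seeds:                      # grow one Markov chain per seed
            cur = s
            for _ in range(per_chain):
                cand = cur + rng.normal()
                # accept with the standard-normal ratio, staying in the subset
                if (rng.random() < np.exp(0.5 * (cur**2 - cand**2))
                        and g(cand) > level):
                    cur = cand
                chain.append(cur)
        u = np.array(chain[:n])
    return prob

p_hat = subset_simulation(lambda u: u - 4.0)   # exact P(U > 4) ~ 3.17e-5
```

Each level multiplies the estimate by roughly p0, so a probability of order 1e-5 is reached in about five levels of ordinary-probability sampling.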
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements are formulated using interacting multiple-model (IMM) algorithms. In order to compute both the discrete modes and continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM-based state estimation schemes is demonstrated by conducting Monte Carlo simulation studies on a two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM-based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Finite Element Aircraft Simulation of Turbulence
NASA Technical Reports Server (NTRS)
McFarland, R. E.
1997-01-01
A turbulence model has been developed for real-time aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to conventional aircraft, V/STOL aircraft, and disc rotor model helicopter simulations. Vehicle angular activity in response to turbulence is computed from geometrical and temporal relationships rather than by using the conventional continuum approximations that assume uniform gust immersion and low frequency responses. By using techniques similar to those recently developed for blade-element rotor models, the angular-rate filters of conventional turbulence models are not required. The model produces rotational rates as well as air mass translational velocities in response to both stochastic and deterministic disturbances, where the discrete gusts and turbulence magnitudes may be correlated with significant terrain features or ship models. Assuming isotropy, a two-dimensional vertical turbulence field is created. A novel Gaussian interpolation technique is used to distribute vertical turbulence on the wing span or lateral rotor disc, and this distribution is used to compute roll responses. Air mass velocities are applied at significant centers of pressure in the computation of the aircraft's pitch and roll responses.
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
Stochastic simulations on a model of circadian rhythm generation.
Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin
2008-01-01
Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are distinctly discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression in circadian rhythm generation, a system known to involve small numbers of molecules. It is therefore appropriate to model the system with stochastic equations and analyze it by stochastic simulation methodologies. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method, also due to Gillespie, to the interlocked feedback model. To this end, we first reformulate the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, compare the results with the dynamics obtained from the original deterministic model, and characterize how the dynamics depend on the simulation methodology.
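For readers unfamiliar with Gillespie's direct method, the core loop is short. Below is a minimal, self-contained Python sketch for a single-species production/degradation system (not the interlocked feedback model itself, which involves many more reactions); all names are illustrative.

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=0):
    """Gillespie's direct method for two reactions on one species:
    production at constant rate k_prod, degradation at rate k_deg * x."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    history = [(t, x)]
    while t < t_end:
        a1 = k_prod          # propensity of production
        a2 = k_deg * x       # propensity of degradation
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)       # exponentially distributed waiting time
        if rng.random() * a0 < a1:     # choose which reaction fires
            x += 1
        else:
            x -= 1
        history.append((t, x))
    return history

# The stationary mean of this process is k_prod / k_deg.
```

A time-weighted average of a long trajectory with k_prod = 10, k_deg = 1 should fluctuate around 10, in contrast to the deterministic ODE, which settles exactly on it.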
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
Stochastic simulation in systems biology
Székely, Tamás; Burrage, Kevin
2014-01-01
Natural systems are, almost by definition, heterogeneous: this can be either a boon or an obstacle to be overcome, depending on the situation. Traditionally, when constructing mathematical models of these systems, heterogeneity has typically been ignored, despite its critical role. However, in recent years, stochastic computational methods have become commonplace in science. They are able to appropriately account for heterogeneity; indeed, they are based around the premise that systems inherently contain at least one source of heterogeneity (namely, intrinsic heterogeneity). In this mini-review, we give a brief introduction to theoretical modelling and simulation in systems biology and discuss the three different sources of heterogeneity in natural systems. Our main topic is an overview of stochastic simulation methods in systems biology. There are many different types of stochastic methods. We focus on one group that has become especially popular in systems biology, biochemistry, chemistry and physics. These discrete-state stochastic methods do not follow individuals over time; rather they track only total populations. They also assume that the volume of interest is spatially homogeneous. We give an overview of these methods, with a discussion of the advantages and disadvantages of each, and suggest when each is more appropriate to use. We also include references to software implementations of them, so that beginners can quickly start using stochastic methods for practical problems of interest. PMID:25505503
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, Andrew, E-mail: a.duncan@imperial.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Zygalakis, Konstantinos, E-mail: k.zygalakis@ed.ac.uk
Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
Stochastic Evolution Equations Driven by Fractional Noises
2016-11-28
rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time discrete numerical schemes based on Taylor expansions, and variations of these time discrete Taylor schemes, for rough differential equations and for stochastic differential equations driven by fractional Brownian motion.
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
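The definition at the heart of the method, the DPF as the integral of the PDF over a discrete interval, amounts for sampled data to the probability mass falling in each bin. A minimal sketch (our own illustration, not the authors' code):

```python
def discrete_probability_function(samples, edges):
    """DPF over the intervals [edges[i], edges[i+1]): the integral of the
    underlying PDF over each discrete interval, estimated as the fraction
    of samples landing in that interval."""
    counts = [0] * (len(edges) - 1)
    for s in samples:
        for i in range(len(edges) - 1):
            if edges[i] <= s < edges[i + 1]:
                counts[i] += 1
                break
    n = len(samples)
    return [c / n for c in counts]

# Example: four intensity samples over two discrete intervals.
dpf = discrete_probability_function([0.1, 0.3, 0.6, 0.9], [0.0, 0.5, 1.0])
# dpf == [0.5, 0.5]; the masses sum to 1 when all samples fall in range.
```

Because the DPF carries finite mass per interval rather than a density, moments computed from it converge at the level of the binning itself, which is the advantage over sample-hungry stochastic estimation that the abstract notes.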
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei
One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, via a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
Complex discrete dynamics from simple continuous population models.
Gamarra, Javier G P; Solé, Ricard V
2002-05-01
Nonoverlapping generations have classically been modelled as difference equations in order to account for the discrete nature of reproductive events. However, other events such as resource consumption or mortality are continuous and take place in within-generation time. We therefore consider a more realistic hybrid model: a two-dimensional ODE system of resources and consumers with discrete reproduction events. Numerical and analytical approaches showed that the resulting dynamics resemble a Ricker map, including the doubling route to chaos. Stochastic simulations show that a handling-time parameter for indirect competition among juveniles may affect the qualitative behaviour of the model.
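The Ricker map referred to above, x_{t+1} = x_t exp(r(1 - x_t/k)), and its doubling route to chaos can be reproduced in a few lines (a generic sketch of the map itself, not the authors' hybrid resource-consumer model):

```python
import math

def ricker(x, r, k=1.0):
    """Ricker map: x_{t+1} = x_t * exp(r * (1 - x_t / k))."""
    return x * math.exp(r * (1.0 - x / k))

def attractor(r, x0=0.5, transient=500, keep=64):
    """Distinct values visited after discarding a transient,
    rounded so a converged cycle collapses to its period."""
    x = x0
    for _ in range(transient):
        x = ricker(x, r)
    pts = set()
    for _ in range(keep):
        x = ricker(x, r)
        pts.add(round(x, 6))
    return sorted(pts)

# r = 1.5: stable fixed point at x = k; r = 2.3: period-2 cycle;
# larger r: further doublings and eventually chaos.
```

The flip bifurcation at r = 2 (where the fixed-point multiplier 1 - r crosses -1) starts the period-doubling cascade the abstract describes.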
A framework for discrete stochastic simulation on 3D moving boundary domains
Drawert, Brian; Hellander, Stefan; Trogdon, Michael; ...
2016-11-14
We have developed a method for modeling spatial stochastic biochemical reactions in complex, three-dimensional, and time-dependent domains using the reaction-diffusion master equation formalism. In particular, we look to address the fully coupled problems that arise in systems biology where the shape and mechanical properties of a cell are determined by the state of the biochemistry and vice versa. To validate our method and characterize the error involved, we compare our results for a carefully constructed test problem to those of a microscale implementation. Finally, we demonstrate the effectiveness of our method by simulating a model of polarization and shmoo formation during the mating of yeast. The method is generally applicable to problems in systems biology where biochemistry and mechanics are coupled, and spatial stochastic effects are critical.
Numerical methods for the stochastic Landau-Lifshitz Navier-Stokes equations.
Bell, John B; Garcia, Alejandro L; Williams, Sarah A
2007-07-01
The Landau-Lifshitz Navier-Stokes (LLNS) equations incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. This paper examines explicit Eulerian discretizations of the full LLNS equations. Several computational fluid dynamics approaches are considered (including MacCormack's two-step Lax-Wendroff scheme and the piecewise parabolic method) and are found to give good results for the variance of momentum fluctuations. However, neither of these schemes accurately reproduces the fluctuations in energy or density. We introduce a conservative centered scheme with a third-order Runge-Kutta temporal integrator that does accurately produce fluctuations in density, energy, and momentum. A variety of numerical tests, including the random walk of a standing shock wave, are considered and results from the stochastic LLNS solver are compared with theory, when available, and with molecular simulations using a direct simulation Monte Carlo algorithm.
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed with a finite difference scheme. Implementation of the proposed approach is illustrated by two examples. In the first example, a stochastic ordinary differential equation is considered. This example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves system-wide conditions as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulations and experiments, two kinds of discrete PSOs, the S-PSO and the binary PSO (BPSO), and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results meeting the optimization objectives of the CSP.
Parallel discrete-event simulation of FCFS stochastic queueing networks
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. We show how lookahead can be computed for FCFS queueing network simulations, present performance data demonstrating the method's effectiveness under moderate to heavy loads, and discuss the tradeoff between the quality of lookahead and the cost of computing it.
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, much larger and more complicated models can be built and executed with Poisson Simulation than is possible with the Markov approach.
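The core of the technique, updating an aggregated state variable with a Poisson-distributed number of events per time step, fits in a few lines. Below is a minimal sketch for first-order decay (our own illustration; the function and parameter names are not from the paper):

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson variate (Knuth's multiplication method,
    adequate for the small step intensities used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_sim_decay(x0, k_deg, dt, n_steps, seed=0):
    """Poisson Simulation macro-model of first-order decay: each time step
    removes Poisson(k_deg * x * dt) individuals from the aggregate count."""
    rng = random.Random(seed)
    x = x0
    traj = [x]
    for _ in range(n_steps):
        out = poisson(rng, k_deg * x * dt)
        x = max(x - out, 0)   # the count stays a non-negative integer
        traj.append(x)
    return traj

# With x0 = 1000, k_deg = 1 and t = 2, the mean endpoint is near 1000 * exp(-2).
```

One Poisson draw can aggregate many individual death events, which is exactly why this macro-model runs far faster than an event-by-event micro-simulation while remaining consistent with it in distribution.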
On the use of reverse Brownian motion to accelerate hybrid simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu
Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
Hasenauer, J; Wolf, V; Kazeroonian, A; Theis, F J
2014-09-01
The time-evolution of continuous-time discrete-state biochemical processes is governed by the Chemical Master Equation (CME), which describes the probability of the molecular counts of each chemical species. As the corresponding number of discrete states is, for most processes, large, a direct numerical simulation of the CME is in general infeasible. In this paper we introduce the method of conditional moments (MCM), a novel approximation method for the solution of the CME. The MCM employs a discrete stochastic description for low-copy number species and a moment-based description for medium/high-copy number species. The moments of the medium/high-copy number species are conditioned on the state of the low abundance species, which allows us to capture complex correlation structures arising, e.g., for multi-attractor and oscillatory systems. We prove that the MCM provides a generalization of previous approximations of the CME based on hybrid modeling and moment-based methods. Furthermore, it improves upon these existing methods, as we illustrate using a model for the dynamics of stochastic single-gene expression. This application example shows that due to the more general structure, the MCM allows for the approximation of multi-modal distributions.
State-and-transition simulation models: a framework for forecasting landscape change
Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée
2016-01-01
A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time, inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections.
When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.
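To make the simulation step concrete, here is a minimal sketch of one discrete-time update in the spirit of an STSM: each cell carries a state plus a time-since-transition counter, and transitions are specified as probabilities. This is our own illustration, not ST-Sim code.

```python
import random

def stsm_step(cells, transitions, rng):
    """Advance every cell one time step. 'transitions' maps a state name to
    a list of (probability, new_state) pairs; when a transition fires, the
    cell's time-since-transition counter resets, otherwise its age advances."""
    for cell in cells:
        u = rng.random()
        acc = 0.0
        for p, new_state in transitions.get(cell["state"], []):
            acc += p
            if u < acc:
                cell["state"], cell["age"] = new_state, 0
                break
        else:  # no transition fired this step
            cell["age"] += 1
    return cells
```

A usage example with an illustrative shrub-encroachment rule: `stsm_step(cells, {"grassland": [(0.05, "shrubland")]}, random.Random(0))` converts about 5% of grassland cells per step while every other cell simply ages, which is how counters such as time-since-transition enter the state space.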
Stochastic effects in a discretized kinetic model of economic exchange
NASA Astrophysics Data System (ADS)
Bertotti, M. L.; Chattopadhyay, A. K.; Modanese, G.
2017-04-01
Linear stochastic models and discretized kinetic theory are two complementary analytical techniques used for the investigation of complex systems of economic interactions. The former employ Langevin equations, with an emphasis on stock trade; the latter is based on systems of ordinary differential equations and is better suited for the description of binary interactions, taxation and welfare redistribution. We propose a new framework which establishes a connection between the two approaches by introducing random fluctuations into the kinetic model based on Langevin and Fokker-Planck formalisms. Numerical simulations of the resulting model indicate positive correlations between the Gini index and the total wealth, that suggest a growing inequality with increasing income. Further analysis shows, in the presence of a conserved total wealth, a simultaneous decrease in inequality as social mobility increases, in conformity with economic data.
Stochastic approach and fluctuation theorem for charge transport in diodes
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Gaspard, Pierre
2018-05-01
A stochastic approach for charge transport in diodes is developed in consistency with the laws of electricity, thermodynamics, and microreversibility. In this approach, the electron and hole densities are ruled by diffusion-reaction stochastic partial differential equations and the electric field generated by the charges is determined with the Poisson equation. These equations are discretized in space for the numerical simulations of the mean density profiles, the mean electric potential, and the current-voltage characteristics. Moreover, the full counting statistics of the carrier current and the measured total current including the contribution of the displacement current are investigated. On the basis of local detailed balance, the fluctuation theorem is shown to hold for both currents.
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.
Caro, J Jaime
2016-07-01
Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
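The core loop of a condition-event simulation can be sketched in a few lines. The event logic below (a recurring "relapse" event that bumps a "severity" condition every 2 time units) is purely hypothetical and stands in for the profiles and valuation tables a real DICE model would carry.

```python
import heapq

def run_dice(first_events, conditions, horizon):
    """Minimal condition-event loop in the spirit of DICE: pop the next
    event, let it update condition levels, and let it schedule follow-ups."""
    queue = [(t, name) for name, t in first_events.items()]
    heapq.heapify(queue)
    trace = []
    while queue:
        t, name = heapq.heappop(queue)
        if t > horizon:
            break
        if name == "relapse":
            conditions["severity"] += 1              # condition level jumps
            heapq.heappush(queue, (t + 2.0, name))   # event reschedules itself
        trace.append((t, dict(conditions)))          # snapshot after the event
    return trace

trace = run_dice({"relapse": 1.0}, {"severity": 0}, horizon=6.0)
# events fire at t = 1, 3, 5; severity reaches 3
```

Valuations (cost, utility) would be accumulated between snapshots by integrating the condition levels over the elapsed time, which is what "discretely integrated" refers to.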
A robust nonparametric framework for reconstruction of stochastic differential equation models
NASA Astrophysics Data System (ADS)
Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza
2016-05-01
In this paper, we employ a nonparametric framework to robustly estimate the functional forms of drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, the drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ the least squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to select the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variation in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. In the case of real data, we test our method on discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction on financial data. In both simulation and real data, our algorithm outperforms the KBR method.
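A minimal sketch of the regression step, assuming a linear (degree-1) drift so the Legendre basis reduces to {1, x} and taking an Ornstein-Uhlenbeck process as the test plant; the parameter values and seed are illustrative, and a real implementation would add higher-order basis terms and an MSPE-based order selection.

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, rng):
    """Euler-Maruyama path of the OU process dX = -theta*X dt + sigma dW."""
    xs, x = [], 0.0
    for _ in range(n):
        xs.append(x)
        x += -theta * x * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
    return xs

def fit_linear_drift(xs, dt):
    """Least-squares fit of the increments dX/dt against the basis {1, x}
    (the first two Legendre polynomials), via the 2x2 normal equations."""
    n = len(xs) - 1
    ys = [(xs[i + 1] - xs[i]) / dt for i in range(n)]
    sx = sum(xs[:n])
    sxx = sum(v * v for v in xs[:n])
    sy = sum(ys)
    sxy = sum(v * w for v, w in zip(xs[:n], ys))
    det = n * sxx - sx * sx
    intercept = (sxx * sy - sx * sxy) / det
    slope = (n * sxy - sx * sy) / det      # estimates -theta
    return intercept, slope

rng = random.Random(0)
xs = simulate_ou(1.0, 0.5, 0.01, 50000, rng)
intercept, slope = fit_linear_drift(xs, 0.01)   # slope should be near -1
```

The diffusion term would be estimated the same way from the squared increments (dX)^2/dt regressed on the basis.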
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typically slow O(N^(-1/2)) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
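A plain (non-quasi) Monte Carlo sketch of tau-leaping for the birth-death network 0 -> X (rate k1), X -> 0 (rate k2*x): each leap fires a Poisson number of each reaction channel. In the paper's setting, the pseudo-random stream feeding the Poisson draws would be replaced by a randomised low-discrepancy stream; the rate constants and seed below are illustrative.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method; adequate for the modest leap propensities used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_birth_death(k1, k2, x0, tau, n_leaps, rng):
    """Tau-leaping: over each interval of length tau, each channel fires
    a Poisson(propensity * tau) number of times."""
    x = x0
    for _ in range(n_leaps):
        births = poisson(k1 * tau, rng)
        deaths = poisson(k2 * x * tau, rng)
        x = max(x + births - deaths, 0)   # copy numbers stay non-negative
    return x

rng = random.Random(1)
finals = [tau_leap_birth_death(10.0, 0.1, 0, 0.05, 2000, rng) for _ in range(200)]
mean = sum(finals) / len(finals)   # stationary mean is k1/k2 = 100
```

The discreteness the abstract mentions shows up exactly here: the integer-valued Poisson draws break the smoothness that quasi-Monte Carlo error bounds usually rely on.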
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Marchetti, Luca; Reali, Federico; Priami, Corrado
2018-02-01
The stochastic simulation algorithm (SSA) has been widely used for simulating biochemical reaction networks. SSA is able to capture the inherently intrinsic noise of the biological system, which is due to the discreteness of species populations and to the randomness of their reciprocal interactions. However, SSA does not consider other sources of heterogeneity in biochemical reaction systems, which are referred to as extrinsic noise. Here, we extend two simulation approaches, namely the integration-based method and the rejection-based method, to take extrinsic noise into account by allowing the reaction propensities to vary in a time- and state-dependent manner. For both methods, new efficient implementations are introduced and their efficiency and applicability to biological models are investigated. Our numerical results suggest that the rejection-based method performs better than the integration-based method when extrinsic noise is considered.
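The rejection-based idea can be sketched with a thinning sampler for a single reaction whose propensity varies in time: propose firing times from a constant bounding rate a_max and accept with probability a(t)/a_max. The sinusoidal extrinsic modulation below is a hypothetical example, and a_max is assumed known.

```python
import math
import random

def next_firing_time(t0, propensity, a_max, rng):
    """Rejection (thinning) sampler for the next firing time of a reaction
    whose time-varying propensity is bounded above by a_max."""
    t = t0
    while True:
        t += rng.expovariate(a_max)               # candidate from bounding rate
        if rng.random() < propensity(t) / a_max:  # accept with ratio a(t)/a_max
            return t

rate = lambda t: 2.0 + math.sin(t)   # hypothetical extrinsic noise, bounded by 3
rng = random.Random(7)
t, times = 0.0, []
for _ in range(20000):
    t = next_firing_time(t, rate, 3.0, rng)
    times.append(t)
avg_rate = len(times) / times[-1]    # long-run rate should be near 2.0
```

No integral of the propensity is ever computed, which is why the rejection-based approach scales well when propensities have no closed-form integral.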
NASA Astrophysics Data System (ADS)
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall
NASA Astrophysics Data System (ADS)
Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.
2017-06-01
Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the mean, variance, and ACFs of both the continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
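A toy version of the mixed-type construction and the adjustment step, assuming Bernoulli wet/dry occurrences, exponential depths, and a simple proportional rescaling to restore the coarse-scale total; the paper's actual adjustment procedure is more careful about preserving the stochastic structure, so this is only a sketch of the consistency requirement.

```python
import random

def disaggregate(coarse_total, k, p_wet, mean_depth, rng):
    """Draw k fine-scale values as (Bernoulli occurrence) x (exponential
    depth), then rescale the wet values so they sum exactly to the
    coarse-scale total."""
    fine = [rng.expovariate(1.0 / mean_depth) if rng.random() < p_wet else 0.0
            for _ in range(k)]
    s = sum(fine)
    if s == 0.0:                         # all-dry draw: assign the whole total
        fine[rng.randrange(k)] = coarse_total
        return fine
    return [v * coarse_total / s for v in fine]

rng = random.Random(3)
series = disaggregate(12.0, 6, 0.4, 2.0, rng)
total = sum(series)                      # exactly the parent coarse total
```

The mixed (discrete times continuous) form is what lets the mean, variance, and ACF be written in terms of the occurrence and depth components separately.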
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. 
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chuchu, E-mail: chenchuchu@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Zhang, Liying, E-mail: lyzhang@lsec.cc.ac.cn
Stochastic Maxwell equations with additive noise are a system of stochastic Hamiltonian partial differential equations intrinsically, possessing the stochastic multi-symplectic conservation law. It is shown that the averaged energy increases linearly with respect to the evolution of time and the flow of stochastic Maxwell equations with additive noise preserves the divergence in the sense of expectation. Moreover, we propose three novel stochastic multi-symplectic methods to discretize stochastic Maxwell equations in order to investigate the preservation of these properties numerically. We make theoretical discussions and comparisons on all of the three methods to observe that all of them preserve the corresponding discrete version of the averaged divergence. Meanwhile, we obtain the corresponding dissipative property of the discrete averaged energy satisfied by each method. Especially, the evolution rates of the averaged energies for all of the three methods are derived, which are in accordance with the continuous case. Numerical experiments are performed to verify our theoretical results.
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
Diffusion of multiple species with excluded-volume effects.
Bruna, Maria; Chapman, S Jonathan
2012-11-28
Stochastic models of diffusion with excluded-volume effects are used to model many biological and physical systems at a discrete level. The average properties of the population may be described by a continuum model based on partial differential equations. In this paper we consider multiple interacting subpopulations/species and study how the inter-species competition emerges at the population level. Each individual is described as a finite-size hard-core interacting particle undergoing Brownian motion. The link between the discrete stochastic equations of motion and the continuum model is considered systematically using the method of matched asymptotic expansions. The system for two species leads to a nonlinear cross-diffusion system for each subpopulation, which captures the enhancement of the effective diffusion rate due to excluded-volume interactions between particles of the same species, and the diminishment due to particles of the other species. This model can explain two alternative notions of the diffusion coefficient that are often confounded, namely collective diffusion and self-diffusion. Simulations of the discrete system show good agreement with the analytic results.
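The discrete-level picture can be illustrated with a symmetric simple exclusion process on a ring: a lattice caricature of hard-core Brownian particles in which a hop is rejected whenever the target site is occupied. The lattice size, particle count, and seed are illustrative; the paper's particles are off-lattice and finite-sized.

```python
import random

def sep_sweep(occ, rng):
    """One sweep of a symmetric simple exclusion process on a ring: each
    attempted nearest-neighbour hop is rejected if the target site is
    occupied (the discrete excluded-volume rule)."""
    n = len(occ)
    for _ in range(n):
        i = rng.randrange(n)
        if not occ[i]:
            continue
        j = (i + rng.choice((-1, 1))) % n
        if not occ[j]:                    # hop only into an empty site
            occ[i], occ[j] = False, True
    return occ

rng = random.Random(5)
occ = [k < 20 for k in range(50)]         # 20 particles on a 50-site ring
for _ in range(200):
    sep_sweep(occ, rng)
count = sum(occ)                          # particle number is conserved
```

Tracking a tagged particle in this model versus tracking the density field is exactly the self-diffusion versus collective-diffusion distinction the abstract draws.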
NASA Astrophysics Data System (ADS)
Gholizadeh Doonechaly, N.; Rahman, S. S.
2012-05-01
Simulation of naturally fractured reservoirs offers significant challenges due to the lack of a methodology that can utilize field data. To date, several methods have been proposed to characterize naturally fractured reservoirs. Among them is the unfolding/folding method, which offers some degree of accuracy in estimating the probability of the existence of fractures in a reservoir. There are also statistical approaches that integrate all levels of field data to simulate the fracture network. This approach, however, depends on the availability of data sources, such as seismic attributes, core descriptions, well logs, etc., which are often difficult to obtain field-wide. In this study a hybrid tectono-stochastic simulation is proposed to characterize a naturally fractured reservoir. A finite element based model is used to simulate the tectonic event of folding and unfolding of a geological structure. A nested neuro-stochastic technique is used to develop the inter-relationship between the data, and at the same time it utilizes the sequential Gaussian approach to analyze field data along with fracture probability data. This approach has the ability to overcome the commonly experienced discontinuity of the data in both horizontal and vertical directions. This hybrid technique is used to generate a discrete fracture network of a specific Australian gas reservoir, Palm Valley in the Northern Territory. The results of this study are of significant benefit in accurately describing fluid flow simulation and well placement for maximal hydrocarbon recovery.
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
NASA Astrophysics Data System (ADS)
Sun, Ying; Ding, Derui; Zhang, Sunjie; Wei, Guoliang; Liu, Hongjian
2018-07-01
In this paper, the non-fragile l2-l∞ control problem is investigated for a class of discrete-time stochastic nonlinear systems under event-triggered communication protocols, which determine whether the measurement output should be transmitted to the controller or not. The main purpose of the addressed problem is to design an event-based output feedback controller subject to gain variations guaranteeing the prescribed disturbance attenuation level described by the l2-l∞ performance index. By utilizing the Lyapunov stability theory combined with the S-procedure, a sufficient condition is established to guarantee both the exponential mean-square stability and the l2-l∞ performance for the closed-loop system. In addition, with the help of the orthogonal decomposition, the desired controller parameter is obtained in terms of the solution to certain linear matrix inequalities. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed event-based controller design scheme.
Role of weakest links and system-size scaling in multiscale modeling of stochastic plasticity
NASA Astrophysics Data System (ADS)
Ispánovity, Péter Dusán; Tüzes, Dániel; Szabó, Péter; Zaiser, Michael; Groma, István
2017-02-01
Plastic deformation of crystalline and amorphous matter often involves intermittent local strain burst events. To understand the physical background of the phenomenon a minimal stochastic mesoscopic model was introduced, where details of the microstructure evolution are statistically represented in terms of a fluctuating local yield threshold. In the present paper we propose a method for determining the corresponding yield stress distribution for the case of crystal plasticity from lower scale discrete dislocation dynamics simulations which we combine with weakest link arguments. The success of scale linking is demonstrated by comparing stress-strain curves obtained from the resulting mesoscopic and the underlying discrete dislocation models in the microplastic regime. As shown by various scaling relations they are statistically equivalent and behave identically in the thermodynamic limit. The proposed technique is expected to be applicable to different microstructures and also to amorphous materials.
Garijo, N; Manzano, R; Osta, R; Perez, M A
2012-12-07
Cell migration and proliferation have been modelled in the literature as processes similar to diffusion. However, using diffusion models to simulate the proliferation and migration of cells tends to create a homogeneous distribution in the cell density that does not correlate with empirical observations. In fact, the mechanism of cell dispersal is not diffusion. Cells disperse by crawling or proliferation, or are transported in a moving fluid. The use of cellular automata, particle models or cell-based models can overcome this limitation. This paper presents a stochastic cellular automata model to simulate the proliferation, migration and differentiation of cells. These processes are considered as completely stochastic as well as discrete. The model developed was applied to predict the behaviour of in vitro cell cultures performed with adult muscle satellite cells. Moreover, a non-homogeneous distribution of cells was observed inside the culture well and, using the above-mentioned stochastic cellular automata model, we have been able to predict this heterogeneous cell distribution and compute accurate quantitative results. Differentiation was also incorporated into the computational simulation. The results predicted the myotube formation that typically occurs with adult muscle satellite cells. In conclusion, we have shown how a stochastic cellular automata model can be implemented and is capable of reproducing the in vitro behaviour of adult muscle satellite cells.
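A minimal stochastic cellular automaton in this spirit, with hypothetical migration and proliferation probabilities on a periodic lattice (differentiation omitted): each cell, in random order, either divides into or migrates to a random empty neighbour, so occupancy stays exclusive and the colony spreads heterogeneously rather than diffusively.

```python
import random

def ca_step(grid, p_move, p_div, rng):
    """One update of a toy stochastic cellular automaton: cells divide
    into or migrate to a random empty von Neumann neighbour."""
    n = len(grid)
    cells = [(i, j) for i in range(n) for j in range(n) if grid[i][j]]
    rng.shuffle(cells)
    for i, j in cells:
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        ni, nj = (i + di) % n, (j + dj) % n
        if grid[ni][nj]:
            continue                           # target occupied: do nothing
        r = rng.random()
        if r < p_div:
            grid[ni][nj] = 1                   # proliferation
        elif r < p_div + p_move:
            grid[i][j], grid[ni][nj] = 0, 1    # migration
    return grid

rng = random.Random(11)
grid = [[0] * 20 for _ in range(20)]
grid[10][10] = 1                               # seed a single cell
for _ in range(100):
    ca_step(grid, 0.5, 0.2, rng)
population = sum(map(sum, grid))
```

Because cells only move or divide into empty sites, the population is monotone non-decreasing and bounded by the lattice size, and clustering emerges naturally.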
Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ku, R. T.
1972-01-01
The problem of optimal control of a linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open-loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via the Monte Carlo method show that, in terms of the performance measure, for stable systems the open-loop feedback optimal control scheme is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
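The joint state/parameter estimation step can be sketched with an extended Kalman filter on the augmented state (x, a) for a scalar plant x[k+1] = a*x[k] + w with measurement y[k] = x[k] + v. The noise levels, initial guesses, and the small covariance drift on the parameter are illustrative choices, not the thesis's settings; the 2x2 algebra is written out element-wise.

```python
import math
import random

def simulate_plant(a_true, q, r, n, rng):
    """Scalar plant x[k+1] = a x[k] + w with noisy output y = x + v."""
    ys, x = [], 0.0
    for _ in range(n):
        x = a_true * x + rng.gauss(0.0, math.sqrt(q))
        ys.append(x + rng.gauss(0.0, math.sqrt(r)))
    return ys

def ekf_joint(ys, q, r):
    """EKF on the augmented state (x, a): the unknown gain a is estimated
    together with the state, as in the enforced separation scheme."""
    x, a = 0.0, 0.5                                  # initial guesses
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0
    for y in ys:
        # predict: f(x, a) = (a*x, a), Jacobian F = [[a, x], [0, 1]]
        f00, f01 = a, x
        n00, n01 = f00 * p00 + f01 * p10, f00 * p01 + f01 * p11
        pp00 = n00 * f00 + n01 * f01 + q
        pp01 = n01
        pp10 = p10 * f00 + p11 * f01
        pp11 = p11 + 1e-6                            # tiny drift keeps a adaptable
        x = a * x
        # update with measurement y = x + v, H = [1, 0]
        s = pp00 + r
        k0, k1 = pp00 / s, pp10 / s
        innov = y - x
        x, a = x + k0 * innov, a + k1 * innov
        p00, p01 = (1.0 - k0) * pp00, (1.0 - k0) * pp01
        p10, p11 = pp10 - k1 * pp00, pp11 - k1 * pp01
    return x, a

rng = random.Random(4)
ys = simulate_plant(0.8, 0.09, 0.01, 4000, rng)
x_hat, a_hat = ekf_joint(ys, 0.09, 0.01)             # a_hat should approach 0.8
```

The filtered pair (x_hat, a_hat) is exactly what the enforced separation scheme feeds into the certainty-equivalent controller.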
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]₈, [10]₈, [45]₈, and [90]₈; cross-ply laminates [0₂/90₂]s; angle-ply laminates [+45₂/-45₂]s; double-edge-notched [0]₈ laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating-unit-cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
Di Costanzo, Ezio; Giacomello, Alessandro; Messina, Elisa; Natalini, Roberto; Pontrelli, Giuseppe; Rossi, Fabrizio; Smits, Robert; Twarogowska, Monika
2018-03-14
We propose a discrete-in-continuous mathematical model describing the in vitro growth process of biopsy-derived mammalian cardiac progenitor cells growing as clusters in the form of spheres (Cardiospheres). The approach is hybrid: discrete at the cellular scale and continuous at the molecular level. In the present model, cells are subject to a self-organizing collective dynamics mechanism and, additionally, they can proliferate and differentiate, also depending on stochastic processes. The two latter processes are triggered and regulated by chemical signals present in the environment. Numerical simulations show the structure and the development of the clustered progenitors and are in good agreement with the results obtained from in vitro experiments.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop in terms of makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time. A discrete-event simulation model of a stochastic dynamic job shop manufacturing system is developed for investigation purposes. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
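The flavour of such dispatching-rule comparisons can be seen already on a static single machine, where serving jobs in shortest-processing-time (SPT) order provably minimises mean flow time. This sketch deliberately ignores the paper's dynamic arrivals, due dates, and sequence-dependent setups; the job-size distribution and seed are illustrative.

```python
import random

def mean_flow_time(proc_times):
    """Mean flow time of a static single-machine queue served in the
    given order (flow time of a job = its completion time from t = 0)."""
    t, total = 0.0, 0.0
    for p in proc_times:
        t += p
        total += t
    return total / len(proc_times)

rng = random.Random(9)
jobs = [rng.uniform(1.0, 10.0) for _ in range(50)]
fifo = mean_flow_time(jobs)            # first-in-first-out order
spt = mean_flow_time(sorted(jobs))     # shortest-processing-time order
```

SPT never does worse than FIFO on mean flow time; rules like SIMSET extend the same greedy idea to setup times in the dynamic stochastic shop.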
Generation of Complex Karstic Conduit Networks with a Hydro-chemical Model
NASA Astrophysics Data System (ADS)
De Rooij, R.; Graham, W. D.
2016-12-01
The discrete-continuum approach is very well suited to simulate flow and solute transport within karst aquifers. Using this approach, discrete one-dimensional conduits are embedded within a three-dimensional continuum representative of the porous limestone matrix. Typically, however, little is known about the geometry of the karstic conduit network. As such the discrete-continuum approach is rarely used for practical applications. It may be argued, however, that the uncertainty associated with the geometry of the network could be handled by modeling an ensemble of possible karst conduit networks within a stochastic framework. We propose to generate stochastically realistic karst conduit networks by simulating the widening of conduits as caused by the dissolution of limestone over geological relevant timescales. We illustrate that advanced numerical techniques permit to solve the non-linear and coupled hydro-chemical processes efficiently, such that relatively large and complex networks can be generated in acceptable time frames. Instead of specifying flow boundary conditions on conduit cells to recharge the network as is typically done in classical speleogenesis models, we specify an effective rainfall rate over the land surface and let model physics determine the amount of water entering the network. This is advantageous since the amount of water entering the network is extremely difficult to reconstruct, whereas the effective rainfall rate may be quantified using paleoclimatic data. Furthermore, we show that poorly known flow conditions may be constrained by requiring a realistic flow field. Using our speleogenesis model we have investigated factors that influence the geometry of simulated conduit networks. We illustrate that our model generates typical branchwork, network and anastomotic conduit systems. Flow, solute transport and water ages in karst aquifers are simulated using a few illustrative networks.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
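The fine-grained side of such a framework can be illustrated with an exact continuous-time Markov chain simulation of a population of two-state ion channels; the opening/closing rates, channel count, and seed are illustrative. The coarse-grained stochastic differential equation models the abstract describes would approximate exactly this kind of trajectory.

```python
import random

def gillespie_channels(n, alpha, beta, t_end, rng):
    """Exact CTMC simulation of n independent two-state ion channels
    (closed -> open at rate alpha, open -> closed at rate beta).
    Returns the time-averaged open fraction."""
    n_open, t, acc = 0, 0.0, 0.0
    while t < t_end:
        a_open = alpha * (n - n_open)         # total opening propensity
        a_close = beta * n_open               # total closing propensity
        a_tot = a_open + a_close
        dt = rng.expovariate(a_tot)
        acc += n_open * min(dt, t_end - t)    # time-weighted open count
        t += dt
        if t >= t_end:
            break
        if rng.random() < a_open / a_tot:
            n_open += 1
        else:
            n_open -= 1
    return acc / (n * t_end)

rng = random.Random(2)
frac = gillespie_channels(100, 2.0, 1.0, 200.0, rng)   # equilibrium is 2/3
```

The fluctuations of n_open around its mean are the channel noise that the non-Monte Carlo frequency-domain analysis characterizes without running many such trajectories.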
Monte-Carlo Simulations of Drug Delivery on Biofilms
NASA Astrophysics Data System (ADS)
Buldum, Alper; Simpson, Andrew
2013-03-01
The focus of this work is on biofilms that grow in the lungs of cystic fibrosis (CF) patients. A discrete model which describes the nutrient and biomass as discrete particles is created. Diffusion of the nutrient, consumption of the nutrient by microbial particles, and growth and decay of microbial particles are simulated using stochastic processes. Our model extends the complexity of the biofilm system by including the conversion and reversion of living bacteria into a hibernated state, known as persister bacteria. Another new contribution is the inclusion of antimicrobial in two forms: an aqueous solution and encapsulated in biodegradable nanoparticles. The bacteria population growth and spatial variation of drugs and their effectiveness are investigated in this work. Supported by NIH
Discrete-event system simulation on small and medium enterprises productivity improvement
NASA Astrophysics Data System (ADS)
Sulistio, J.; Hidayah, N. A.
2017-12-01
Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production process in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems due to the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials, and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor increased by up to 50% by adding capacity to the dyeing and drying machines.
On the physical realizability of quantum stochastic walks
NASA Astrophysics Data System (ADS)
Taketani, Bruno; Govia, Luke; Schuhmacher, Peter; Wilhelm, Frank
Quantum walks are a promising framework that can be used both to understand and to implement quantum information processing tasks. The recently developed quantum stochastic walk combines the concepts of a quantum walk and a classical random walk through the open-system evolution of a quantum system, and has been shown to have applications in fields as far-reaching as artificial intelligence. However, nature puts significant constraints on the kind of open-system evolutions that can be realized in a physical experiment. In this work, we discuss the restrictions on the allowed open-system evolution and the physical assumptions underpinning them. We then introduce a way to circumvent some of these restrictions and simulate a more general quantum stochastic walk on a quantum computer, using a technique we call quantum trajectories on a quantum computer. We finally describe a circuit QED approach to implementing discrete-time quantum stochastic walks.
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.
Relativistic analysis of stochastic kinematics
NASA Astrophysics Data System (ADS)
Giona, Massimiliano
2017-10-01
The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor γ(w) = (1 - w²/c²)^(-1/2). Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: The diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to γ(w). The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.
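The one-dimensional transformation rule quoted above is simple enough to encode directly. A minimal sketch (function names are ours; c = 1 by default):

```python
def lorentz_gamma(w, c=1.0):
    """Lorentz factor gamma(w) = (1 - w^2/c^2)^(-1/2)."""
    return (1.0 - (w / c) ** 2) ** -0.5

def moving_frame_diffusivity(D_rest, w, c=1.0):
    """1D effective diffusivity seen from a frame moving at velocity w:
    D' = D / gamma(w)^3, as stated in the abstract."""
    return D_rest / lorentz_gamma(w, c) ** 3
```

For example, at w = 0.6c one has γ = 1.25, so the measured diffusivity is reduced by a factor 1.25³ ≈ 1.95.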
A stochastic hybrid model for pricing forward-start variance swaps
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah
2017-11-01
Recently, market players have been exposed to the astounding increase in the trading volume of variance swaps. In this paper, the forward-start nature of a variance swap is being inspected, where hybridizations of equity and interest rate models are used to evaluate the price of discretely-sampled forward-start variance swaps. The Heston stochastic volatility model is being extended to incorporate the dynamics of the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. This is essential since previous studies on variance swaps were mainly focusing on instantaneous-start variance swaps without considering the interest rate effects. This hybrid model produces an efficient semi-closed form pricing formula through the development of forward characteristic functions. The performance of this formula is investigated via simulations to demonstrate how the formula performs for different sampling times and against the real market scenario. Comparison done with the Monte Carlo simulation which was set as our main reference point reveals that our pricing formula gains almost the same precision in a shorter execution time.
NASA Astrophysics Data System (ADS)
Golinski, M. R.
2006-07-01
Ecologists have observed that environmental noise affects population variance in the logistic equation for one-species growth. Interactions between deterministic and stochastic dynamics in a one-dimensional system result in increased variance in species population density over time. Since natural populations do not live in isolation, the present paper simulates a discrete-time two-species competition model with environmental noise to determine the type of colored population noise generated by extreme conditions in the long-term population dynamics of competing populations. Discrete Fourier analysis is applied to the simulation results and the calculated Hurst exponent (H) is used to determine how the color of population noise for the two species corresponds to extreme conditions in population dynamics. To interpret the biological meaning of the color of noise generated by the two-species model, the paper determines the color of noise generated by three reference models: (1) a two-dimensional discrete-time white noise model (0 ⩽ H < 1/2); (2) a two-dimensional fractional Brownian motion model (H = 1/2); and (3) a two-dimensional discrete-time model with noise for unbounded growth of two uncoupled species (1/2 < H ⩽ 1).
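As an illustrative companion to the spectral approach described above, the Hurst exponent of a time series can also be estimated with the classical rescaled-range (R/S) statistic. This is a hedged sketch of an alternative estimator, not the paper's Fourier-based method:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent H.
    Splits the series into chunks of doubling size, computes the
    mean rescaled range per size, and fits the log-log slope."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # scale
            if s > 0:
                vals.append(r / s)
        if vals:
            sizes.append(size)
            rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

Applied to iid noise the estimate sits near 1/2, while an integrated (Brownian-like) series pushes the slope toward 1, mirroring the white-noise vs. unbounded-growth reference models above.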
Stochastic dynamic modeling of regular and slow earthquakes
NASA Astrophysics Data System (ADS)
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day.
The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces this stochasticity into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.
A hybrid continuous-discrete method for stochastic reaction-diffusion processes.
Lo, Wing-Cheong; Zheng, Likun; Nie, Qing
2016-09-01
Stochastic fluctuations in reaction-diffusion processes often have a substantial effect on the spatial and temporal dynamics of signal transduction in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method.
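The core split described above (Gaussian approximation for diffusive jumps, exact Gillespie steps for reactions) can be sketched for a 1D compartment chain. This is a hypothetical, simplified illustration: the paper additionally tracks time-dependent jump covariances, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_step(n, d, k_deg, dt):
    """One hybrid step on a 1D array of compartment counts `n`:
    diffusion via a Gaussian approximation of the Poisson jump counts,
    reactions (first-order degradation X -> 0, rate k_deg) via the
    Gillespie algorithm restricted to the interval [0, dt)."""
    n = n.astype(float)
    m = len(n)
    # diffusion: jump counts i -> i+1 and i+1 -> i are ~ Poisson(d*n*dt),
    # approximated here by Gaussians with matching mean and variance
    right = d * n[:-1] * dt
    left = d * n[1:] * dt
    jr = np.maximum(0.0, np.round(right + np.sqrt(right) * rng.standard_normal(m - 1)))
    jl = np.maximum(0.0, np.round(left + np.sqrt(left) * rng.standard_normal(m - 1)))
    n[:-1] += jl - jr
    n[1:] += jr - jl
    n = np.maximum(n, 0.0)
    # reactions: exact SSA within the step
    t = 0.0
    while True:
        a0 = k_deg * n.sum()
        if a0 <= 0.0:
            break
        t += rng.exponential(1.0 / a0)
        if t > dt:
            break
        i = rng.choice(m, p=k_deg * n / a0)  # pick reacting compartment
        n[i] -= 1.0
    return n
```

Because the Gaussian update replaces one random draw per jump event with one draw per compartment pair per step, the step size no longer needs to resolve individual diffusive hops.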
The cost of conservative synchronization in parallel discrete event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.
Extending the Multi-level Method for the Simulation of Stochastic Biological Systems.
Lester, Christopher; Baker, Ruth E; Giles, Michael B; Yates, Christian A
2016-08-01
The multi-level method for discrete-state systems, first introduced by Anderson and Higham (SIAM Multiscale Model Simul 10(1):146-179, 2012), is a highly efficient simulation technique that can be used to elucidate statistical characteristics of biochemical reaction networks. A single point estimator is produced in a cost-effective manner by combining a number of estimators of differing accuracy in a telescoping sum, and, as such, the method has the potential to revolutionise the field of stochastic simulation. In this paper, we present several refinements of the multi-level method which render it easier to understand and implement, and also more efficient. Given the substantial and complex nature of the multi-level method, the first part of this work reviews existing literature, with the aim of providing a practical guide to the use of the multi-level method. The second part provides the means for a deft implementation of the technique and concludes with a discussion of a number of open problems.
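The telescoping-sum idea above can be made concrete on a toy pure-death process X → X−1 at rate kX, where tau-leaping at step τ is cheap but biased. A cheap coarse estimate is corrected by coupled coarse/fine pairs at successively finer steps, in the spirit of the Anderson-Higham coupling; all parameter choices below are our own illustrative simplifications, not the paper's refinements:

```python
import numpy as np

rng = np.random.default_rng(2)

def tau_leap(x0, k, T, tau):
    """Plain tau-leaping for pure decay X -> X-1 at rate k*X."""
    x, t = float(x0), 0.0
    while t < T - 1e-12:
        x = max(x - rng.poisson(k * x * tau), 0.0)
        t += tau
    return x

def coupled_pair(x0, k, T, tau_c):
    """One coupled (coarse, fine) tau-leap pair with fine step tau_c/2.
    The two paths share Poisson counts so their difference has low
    variance -- the key to cheap correction terms."""
    tau_f = tau_c / 2.0
    xc = xf = float(x0)
    t = 0.0
    while t < T - 1e-12:
        ac = k * xc                       # coarse propensity, frozen
        for _ in range(2):                # two fine substeps
            af = k * xf
            lo = min(ac, af)
            shared = rng.poisson(lo * tau_f)
            xc = max(xc - shared - rng.poisson((ac - lo) * tau_f), 0.0)
            xf = max(xf - shared - rng.poisson((af - lo) * tau_f), 0.0)
        t += tau_c
    return xc, xf

def multilevel_mean(x0, k, T, levels, n_paths):
    """Telescoping estimator of E[X(T)]: coarse estimate plus
    correction terms E[fine - coarse] at each level."""
    tau0 = T / 4.0
    est = np.mean([tau_leap(x0, k, T, tau0) for _ in range(n_paths)])
    for lev in range(1, levels + 1):
        tau_c = tau0 / 2 ** (lev - 1)
        corr = []
        for _ in range(n_paths):
            xc, xf = coupled_pair(x0, k, T, tau_c)
            corr.append(xf - xc)
        est += np.mean(corr)
    return est
```

For X(0) = 100, k = 1, T = 1 the exact mean is 100·e⁻¹ ≈ 36.8, and the combined estimator matches the finest-level accuracy while most samples are spent on the cheap coarse level.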
General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems
1988-01-01
P. J. McLane, "Optimal Stochastic Control of Linear Systems with State- and Control-Dependent Disturbances," IEEE Trans. Auto. Contr., Vol. 16, No... Vol. 45, No. 1, pp. 359-362, 1987 [9] R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," IEEE Trans. Syst..." J. of Math. Anal. Appl., Vol. 47, pp. 156-161, 1974 [14] E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," IEEE Trans
Waiting time distribution for continuous stochastic systems
NASA Astrophysics Data System (ADS)
Gernert, Robert; Emary, Clive; Klapp, Sabine H. L.
2014-12-01
The waiting time distribution (WTD) is a common tool for analyzing discrete stochastic processes in classical and quantum systems. However, there are many physical examples where the dynamics is continuous and only approximately discrete, or where it is favourable to discuss the dynamics on a discretized and a continuous level in parallel. An example is the hindered motion of particles through potential landscapes with barriers. In the present paper we propose a consistent generalization of the WTD from the discrete case to situations where the particles perform continuous barrier crossing characterized by a finite duration. To this end, we introduce a recipe to calculate the WTD from the Fokker-Planck (Smoluchowski) equation. In contrast to the closely related first passage time distribution (FPTD), which is frequently used to describe continuous processes, the WTD contains information about the direction of motion. As an application, we consider the paradigmatic example of an overdamped particle diffusing through a washboard potential. To verify the approach and to elucidate its numerical implications, we compare the WTD defined via the Smoluchowski equation with data from direct simulation of the underlying Langevin equation and find full consistency provided that the jumps in the Langevin approach are defined properly. Moreover, for sufficiently large energy barriers, the WTD defined via the Smoluchowski equation becomes consistent with that resulting from the analytical solution of a (two-state) master equation model for the short-time dynamics developed previously by us [Phys. Rev. E 86, 061135 (2012), 10.1103/PhysRevE.86.061135]. Thus, our approach "interpolates" between these two types of stochastic motion. We illustrate our approach for both symmetric systems and systems under constant force.
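The Langevin side of the comparison above can be sketched directly: simulate overdamped motion in a tilted washboard potential U(x) = −cos(x) − Fx and record a "jump" each time the particle advances by one spatial period, keeping the direction of motion as the WTD requires. Parameters below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

def washboard_waiting_times(F=0.5, D=0.5, dt=1e-2, T=1000.0):
    """Euler-Maruyama simulation of dx = (-sin(x) + F) dt + sqrt(2D) dW
    in the washboard potential U(x) = -cos(x) - F*x.  A jump is
    registered when x crosses into the next period (+/- 2*pi), and
    the waiting time since the previous jump plus the jump direction
    are stored -- the directional information that distinguishes the
    WTD from the first passage time distribution."""
    x, t = 0.0, 0.0
    n = int(T / dt)
    xi = rng.standard_normal(n)
    noise = np.sqrt(2.0 * D * dt)
    next_up, next_dn = 2 * np.pi, -2 * np.pi
    last_jump, waits, dirs = 0.0, [], []
    for i in range(n):
        x += (-np.sin(x) + F) * dt + noise * xi[i]
        t += dt
        if x >= next_up:
            waits.append(t - last_jump); dirs.append(+1)
            last_jump = t; next_up += 2 * np.pi; next_dn += 2 * np.pi
        elif x <= next_dn:
            waits.append(t - last_jump); dirs.append(-1)
            last_jump = t; next_up -= 2 * np.pi; next_dn -= 2 * np.pi
    return np.array(waits), np.array(dirs)
```

With a forward tilt F > 0, forward waiting times dominate, and a histogram of `waits` split by `dirs` approximates the two branches of the WTD.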
A generic discrete-event simulation model for outpatient clinics in a large public hospital.
Weerawat, Waressara; Pichitlamken, Juta; Subsombat, Peerapong
2013-01-01
The orthopedic outpatient department (OPD) ward in a large Thai public hospital is modeled using Discrete-Event Stochastic (DES) simulation. Key Performance Indicators (KPIs) are used to measure effects across various clinical operations during different shifts throughout the day. By considering various KPIs such as wait times to see doctors, percentage of patients who can see a doctor within a target time frame, and the time that the last patient completes their doctor consultation, bottlenecks are identified and resource-critical clinics can be prioritized. The simulation model quantifies the chronic, high patient congestion that is prevalent amongst Thai public hospitals with very high patient-to-doctor ratios. Our model can be applied across five different OPD wards by modifying the model parameters. Throughout this work, we show how DES models can be used as decision-support tools for hospital management.
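A stripped-down version of such a clinic model is a multi-server queue with the KPIs named above. The sketch below treats one clinic as an M/M/c system with Poisson arrivals and exponential consultations; all parameters are illustrative, not calibrated to the hospital data:

```python
import heapq
import random

def simulate_clinic(n_patients=2000, arrival_rate=1.0, service_rate=0.3,
                    doctors=4, target_wait=5.0, seed=7):
    """Minimal discrete-event simulation of one outpatient clinic.
    Returns (mean wait, fraction seen within target_wait, time the
    last patient finishes) -- KPIs analogous to those in the abstract."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # Poisson arrival stream
        arrivals.append(t)
    free_at = [0.0] * doctors                # when each doctor is next free
    heapq.heapify(free_at)
    waits, last_done = [], 0.0
    for a in arrivals:
        doc_free = heapq.heappop(free_at)    # earliest available doctor
        start = max(a, doc_free)             # wait for patient AND doctor
        done = start + rng.expovariate(service_rate)
        heapq.heappush(free_at, done)
        waits.append(start - a)
        last_done = max(last_done, done)
    mean_wait = sum(waits) / len(waits)
    within = sum(w <= target_wait for w in waits) / len(waits)
    return mean_wait, within, last_done
```

Adapting the model to a different ward then amounts to changing the arrival rate, service rate, and staffing parameters, which is the reuse pattern the abstract describes.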
NASA Astrophysics Data System (ADS)
Alber, Mark; Chen, Nan; Glimm, Tilmann; Lushnikov, Pavel M.
2006-05-01
The cellular Potts model (CPM) has been used for simulating various biological phenomena such as differential adhesion, fruiting body formation of the slime mold Dictyostelium discoideum, angiogenesis, cancer invasion, chondrogenesis in embryonic vertebrate limbs, and many others. We derive a continuous limit of a discrete one-dimensional CPM with the chemotactic interactions between cells in the form of a Fokker-Planck equation for the evolution of the cell probability density function. This equation is then reduced to the classical macroscopic Keller-Segel model. In particular, all coefficients of the Keller-Segel model are obtained from parameters of the CPM. Theoretical results are verified numerically by comparing Monte Carlo simulations for the CPM with numerics for the Keller-Segel model.
Gursoy, Gamze; Terebus, Anna; Youfang Cao; Jie Liang
2016-08-01
Stochasticity plays important roles in the regulation of biochemical reaction networks when the copy numbers of molecular species are small. Studies based on the Stochastic Simulation Algorithm (SSA) have shown that a basic reaction system can display stochastic focusing (SF), in which signal noise increases the sensitivity of the network. Although the SSA has been widely used to study stochastic networks, it is ineffective at examining rare events, and this becomes a significant issue when the tails of probability distributions are relevant, as is the case for SF. Here we use the ACME method to compute the exact solution of the discrete Chemical Master Equation and to study a network where SF was reported. We showed that the level of SF depends on the degree of fluctuation of the signal molecule. We discovered that signaling noise under certain conditions in the same reaction network can lead to a decrease in the system's sensitivity, so that the network can experience stochastic defocusing. These results highlight the fundamental role of stochasticity in biological reaction networks and the need for exact computation of the probability landscape of the molecules in the system.

NASA Astrophysics Data System (ADS)
Wang, Xiao-Tian; Wu, Min; Zhou, Ze-Min; Jing, Wei-Shu
2012-02-01
This paper deals with the problem of discrete time option pricing using the fractional long memory stochastic volatility model with transaction costs. Through the 'anchoring and adjustment' argument in a discrete time setting, a European call option pricing formula is obtained.
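For orientation, a standard discrete-time baseline pricer is the Cox-Ross-Rubinstein binomial tree. The sketch below is that generic baseline only; it is NOT the paper's fractional long-memory stochastic volatility model with transaction costs:

```python
import math

def crr_call(S0, K, r, sigma, T, steps):
    """Cox-Ross-Rubinstein binomial price of a European call.
    Converges to the Black-Scholes price as `steps` grows."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs on the lattice
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0)
              for j in range(steps + 1)]
    # backward induction to time 0
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

With S0 = K = 100, r = 5%, σ = 20%, T = 1 year and 500 steps, the tree price is close to the Black-Scholes value of about 10.45; discrete-time effects of the kind the paper studies appear as the gap between such discretely computed prices and their continuous limits.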
Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E
2018-06-01
This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays as well as probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.
2014-06-01
This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
Stochastic Forcing for Ocean Uncertainty Prediction
2013-09-30
using the desired dynamics and the fitting of that velocity field to the bathymetry, coasts and discretization for the desired simulation. New algorithms... numerical bias is removed. PDFs of the forecast errors are shown to capture and evolve non-Gaussian statistics. Comparing the Kullback-Leibler... advances in collaborative sea exercises of opportunity vi) Strengthen existing and initiate new collaborations with NRL, using and leveraging the MIT
Continuum and discrete approach in modeling biofilm development and structure: a review.
Mattei, M R; Frunzo, L; D'Acunto, B; Pechaud, Y; Pirozzi, F; Esposito, G
2018-03-01
The scientific community has recognized that almost 99% of the microbial life on earth is represented by biofilms. Considering the impacts of their sessile lifestyle on both natural and human activities, extensive experimental activity has been carried out to understand how biofilms grow and interact with the environment. Many mathematical models have also been developed to simulate and elucidate the main processes characterizing biofilm growth. Two main mathematical approaches for biomass representation can be distinguished: continuum and discrete. This review is aimed at exploring the main characteristics of each approach. Continuum models can simulate the biofilm processes in a quantitative and deterministic way. However, they require a multidimensional formulation to take into account the biofilm spatial heterogeneity, which makes the models quite complicated, requiring significant computational effort. Discrete models are more recent and can represent the typical multidimensional structural heterogeneity of biofilms, reflecting experimental expectations, but they generate computational results that include elements of randomness and introduce stochastic effects into the solutions.
Ghosh, Preetam; Ghosh, Samik; Basu, Kalyan; Das, Sajal K; Zhang, Chaoyang
2010-12-01
The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduces the computational overhead. This objective motivates the use of new methods that can transform the problem from energy and affinity based modeling to information theory based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like probability distribution. This allows us to use the "in silico" stochastic event based modeling approach to find the molecular dynamics of the system. In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extra-cellular Mg2+ concentration in the two component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of inter-arrival time between molecules/ions coming to a cell receptor as external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to get the distribution of the Mg2+ departure events. Using these molecular transport models, we next study the in-silico effects of this external trigger on the PhoPQ system. Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. Also, the proposed simulation framework can incorporate the stochasticity in cellular environments to a certain degree of accuracy. 
We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics.
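The inter-arrival statistics at the heart of the framework above admit a tiny worked example: if molecules arriving at a receptor form a Poisson stream, inter-arrival times are exponential and the arrival rate is recoverable from their sample mean. The numbers below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sketch of the information-domain idea: specify the
# transport process through the distribution of inter-arrival times,
# then recover its rate parameter from samples.
true_rate = 2.5                                        # arrivals per unit time
inter_arrivals = rng.exponential(1.0 / true_rate, size=100_000)
rate_hat = 1.0 / inter_arrivals.mean()                 # MLE for a Poisson stream
```

The same recipe generalizes to fitting richer inter-arrival distributions, which is how the framework folds diffusion physics into an information-domain measure of event completion time.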
Stochastic aspects of one-dimensional discrete dynamical systems: Benford's law.
Snyder, M A; Curry, J H; Dougherty, A M
2001-08-01
Benford's law owes its discovery to the "Grubby Pages Hypothesis," a 19th century observation made by Simon Newcomb that the beginning pages of logarithm books were grubbier than the last few pages, implying that scientists referenced the values toward the front of the books more frequently. If a data set satisfies Benford's law, then its significant digits will have a logarithmic distribution, which favors smaller significant digits. In this article we demonstrate two ways of creating discrete one-dimensional dynamical systems that satisfy Benford's law. We also develop a numerical simulation methodology that we use to study dynamical systems when analytical results are not readily available.
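The logarithmic digit distribution is easy to check numerically. The orbit of the one-dimensional map x → 2x (i.e., the powers of 2) is a classic Benford sequence; the sketch below measures how far any sequence's leading-digit frequencies deviate from log10(1 + 1/d):

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a positive integer."""
    return int(str(x)[0])

def benford_deviation(seq):
    """Max absolute gap between the empirical leading-digit
    frequencies of `seq` and Benford's law P(d) = log10(1 + 1/d)."""
    counts = Counter(leading_digit(x) for x in seq)
    n = len(seq)
    return max(abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
               for d in range(1, 10))

# orbit of the doubling map: a textbook Benford-satisfying system
powers_of_two = [2**k for k in range(1, 1001)]
```

The powers of 2 track Benford's law closely, while, say, the uniform sequence 1..1000 does not; comparing the two deviations makes the law's selectivity visible.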
Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo
2012-12-01
In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
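The "exact, event-driven" idea can be illustrated for a single uncoupled perfect integrate-and-fire neuron: with dV = μ dt + σ dW and reset at threshold θ, interspike intervals are exact first-passage times of drifted Brownian motion, i.e. inverse Gaussian (Wald) distributed, so spike times can be sampled directly with no time discretization. This single-neuron sketch omits the paper's main contribution, the ordering of delayed interactions across a recurrent network:

```python
import numpy as np

rng = np.random.default_rng(5)

def exact_spike_times(mu, sigma, theta, n_spikes):
    """Event-driven spike times of one perfect stochastic
    integrate-and-fire neuron: dV = mu dt + sigma dW, spike and
    reset to 0 at threshold theta.  Interspike intervals are
    inverse Gaussian with mean theta/mu and shape (theta/sigma)^2,
    sampled exactly rather than via Euler time stepping."""
    isis = rng.wald(theta / mu, (theta / sigma) ** 2, size=n_spikes)
    return np.cumsum(isis)
```

Because each interval is drawn from the exact first-passage law, the spike train carries no discretization bias, which is precisely the property the network algorithm preserves.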
Modelling and simulation techniques for membrane biology.
Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V
2007-07-01
One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow one to formulate a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
Stochastic mixed-mode oscillations in a three-species predator-prey model
NASA Astrophysics Data System (ADS)
Sadhu, Susmita; Kuehn, Christian
2018-03-01
The effect of demographic stochasticity, in the form of Gaussian white noise, in a predator-prey model with one fast and two slow variables is studied. We derive the stochastic differential equations (SDEs) from a discrete model. For suitable parameter values, the deterministic drift part of the model admits a folded node singularity and exhibits a singular Hopf bifurcation. We focus on the parameter regime near the Hopf bifurcation, where small amplitude oscillations exist as stable dynamics in the absence of noise. In this regime, the stochastic model admits noise-driven mixed-mode oscillations (MMOs), which capture the intermediate dynamics between two cycles of population outbreaks. We perform numerical simulations to calculate the distribution of the random number of small oscillations between successive spikes for varying noise intensities and distance to the Hopf bifurcation. We also study the effect of noise on a suitable Poincaré map. Finally, we prove that the stochastic model can be transformed into a normal form near the folded node, which can be linked to recent results on the interplay between deterministic and stochastic small amplitude oscillations. The normal form can also be used to study the parameter influence on the noise level near folded singularities.
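Simulating the SDEs derived above requires nothing more exotic than an Euler-Maruyama integrator. The sketch below shows the generic scheme and sanity-checks it on an Ornstein-Uhlenbeck process (whose stationary variance σ²/2θ is known); it is not the paper's predator-prey model:

```python
import numpy as np

rng = np.random.default_rng(8)

def euler_maruyama(f, g, x0, dt, n_steps):
    """Euler-Maruyama integration of the scalar SDE
    dX = f(X) dt + g(X) dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()   # Brownian increment
        x[i + 1] = x[i] + f(x[i]) * dt + g(x[i]) * dw
    return x

# sanity check: Ornstein-Uhlenbeck, stationary variance sigma^2/(2*theta)
theta, sigma = 1.0, 0.5
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma,
                      x0=0.0, dt=0.01, n_steps=200_000)
```

For the mixed-mode oscillation study, the same loop is run on the three-dimensional drift of the predator-prey model with the demographic noise term in place of the constant g, and statistics of small oscillations between spikes are collected from the resulting paths.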
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure, proving resilient to multiple sources of variability in experimental data, including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Stochastic Kuramoto oscillators with discrete phase states.
Jörg, David J
2017-09-01
We present a generalization of the Kuramoto phase oscillator model in which phases advance in discrete phase increments through Poisson processes, rendering both intrinsic oscillations and coupling inherently stochastic. We study the effects of phase discretization on the synchronization and precision properties of the coupled system both analytically and numerically. Remarkably, many key observables such as the steady-state synchrony and the quality of oscillations show distinct extrema while converging to the classical Kuramoto model in the limit of a continuous phase. The phase-discretized model provides a general framework for coupled oscillations in a Markov chain setting.
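A phase-discretized Kuramoto model of this kind can be sketched in a few lines. This is not the paper's implementation: the number of oscillators, the number of phase states `d`, the coupling strength, and the Bernoulli-per-step approximation of the Poisson increments are all illustrative choices.

```python
import numpy as np

def simulate(n_osc=50, d=20, omega=1.0, coupling=2.0, dt=0.01, n_steps=2000, seed=1):
    rng = np.random.default_rng(seed)
    dphi = 2 * np.pi / d                     # discrete phase increment
    state = rng.integers(0, d, size=n_osc)   # phase index of each oscillator
    for _ in range(n_steps):
        theta = state * dphi
        # Advance rate: intrinsic frequency plus Kuramoto coupling, clipped at
        # zero because phases only move forward in discrete increments.
        rate = np.maximum(omega + (coupling / n_osc) *
                          np.sin(theta[None, :] - theta[:, None]).sum(axis=1),
                          0.0) / dphi
        jumps = rng.random(n_osc) < rate * dt  # Bernoulli stand-in for Poisson steps
        state = (state + jumps) % d
    theta = state * dphi
    return abs(np.exp(1j * theta).mean())      # Kuramoto order parameter r

r = simulate()
```

In the limit of large `d` the discrete increments recover continuous phases, which is the regime in which the abstract's observables converge to the classical Kuramoto model.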
A hybrid continuous-discrete method for stochastic reaction–diffusion processes
Zheng, Likun; Nie, Qing
2016-01-01
Stochastic fluctuations in reaction–diffusion processes often have a substantial effect on spatial and temporal dynamics of signal transductions in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method. PMID:27703710
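The idea of replacing diffusive jumps by Gaussian random variables while keeping reactions stochastic can be sketched as follows. This is a simplified stand-in, not the authors' algorithm: a periodic 1D domain, a Gaussian approximation of the net jump count across each compartment edge (which conserves mass by construction), and a tau-leap-style Poisson draw for first-order degradation in place of a full Gillespie treatment. All rate values are invented.

```python
import numpy as np

def hybrid_step(counts, d_jump, k_deg, dt, rng):
    """One hybrid step on a periodic 1D domain: Gaussian-approximated diffusive
    fluxes across compartment edges, then Poisson-sampled degradation."""
    right = np.roll(counts, -1)
    # Net jump count across edge i -> i+1, approximated by a Gaussian with the
    # mean and variance of the difference of two Poisson jump counts.
    mean_flux = d_jump * dt * (counts - right)
    var_flux = d_jump * dt * (counts + right)
    flux = rng.normal(mean_flux, np.sqrt(np.maximum(var_flux, 1e-12)))
    counts = counts - flux + np.roll(flux, 1)   # mass conserved by construction
    counts = np.maximum(counts, 0.0)
    # First-order degradation as a Poisson draw (stand-in for exact Gillespie).
    counts = counts - rng.poisson(k_deg * dt * counts)
    return np.maximum(counts, 0.0)

rng = np.random.default_rng(2)
counts = np.full(50, 100.0)
for _ in range(500):
    counts = hybrid_step(counts, d_jump=5.0, k_deg=0.01, dt=0.01, rng=rng)
```

Formulating the noise on edges rather than per compartment is one simple way to respect the correlation among diffusive jumps that the abstract emphasizes.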
A stochastic approach to uncertainty in the equations of MHD kinematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Edward G., E-mail: egphillips@math.umd.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2015-03-01
The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
Algorithms for adaptive stochastic control for a class of linear systems
NASA Technical Reports Server (NTRS)
Toda, M.; Patel, R. V.
1977-01-01
Control of linear, discrete time, stochastic systems with unknown control gain parameters is discussed. Two suboptimal adaptive control schemes are derived: one is based on underestimating future control and the other is based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single input, third order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.
Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.
Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime
2017-10-01
Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed: the discretely integrated condition event or DICE simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess if a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.
Daniel, Colin J.; Sleeter, Benjamin M.; Frid, Leonardo; Fortin, Marie-Josée
2018-01-01
State-and-transition simulation models (STSMs) provide a general framework for forecasting landscape dynamics, including projections of both vegetation and land-use/land-cover (LULC) change. The STSM method divides a landscape into spatially-referenced cells and then simulates the state of each cell forward in time, as a discrete-time stochastic process using a Monte Carlo approach, in response to any number of possible transitions. A current limitation of the STSM method, however, is that all of the state variables must be discrete. Here we present a new approach for extending a STSM, in order to account for continuous state variables, called a state-and-transition simulation model with stocks and flows (STSM-SF). The STSM-SF method allows for any number of continuous stocks to be defined for every spatial cell in the STSM, along with a suite of continuous flows specifying the rates at which stock levels change over time. The change in the level of each stock is then simulated forward in time, for each spatial cell, as a discrete-time stochastic process. The method differs from the traditional systems dynamics approach to stock-flow modelling in that the stocks and flows can be spatially-explicit, and the flows can be expressed as a function of the STSM states and transitions. We demonstrate the STSM-SF method by integrating a spatially-explicit carbon (C) budget model with a STSM of LULC change for the state of Hawai'i, USA. In this example, continuous stocks are pools of terrestrial C, while the flows are the possible fluxes of C between these pools. Importantly, several of these C fluxes are triggered by corresponding LULC transitions in the STSM.
Model outputs include changes in the spatial and temporal distribution of C pools and fluxes across the landscape in response to projected future changes in LULC over the next 50 years. The new STSM-SF method allows both discrete and continuous state variables to be integrated into a STSM, including interactions between them. With the addition of stocks and flows, STSMs provide a conceptually simple yet powerful approach for characterizing uncertainties in projections of a wide range of questions regarding landscape change.
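The coupling of discrete Monte Carlo state transitions with continuous stocks and flows can be sketched as follows. The states, transition probability, carbon values, and loss fraction below are invented placeholders, not values from the Hawai'i application.

```python
import numpy as np

FOREST, AGRICULTURE = 0, 1   # discrete LULC states (hypothetical)
P_CLEAR = 0.02               # annual conversion probability per forest cell
GROWTH = 1.0                 # annual C accumulation in forest cells (illustrative)
LOSS_FRAC = 0.8              # fraction of cell C emitted on conversion

def simulate(n_cells=1000, n_years=50, seed=7):
    rng = np.random.default_rng(seed)
    state = np.full(n_cells, FOREST)
    carbon = np.full(n_cells, 50.0)      # continuous stock per spatial cell
    emitted = 0.0
    for _ in range(n_years):
        carbon[state == FOREST] += GROWTH                  # continuous flow
        # Discrete-time stochastic transition of each cell's state.
        convert = (state == FOREST) & (rng.random(n_cells) < P_CLEAR)
        emitted += (LOSS_FRAC * carbon[convert]).sum()     # flux triggered by transition
        carbon[convert] *= 1.0 - LOSS_FRAC
        state[convert] = AGRICULTURE
    return state, carbon, emitted

state, carbon, emitted = simulate()
```

The key structural point from the abstract is visible here: the carbon flux is fired by the LULC transition itself, which a conventional systems-dynamics stock-flow model cannot express per spatial cell.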
Orio, Patricio; Soudry, Daniel
2012-01-01
Background: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320
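The flavor of diffusion approximation discussed above can be illustrated, for the simplest possible case of a two-state (open/closed) channel, by a Langevin equation whose noise term is evaluated from the current state rather than from a steady-state approximation. This is a generic textbook-style sketch with invented rate constants, not the generalized scheme derived in the paper.

```python
import numpy as np

def channel_da(alpha, beta, n_channels, dt, n_steps, rng):
    """Diffusion approximation for the open fraction n of a two-state channel:
    dn = (alpha (1 - n) - beta n) dt + sqrt((alpha (1 - n) + beta n) / N) dW,
    with the noise amplitude recomputed from the current state each step."""
    n = alpha / (alpha + beta)            # start at the deterministic steady state
    trace = np.empty(n_steps)
    for k in range(n_steps):
        drift = alpha * (1.0 - n) - beta * n
        diff = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / n_channels)
        n += drift * dt + diff * np.sqrt(dt) * rng.normal()
        n = min(max(n, 0.0), 1.0)         # keep the fraction in [0, 1]
        trace[k] = n
    return trace

rng = np.random.default_rng(3)
trace = channel_da(alpha=2.0, beta=1.0, n_channels=1000, dt=1e-3,
                   n_steps=5000, rng=rng)
```

The 1/N scaling of the noise makes the DA converge to the deterministic rate equation as the channel count grows, which is the regime where it beats MC simulation in speed.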
Modern Workflows for Fracture Rock Hydrogeology
NASA Astrophysics Data System (ADS)
Doe, T.
2015-12-01
Discrete Fracture Network (DFN) modelling is a numerical simulation approach that represents a conducting fracture network using geologically realistic geometries and single-conductor hydraulic and transport properties. In terms of diffusion analogues, equivalent porous media derive from heat conduction in continuous media, while DFN simulation is more similar to electrical flow and diffusion in circuits with discrete pathways. DFN modeling grew out of the pioneering work of David Snow in the late 1960s, with additional impetus in the 1970s from the development of stochastic approaches for describing fracture geometric and hydrologic properties. Research in underground test facilities for radioactive waste disposal developed the necessary linkages between characterization technologies and simulation, as well as bringing about a hybrid deterministic-stochastic approach. Over the past 40 years DFN simulation and characterization methods have moved from the research environment into practical, commercial application. The key geologic, geophysical and hydrologic tools provide the required DFN inputs of conductive fracture intensity, orientation, and transmissivity. Flow logging, either using downhole tools or by detailed packer testing, identifies the locations of conducting features in boreholes, and image logging provides information on the geology and geometry of the conducting features. Multi-zone monitoring systems isolate the individual conductors, and with subsequent drilling and characterization perturbations help to recognize connectivity and compartmentalization in the fracture network. Tracer tests and core analysis provide critical information on the transport properties, especially matrix diffusion and unidentified conducting pathways. Well test analyses incorporating flow dimension and boundary effects provide further constraint on the conducting geometry of the fracture network.
Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.
2013-02-01
In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
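The data-inferred conditional Markov chain (CMC) described above can be sketched in a few lines: one small transition matrix per resolved-state bin, with the convection scheme jumping between pre-computed flux-pair states. The matrices and the resolved-state signal below are invented placeholders; in the scheme itself they would be estimated from LES data.

```python
import numpy as np

# Toy CMC: 3 flux-pair states, 2 resolved-state bins. Each row of each matrix
# holds the transition probabilities out of one flux-pair state.
P = np.array([
    [[0.8, 0.15, 0.05],   # resolved bin 0 (e.g. weakly unstable column)
     [0.2, 0.7, 0.1],
     [0.1, 0.3, 0.6]],
    [[0.5, 0.3, 0.2],     # resolved bin 1 (e.g. strongly unstable column)
     [0.1, 0.6, 0.3],
     [0.05, 0.25, 0.7]],
])

def step(state, resolved_bin, rng):
    """Jump to the next flux-pair state, conditioned on the resolved model state."""
    return rng.choice(3, p=P[resolved_bin, state])

rng = np.random.default_rng(4)
state, history = 0, []
for t in range(1000):
    resolved_bin = 1 if t % 200 < 100 else 0   # stand-in for the SCM column state
    state = step(state, resolved_bin, rng)
    history.append(state)
```

Each sampled state would index a pre-computed pair of turbulent heat and moisture flux profiles, so the parameterization emulates the LES statistics rather than resolving the turbulence.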
Feynman-Kac formula for stochastic hybrid systems.
Bressloff, Paul C
2017-01-01
We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
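As a numerical companion to the occupation-time example, here is a minimal Monte Carlo estimate of the occupation time of a 1D velocity-jump process: a particle moving at ±speed whose velocity flips at a Markovian switching rate. The Euler discretization of the switching and all parameter values are illustrative assumptions, not the paper's analytical treatment.

```python
import numpy as np

def occupation_time(t_final, speed, switch_rate, dt, rng):
    """Fraction of [0, t_final] that a velocity-jump particle started at the
    origin (moving right) spends on the positive half-line."""
    x, v, occ = 0.0, speed, 0.0
    n_steps = int(t_final / dt)
    for _ in range(n_steps):
        if rng.random() < switch_rate * dt:   # Markov switch of the discrete state
            v = -v
        x += v * dt                            # piecewise deterministic motion
        if x > 0.0:
            occ += dt
    return occ / t_final

rng = np.random.default_rng(5)
fractions = [occupation_time(50.0, 1.0, 1.0, 1e-2, rng) for _ in range(200)]
```

The empirical distribution of such fractions is what the Feynman-Kac machinery above characterizes exactly through its differential Chapman-Kolmogorov equation.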
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind computing capacities of PEs are not much different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on the complex networks shows the synchronizability and scalability regardless of the kinds of PEs. The EW scheme never shows the synchronizability for the random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme is made to show the synchronizability. In contrast, EW scheme never shows the synchronizability for any configuration of PEs of the second kind.
Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation
Daniel Buscombe,; Rubin, David M.
2012-01-01
In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
Hybrid modeling in biochemical systems theory by means of functional petri nets.
Wu, Jialiang; Voit, Eberhard
2009-02-01
Many biological systems are genuinely hybrid, consisting of interacting discrete and continuous components and processes that often operate at different time scales. It is therefore desirable to create modeling frameworks capable of combining differently structured processes and permitting their analysis over multiple time horizons. During the past 40 years, Biochemical Systems Theory (BST) has been a very successful approach to elucidating metabolic, gene regulatory, and signaling systems. However, its foundation in ordinary differential equations has precluded BST from directly addressing problems containing switches, delays, and stochastic effects. In this study, we extend BST to hybrid modeling within the framework of Hybrid Functional Petri Nets (HFPN). First, we show how the canonical GMA and S-system models in BST can be directly implemented in a standard Petri Net framework. In a second step we demonstrate how to account for different types of time delays as well as for discrete, stochastic, and switching effects. Using representative test cases, we validate the hybrid modeling approach through comparative analyses and simulations with other approaches and highlight the feasibility, quality, and efficiency of the hybrid method.
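A canonical S-system of the kind BST builds on can be written down and integrated in a few lines. The kinetic orders and rate constants below are invented for illustration, and this sketch covers only the continuous ODE core, not the Petri-net hybrid extensions described above.

```python
import numpy as np

def s_system_step(x, alpha, g, beta, h, dt):
    """One Euler step of a canonical S-system:
    dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij."""
    prod_in = alpha * np.prod(x ** g, axis=1)    # power-law production terms
    prod_out = beta * np.prod(x ** h, axis=1)    # power-law degradation terms
    return x + dt * (prod_in - prod_out)

# Two-variable toy pathway with hypothetical kinetic orders:
# X2 inhibits production of X1; X1 drives production of X2.
alpha = np.array([2.0, 1.0])
beta = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5], [1.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, 1.0]])   # first-order degradation of each variable

x = np.array([1.0, 1.0])
for _ in range(5000):
    x = s_system_step(x, alpha, g, beta, h, dt=0.01)
```

For these made-up parameters the steady state solves X = 2 X^{-1/2}, i.e. X1 = X2 = 2^{2/3}, which the Euler iteration approaches; a Petri-net implementation would recover the same continuous dynamics from its place/transition structure.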
A stochastic discrete optimization model for designing container terminal facilities
NASA Astrophysics Data System (ADS)
Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista
2017-11-01
As uncertainty essentially affects the total transportation cost, it remains an important issue in container terminals, which incorporate several modes and transshipment processes. This paper therefore presents a stochastic discrete optimization model for designing a container terminal, involving decisions on facility improvement actions. The container terminal operation model is constructed by accounting for variations in demand and facility performance. In addition, to illustrate a conflicting issue that arises in practice in terminal operation, the model also takes into account the possible incremental delay of facilities due to an increasing amount of equipment, especially container trucks. These variations reflect the uncertainty inherent in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by following the observed distributions. The problem is cast as a combinatorial optimization problem for investigating the optimal facility improvement decision. A new variant of glow-worm swarm optimization (GSO), rarely explored in the transportation field, is proposed for solving the optimization. The model's applicability is tested against the actual characteristics of a container terminal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benseghir, Rym, E-mail: benseghirrym@ymail.com, E-mail: benseghirrym@ymail.com; Benchettah, Azzedine, E-mail: abenchettah@hotmail.com; Raynaud de Fitte, Paul, E-mail: prf@univ-rouen.fr
2015-11-30
A stochastic equation system corresponding to the description of the motion of a barotropic viscous gas in a discretized one-dimensional domain with a weight regularizing the density is considered. In [2], the existence of an invariant measure was established for this discretized problem in the stationary case. In this paper, applying a slightly modified version of Khas’minskii’s theorem [5], we generalize this result in the periodic case by proving the existence of a periodic measure for this problem.
Numerical Schemes for Dynamically Orthogonal Equations of Stochastic Fluid and Ocean Flows
2011-11-03
… stages of the simulation (see §5.1). Also, because the pdf is discrete, we calculate the moments using the biased estimator C_{Y_i Y_j} ≈ (1/q) Σ_r Y_{r,i} Y_{r,j} … independent random variables. For problems that require large p (e.g. non-Gaussian) and large s (e.g. large ocean or fluid simulations), the number of … Sc = ν̂/K̂ is the Schmidt number, i.e. the ratio of kinematic viscosity ν̂ to molecular diffusivity K̂ for the density field; ĝ′ = ĝ(ρ̂_max − ρ̂_min …
Stable schemes for dissipative particle dynamics with conserved energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltz, Gabriel, E-mail: stoltz@cermics.enpc.fr
2017-07-01
This article presents a new numerical scheme for the discretization of dissipative particle dynamics with conserved energy. The key idea is to reduce elementary pairwise stochastic dynamics (either fluctuation/dissipation or thermal conduction) to effective single-variable dynamics, and to approximate the solution of these dynamics with one step of a Metropolis–Hastings algorithm. This ensures by construction that no negative internal energies are encountered during the simulation, and hence allows to increase the admissible timesteps to integrate the dynamics, even for systems with small heat capacities. Stability is only limited by the Hamiltonian part of the dynamics, which suggests resorting to multiple timestep strategies where the stochastic part is integrated less frequently than the Hamiltonian one.
Combinatoric analysis of heterogeneous stochastic self-assembly.
D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom
2013-09-28
We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte-Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations, and the discrepancies between the two approaches are analyzed.
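A kinetic Monte Carlo (Gillespie) sketch of such seeded assembly is straightforward to write down for a closed system with fixed total mass M and Ns seeds. The rate constants and sizes below are illustrative; attachment propensity is proportional to the free-monomer pool, so non-negativity of the free mass holds by construction.

```python
import numpy as np

def kmc_self_assembly(m_total, n_seeds, n_max, k_on, k_off, t_final, rng):
    """Gillespie simulation of monomers attaching to / detaching from seeds.
    Events: attachment to any non-full seed, detachment from any non-empty seed."""
    sizes = np.zeros(n_seeds, dtype=int)
    t = 0.0
    while t < t_final:
        free = m_total - sizes.sum()               # conserved monomer pool
        attach = k_on * free * (sizes < n_max)     # per-seed attachment propensities
        detach = k_off * (sizes > 0)               # per-seed detachment propensities
        props = np.concatenate([attach, detach]).astype(float)
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)          # time to next event
        event = rng.choice(2 * n_seeds, p=props / total)
        if event < n_seeds:
            sizes[event] += 1
        else:
            sizes[event - n_seeds] -= 1
    return sizes

rng = np.random.default_rng(6)
sizes = kmc_self_assembly(m_total=100, n_seeds=10, n_max=8,
                          k_on=0.05, k_off=0.1, t_final=50.0, rng=rng)
```

In the slow-detachment limit (k_off → 0) the chain absorbs into the filled configurations, which is the regime where the abstract reports closed-form cluster densities.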
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper contains a study of two methods for use in a generic nonlinear simulation tool that could be used to determine achievable control dynamics and control power requirements while performing perfect tracking maneuvers over the entire flight envelope. The two methods are NDI (nonlinear dynamic inversion) and the SOFFT (Stochastic Optimal Feedforward and Feedback Technology) feedforward control structure. Equivalent discrete and continuous SOFFT feedforward controllers have been developed. These equivalent forms clearly show that the closed-loop plant model loop is a plant inversion and is the same as the NDI formulation. The main difference is that the NDI formulation has a closed-loop controller structure whereas SOFFT uses an open-loop command model. Continuous, discrete, and hybrid controller structures have been developed and integrated into the formulation. Linear simulation results show that seven different configurations all give essentially the same response, with the NDI hybrid being slightly different. The SOFFT controller gave better tracking performance compared to the NDI controller when a nonlinear saturation element was added. Future plans include evaluation using a nonlinear simulation.
Fluid Stochastic Petri Nets: Theory, Applications, and Solution
NASA Technical Reports Server (NTRS)
Horton, Graham; Kulkarni, Vidyadhar G.; Nicol, David M.; Trivedi, Kishor S.
1996-01-01
In this paper we introduce a new class of stochastic Petri nets in which one or more places can hold fluid rather than discrete tokens. We define a class of fluid stochastic Petri nets in such a way that the discrete and continuous portions may affect each other. Following this definition we provide equations for their transient and steady-state behavior. We present several examples showing the utility of the construct in communication network modeling and reliability analysis, and discuss important special cases. We then discuss numerical methods for computing the transient behavior of such nets. Finally, some numerical examples are presented.
Mean-Potential Law in Evolutionary Games
NASA Astrophysics Data System (ADS)
Nałecz-Jawecki, Paweł; Miekisz, Jacek
2018-01-01
The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
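For a discrete birth-death chain with two absorbing states, fixation probabilities have a classical closed form that any potential-based method must reproduce. The sketch below computes it directly; the ratio function `gamma` is user-supplied, and the constant-advantage example is purely illustrative.

```python
def fixation_probability(n_pop, gamma):
    """Fixation probability starting from 1 copy in a birth-death chain on
    states 0..n_pop, where gamma(j) = p_down(j) / p_up(j) at interior state j:
    phi = 1 / (1 + sum_{k=1}^{n_pop-1} prod_{j=1}^{k} gamma(j))."""
    prods, acc = [], 1.0
    for j in range(1, n_pop):
        acc *= gamma(j)
        prods.append(acc)
    return 1.0 / (1.0 + sum(prods))

# Neutral drift: up and down moves equally likely, so fixation from 1 copy is 1/N.
phi_neutral = fixation_probability(100, lambda j: 1.0)

# Constant selective advantage s = 0.02: gamma = 1/(1 + s) at every state.
phi_adv = fixation_probability(100, lambda j: 1.0 / 1.02)
```

In the neutral case the products telescope to 1 and phi = 1/N; criteria like the 1/3 law then follow from how frequency-dependent payoffs tilt the gamma(j) ratios.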
A scalable moment-closure approximation for large-scale biochemical reaction networks
Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan
2017-01-01
Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
Distributed delays in a hybrid model of tumor-immune system interplay.
Caravagna, Giulio; Graudenzi, Alex; d'Onofrio, Alberto
2013-02-01
A tumor is kinetically characterized by the presence of multiple spatio-temporal scales in which its cells interplay with, for instance, endothelial cells or immune system effectors, exchanging various chemical signals. By its nature, tumor growth is an ideal object of hybrid modeling, where discrete stochastic processes model low-number entities and mean-field equations model abundant chemical signals. We follow this approach to model tumor cells, effector cells and interleukin-2, in order to capture the immune surveillance effect. We present a hybrid model with a generic delay kernel accounting for the fact that, due to many complex phenomena such as chemical transportation and cellular differentiation, the tumor-induced recruitment of effectors exhibits a lag period. The model is a Stochastic Hybrid Automaton whose semantics is a Piecewise Deterministic Markov process in which a two-dimensional stochastic process is interlinked with a multi-dimensional mean-field system. We instantiate the model with two well-known weak and strong delay kernels and perform simulations using an algorithm to generate trajectories of this process. Via simulations and parametric sensitivity analysis techniques we (i) relate tumor mass growth to the two kernels, (ii) measure the strength of immune surveillance in terms of the probability distribution of the eradication times, and (iii) prove, in the oscillatory regime, the existence of a stochastic bifurcation resulting in delay-induced tumor eradication.
Models for discrete-time self-similar vector processes with application to network traffic
NASA Astrophysics Data System (ADS)
Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh
2003-07-01
The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. While potentially more computationally efficient, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so is more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases.
We demonstrate the efficiency of our method using a number of examples.
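The fixed-step tau-leap algorithm that the multi-level method builds on can be sketched, under stated assumptions, for a hypothetical production-degradation system 0 → X (rate k) and X → 0 (rate g·x); the parameters are illustrative, not from the paper.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method for Poisson sampling; adequate for the small
    # per-step rates (k*tau, g*x*tau) that arise here.
    L, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def tau_leap(k=20.0, g=1.0, tau=0.01, t_end=10.0, seed=1):
    # Advance the state by firing a Poisson number of each reaction
    # channel over every fixed interval tau (the bias noted above).
    rng = random.Random(seed)
    x, t = 0, 0.0
    while t < t_end:
        births = poisson(rng, k * tau)
        deaths = poisson(rng, g * x * tau)
        x = max(0, x + births - deaths)
        t += tau
    return x
```

Averaged over many paths, the final state should sit near the deterministic steady state k/g; the multi-level method stacks corrections on estimators of exactly this kind.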
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes: countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (the latter two with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
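The core variance-reduction idea can be sketched at the level of single Poisson increments: couple paired draws antithetically by inverting the Poisson CDF at u and 1 − u. This is one standard coupling, a sketch only; the paper's exact pairing construction may differ.

```python
import math
import random

def poisson_inv(u, lam):
    # Inverse-CDF sampling of a Poisson(lam) variate.
    p = math.exp(-lam)
    cdf, n = p, 0
    while u > cdf:
        n += 1
        p *= lam / n
        cdf += p
    return n

def antithetic_pair(rng, lam):
    # Two negatively correlated Poisson draws from one uniform.
    u = rng.random()
    u = min(max(u, 1e-12), 1.0 - 1e-12)  # guard the CDF inversion
    return poisson_inv(u, lam), poisson_inv(1.0 - u, lam)
```

Averaging the two members of each pair gives an unbiased estimator of the mean whose variance is well below that of two independent draws, which is the mechanism behind the reported mean-square-error reduction.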
Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach
NASA Astrophysics Data System (ADS)
Kim, Changho; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.; Donev, Aleksandar
2017-03-01
We develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. 
We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. By comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach
Kim, Changho; Nonaka, Andy; Bell, John B.; ...
2017-03-24
Here, we develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium.
We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. Furthermore, by comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
Stochastic Analysis of Reaction–Diffusion Processes
Hu, Jifeng; Kang, Hye-Won
2013-01-01
Reaction and diffusion processes are used to model chemical and biological processes over a wide range of spatial and temporal scales. Several routes to the diffusion process at various levels of description in time and space are discussed and the master equation for spatially discretized systems involving reaction and diffusion is developed. We discuss an estimator for the appropriate compartment size for simulating reaction–diffusion systems and introduce a measure of fluctuations in a discretized system. We then describe a new computational algorithm for implementing a modified Gillespie method for compartmental systems in which reactions are aggregated into equivalence classes and computational cells are searched via an optimized tree structure. Finally, we discuss several examples that illustrate the issues that have to be addressed in general systems. PMID:23719732
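The compartmental treatment these papers analyze can be sketched minimally: diffusion becomes first-order "hop" reactions between adjacent cells with per-molecule rate d = D/h², simulated with Gillespie's direct method. The domain size, rates and cell count below are illustrative, not taken from any of the papers.

```python
import random

def rdme_diffusion(n_cells=8, n_mol=100, D=1.0, h=0.5, t_end=10.0, seed=2):
    rng = random.Random(seed)
    x = [0] * n_cells
    x[0] = n_mol                      # all molecules start in the left cell
    d = D / h ** 2                    # hop rate per molecule per direction
    t = 0.0
    while t < t_end:
        # Propensities: index 2*i is a right hop from cell i, 2*i+1 a left hop;
        # boundary hops get zero propensity.
        props = []
        for i in range(n_cells):
            props.append(d * x[i] if i + 1 < n_cells else 0.0)
            props.append(d * x[i] if i > 0 else 0.0)
        a0 = sum(props)
        t += rng.expovariate(a0)      # time to next hop (direct method)
        r = rng.random() * a0
        for j, a in enumerate(props):
            if a <= 0.0:
                continue
            r -= a
            if r <= 0.0:
                i, parity = divmod(j, 2)
                x[i] -= 1
                x[i + 1 if parity == 0 else i - 1] += 1
                break
    return x
```

Molecule count is conserved and the initial pile spreads toward a uniform profile; shrinking h below a critical value is exactly where the compartment-size issues discussed above appear.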
Discrete-time Markovian stochastic Petri nets
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco
1995-01-01
We revisit and extend the original definition of discrete-time stochastic Petri nets, by allowing the firing times to have a 'defective discrete phase distribution'. We show that this formalism still corresponds to an underlying discrete-time Markov chain. The structure of the state for this process describes both the marking of the Petri net and the phase of the firing time for each transition, resulting in a large state space. We then modify the well-known power method to perform a transient analysis even when the state space is infinite, subject to the condition that only a finite number of states can be reached in a finite amount of time. Since the memory requirements might still be excessive, we suggest a bounding technique based on truncation.
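The transient analysis described above can be sketched as a power-method push of the state distribution, with a dictionary acting as the finitely reachable (truncated) state space; the random-walk example is hypothetical, not from the paper.

```python
def transient(p0, step, n):
    """p0: dict state -> prob; step: state -> list of (next_state, prob)."""
    p = dict(p0)
    for _ in range(n):
        nxt = {}
        for s, mass in p.items():
            for s2, pr in step(s):
                nxt[s2] = nxt.get(s2, 0.0) + mass * pr
        p = nxt                        # only reachable states are stored
    return p

def walk(s):
    # Example chain: a random walk on the nonnegative integers -- an
    # infinite state space, but only finitely many states in n steps.
    if s == 0:
        return [(0, 0.5), (1, 0.5)]
    return [(s - 1, 0.5), (s + 1, 0.5)]

dist = transient({0: 1.0}, walk, 4)
```

After four steps, probability mass has reached states 0 through 4 only, illustrating why a finite transient horizon keeps an infinite chain tractable.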
Mean-Potential Law in Evolutionary Games.
Nałęcz-Jawecki, Paweł; Miękisz, Jacek
2018-01-12
The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
The influence of Stochastic perturbation of Geotechnical media On Electromagnetic tomography
NASA Astrophysics Data System (ADS)
Song, Lei; Yang, Weihao; Huangsonglei, Jiahui; Li, HaiPeng
2015-04-01
Electromagnetic tomography (CT) is commonly utilized in civil engineering to detect structural defects or geological anomalies. CT is generally recognized as a high-precision geophysical method, with expected accuracy of several centimeters or even several millimeters, so high-frequency antennas with short wavelengths are commonly utilized in civil engineering. In geotechnical media, stochastic perturbations of the EM parameters inevitably exist at geological, structural and local scales. In such cases, the geometric dimensions of the target body, the EM wavelength and the expected accuracy may be of the same order. When high-frequency EM waves propagate in stochastic geotechnical media, the GPR signal is reflected not only from the target bodies but also from the stochastic perturbation of the background medium. To detect karst caves in dissolution-fractured rock, one needs to assess the influence of the stochastically distributed dissolution holes and fractures; to detect a void in a concrete structure, one must account for the influence of the stochastically distributed stones. In this paper, on the basis of discrete realizations of stochastic media, the authors quantitatively evaluate the influence of the stochastic perturbation of geotechnical media by Radon/inverse Radon transform through fully combined Monte Carlo numerical simulation. The stochastic noise is found to be related to the transfer angle, perturbation strength, angle interval, autocorrelation length, etc. A quantitative formula for the accuracy of electromagnetic tomography is also established, which could help with precision estimation of GPR tomography in stochastically perturbed geotechnical media. Key words: Stochastic Geotechnical Media; Electromagnetic Tomography; Radon/Iradon Transfer.
Improved result on stability analysis of discrete stochastic neural networks with time delay
NASA Astrophysics Data System (ADS)
Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng
2009-04-01
This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au
We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
Bayesian inference for dynamic transcriptional regulation; the Hes1 system as a case study.
Heron, Elizabeth A; Finkenstädt, Bärbel; Rand, David A
2007-10-01
In this study, we address the problem of estimating the parameters of regulatory networks and provide the first application of Markov chain Monte Carlo (MCMC) methods to experimental data. As a case study, we consider a stochastic model of the Hes1 system expressed in terms of stochastic differential equations (SDEs) to which rigorous likelihood methods of inference can be applied. When fitting continuous-time stochastic models to discretely observed time series the lengths of the sampling intervals are important, and much of our study addresses the problem when the data are sparse. We estimate the parameters of an autoregulatory network providing results both for simulated and real experimental data from the Hes1 system. We develop an estimation algorithm using MCMC techniques which are flexible enough to allow for the imputation of latent data on a finer time scale and the presence of prior information about parameters which may be informed from other experiments as well as additional measurement error.
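The time discretization that underlies fitting SDE models to discretely observed data can be sketched as an Euler-Maruyama path of a scalar SDE, dX = (k/(1 + X²) − gX) dt + s dW, whose drift loosely echoes Hes1-style autorepression. The form of the drift and every parameter value here are illustrative assumptions, not the paper's model.

```python
import math
import random

def euler_maruyama(x0=1.0, k=5.0, g=1.0, s=0.2, dt=0.01, n=1000, seed=3):
    # One Euler-Maruyama sample path: deterministic drift step plus a
    # Gaussian increment of standard deviation s*sqrt(dt).
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        drift = k / (1.0 + x * x) - g * x
        x += drift * dt + s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

MCMC schemes of the kind described above impute latent values of such a path between sparse observations, so the fineness of this grid (dt) is exactly the imputation scale the study discusses.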
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
Stochastic cellular automata model for stock market dynamics
NASA Astrophysics Data System (ADS)
Bartolozzi, M.; Thomas, A. W.
2004-04-01
In the present work we introduce a stochastic cellular automata model in order to simulate the dynamics of the stock market. A direct percolation method is used to create a hierarchy of clusters of active traders on a two-dimensional grid. Active traders are characterized by the decision to buy, σi(t)=+1, or sell, σi(t)=-1, a stock at a certain discrete time step. The remaining cells are inactive, σi(t)=0. The trading dynamics is then determined by the stochastic interaction between traders belonging to the same cluster. Extreme, intermittent events, such as crashes or bubbles, are triggered by a phase transition in the state of the larger clusters present on the grid, where almost all the active traders come to share the same spin orientation. Most of the stylized aspects of financial market time series, including multifractal properties, are reproduced by the model. A direct comparison is made with the daily closures of the S&P500 index.
Intermittency inhibited by transport: An exactly solvable model
NASA Astrophysics Data System (ADS)
Zanette, Damián H.
1994-04-01
Transport is incorporated in a discrete-time stochastic model of a system undergoing autocatalytic reactions of the type A-->2A and A-->0, whose population field is known to exhibit spatiotemporal intermittency. The temporal evolution is exactly solved, and it is shown that if the transport process is strong enough, intermittency is inhibited. This inhibition is nonuniform, in the sense that, as transport is strengthened, low-order population moments are affected before the high-order ones. Numerical simulations are presented to support the analytical results.
Improving Project Management with Simulation and Completion Distribution Functions
NASA Technical Reports Server (NTRS)
Cates, Grant R.
2004-01-01
Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. A major culprit in late projects is uncertainty, which most, if not all, projects are inherently subject to. This uncertainty resides in the estimates for activity durations, the occurrence of unplanned and unforeseen events, and the availability of critical resources. In response to this problem, this research developed a comprehensive simulation based methodology for conducting quantitative project completion time risk analysis. It is called the Project Assessment by Simulation Technique (PAST). This new tool enables project stakeholders to visualize uncertainty or risk, i.e. the likelihood of their project completing late and the magnitude of the lateness, by providing them with a completion time distribution function of their projects. Discrete event simulation is used within PAST to determine the completion distribution function for the project of interest. The simulation is populated with both deterministic and stochastic elements. The deterministic inputs include planned project activities, precedence requirements, and resource requirements. The stochastic inputs include activity duration growth distributions, probabilities for events that can impact the project, and other dynamic constraints that may be placed upon project activities and milestones. These stochastic inputs are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Repeating the simulation hundreds or thousands of times allows one to create the project completion distribution function. The Project Assessment by Simulation Technique was demonstrated to be effective for the on-going NASA project to assemble the International Space Station. 
Approximately $500 million per month is being spent on this project, which is scheduled to complete by 2010. NASA project stakeholders participated in determining and managing completion distribution functions produced from PAST. The first result was that project stakeholders improved project completion risk awareness. Secondly, using PAST, mitigation options were analyzed to improve project completion performance and reduce total project cost.
Liang, Jie; Qian, Hong
2010-01-01
Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge of understanding macromolecular dynamics led the way for computations to become part of the tool set for studying molecular biology. Twenty-five years ago, the demand from genome science inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for the dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic) in terms of a stochastic, discrete-state continuous-time formulation, called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples of how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand "complex behavior" and complexity theory, and from which important biological insight can be gained. PMID:24999297
Sedwards, Sean; Mazza, Tommaso
2007-10-15
Compartments and membranes are the basis of cell topology and more than 30% of the human genome codes for membrane proteins. While it is possible to represent compartments and membrane proteins in a nominal way with many mathematical formalisms used in systems biology, few, if any, explicitly model the topology of the membranes themselves. Discrete stochastic simulation potentially offers the most accurate representation of cell dynamics. Since the details of every molecular interaction in a pathway are often not known, the relationship between chemical species is not necessarily best described at the lowest level, i.e. by mass action. Simulation is a form of computer-aided analysis, relying on human interpretation to derive meaning. To improve efficiency and gain meaning in an automatic way, it is necessary to have a formalism based on a model which has decidable properties. We present Cyto-Sim, a stochastic simulator of membrane-enclosed hierarchies of biochemical processes, where the membranes comprise an inner, outer and integral layer. The underlying model is based on formal language theory and has been shown to have decidable properties (Cavaliere and Sedwards, 2006), allowing formal analysis in addition to simulation. The simulator provides variable levels of abstraction via arbitrary chemical kinetics which link to ordinary differential equations. In addition to its compact native syntax, Cyto-Sim currently supports models described as Petri nets, can import all versions of SBML and can export SBML and MATLAB m-files. Cyto-Sim is available free, either as an applet or a stand-alone Java program via the web page (http://www.cosbi.eu/Rpty_Soft_CytoSim.php). Other versions can be made available upon request.
dfnWorks: A discrete fracture network framework for modeling subsurface flow and transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, Jeffrey D.; Karra, Satish; Makedonska, Nataliia
2015-11-01
DFNWORKS is a parallelized computational suite to generate three-dimensional discrete fracture networks (DFN) and simulate flow and transport. Developed at Los Alamos National Laboratory over the past five years, it has been used to study flow and transport in fractured media at scales ranging from millimeters to kilometers. The networks are created and meshed using DFNGEN, which combines FRAM (the feature rejection algorithm for meshing) methodology to stochastically generate three-dimensional DFNs with the LaGriT meshing toolbox to create a high-quality computational mesh representation. The representation produces a conforming Delaunay triangulation suitable for high performance computing finite volume solvers in an intrinsically parallel fashion. Flow through the network is simulated in dfnFlow, which utilizes the massively parallel subsurface flow and reactive transport finite volume code PFLOTRAN. A Lagrangian approach to simulating transport through the DFN is adopted within DFNTRANS to determine pathlines and solute transport through the DFN. Example applications of this suite in the areas of nuclear waste repository science, hydraulic fracturing and CO2 sequestration are also included.
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
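The two-state Markovian neuron described in this abstract (spike = active-to-refractory transition) can be sketched with a minimal discrete-time simulation. The population size and transition rates below are illustrative assumptions, not the rates identified in the paper:

```python
import random

def two_state_active_fraction(n_neurons, rate_fire, rate_recover, t_end, dt, rng):
    """Discrete-time simulation of n independent two-state Markov neurons.
    A spike is the transition active -> refractory (rate rate_fire);
    recovery is refractory -> active (rate rate_recover).
    Returns the time-averaged fraction of active neurons."""
    active = n_neurons
    acc, steps = 0.0, 0
    n_steps = int(t_end / dt)
    for _ in range(n_steps):
        # Per-step transition probabilities (valid when rate * dt << 1).
        fired = sum(1 for _ in range(active) if rng.random() < rate_fire * dt)
        recovered = sum(1 for _ in range(n_neurons - active)
                        if rng.random() < rate_recover * dt)
        active += recovered - fired
        acc += active / n_neurons
        steps += 1
    return acc / steps

rng = random.Random(1)
frac = two_state_active_fraction(100, rate_fire=2.0, rate_recover=8.0,
                                 t_end=20.0, dt=0.002, rng=rng)
# Stationary active fraction: rate_recover / (rate_fire + rate_recover) = 0.8.
print(round(frac, 2))
```

In the paper the rates are not fixed constants but depend on the excitatory and inhibitory input, which is what produces the state-dependent intrinsic noise discussed above.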
Zheng, Weihua; Gallicchio, Emilio; Deng, Nanjie; Andrec, Michael; Levy, Ronald M.
2011-01-01
We present a new approach to study a multitude of folding pathways and different folding mechanisms for the 20-residue mini-protein Trp-Cage using the combined power of replica exchange molecular dynamics (REMD) simulations for conformational sampling, Transition Path Theory (TPT) for constructing folding pathways and stochastic simulations for sampling the pathways in a high dimensional structure space. REMD simulations of Trp-Cage with 16 replicas at temperatures between 270K and 566K are carried out with an all-atom force field (OPLSAA) and an implicit solvent model (AGBNP). The conformations sampled from all temperatures are collected. They form a discretized state space that can be used to model the folding process. The equilibrium population for each state at a target temperature can be calculated using the Weighted-Histogram-Analysis Method (WHAM). By connecting states with similar structures and creating edges satisfying detailed balance conditions, we construct a kinetic network that preserves the equilibrium population distribution of the state space. After defining the folded and unfolded macrostates, committor probabilities (Pfold) are calculated by solving a set of linear equations for each node in the network and pathways are extracted together with their fluxes using the TPT algorithm. By clustering the pathways into folding “tubes”, a more physically meaningful picture of the diversity of folding routes emerges. Stochastic simulations are carried out on the network and a procedure is developed to project sampled trajectories onto the folding tubes. The fluxes through the folding tubes calculated from the stochastic trajectories are in good agreement with the corresponding values obtained from the TPT analysis. The temperature dependence of the ensemble of Trp-Cage folding pathways is investigated. Above the folding temperature, a large number of diverse folding pathways with comparable fluxes flood the energy landscape. 
At low temperature, however, the folding transition is dominated by only a few localized pathways. PMID:21254767
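The committor (Pfold) computation described here, solving a set of linear equations for each node of the kinetic network, can be illustrated on the simplest possible network: a nearest-neighbour chain between an "unfolded" and a "folded" end state. The chain and hop probabilities are illustrative, not the Trp-Cage network:

```python
def committor_chain(n_states, p_right):
    """Committor q_i (probability of reaching the last state before the first)
    for a nearest-neighbour random walk on states 0..n_states-1, solved
    exactly from the linear equations q_i = p*q_{i+1} + (1-p)*q_{i-1}."""
    n = n_states
    # Augmented matrix [A | b] for the boundary-value linear system.
    a = [[0.0] * (n + 1) for _ in range(n)]
    a[0][0] = 1.0                      # boundary: q_0 = 0 (unfolded)
    a[n - 1][n - 1] = 1.0              # boundary: q_{n-1} = 1 (folded)
    a[n - 1][n] = 1.0
    for i in range(1, n - 1):
        a[i][i - 1] = -(1.0 - p_right)
        a[i][i] = 1.0
        a[i][i + 1] = -p_right
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    q = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = a[r][n] - sum(a[r][c] * q[c] for c in range(r + 1, n))
        q[r] = s / a[r][r]
    return q

q = committor_chain(4, 0.5)
# For an unbiased walk the committor is linear in the state index: 0, 1/3, 2/3, 1.
print([round(v, 3) for v in q])
```

On a real kinetic network the same linear system is assembled from the transition probabilities of every node, with the folded and unfolded macrostates supplying the boundary conditions.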
A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.
Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L
2016-10-01
Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were the result not only of stochastic but also of deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.
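The core idea of surrogate testing, destroy the temporal structure of the observed signal while preserving its distribution and then compare a statistic, can be illustrated with the simplest surrogate, a random shuffle. This generic shuffle and the lag-1 autocorrelation statistic are illustrative stand-ins, not the article's method:

```python
import random

def shuffle_surrogate(series, rng):
    """Random-shuffle surrogate: same amplitude distribution as the
    observed series, temporal ordering destroyed."""
    s = list(series)
    rng.shuffle(s)
    return s

def lag1_autocorr(series):
    """Lag-1 autocorrelation, a simple probe of temporal structure."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    return cov / var

rng = random.Random(7)
# A strongly autocorrelated "observed" signal (AR(1) process, phi = 0.9).
x = [0.0]
for _ in range(4999):
    x.append(0.9 * x[-1] + rng.gauss(0.0, 1.0))

obs = lag1_autocorr(x)
surr = lag1_autocorr(shuffle_surrogate(x, rng))
# The observed signal retains its structure; the surrogate loses it.
print(obs > 0.8, abs(surr) < 0.1)
```

If the observed statistic falls outside the distribution obtained from an ensemble of surrogates, the null hypothesis of a purely stochastic origin is rejected, which is the logic the article applies with entropy estimates on discrete joint-angle series.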
Buckee, Caroline O; Recker, Mario; Watkins, Eleanor R; Gupta, Sunetra
2011-09-13
Many highly diverse pathogen populations appear to exist stably as discrete antigenic types despite evidence of genetic exchange. It has been shown that this may arise as a consequence of immune selection on pathogen populations, causing them to segregate permanently into discrete nonoverlapping subsets of antigenic variants to minimize competition for available hosts. However, discrete antigenic strain structure tends to break down under conditions where there are unequal numbers of allelic variants at each locus. Here, we show that the inclusion of stochastic processes can lead to the stable recovery of discrete strain structure through loss of certain alleles. This explains how pathogen populations may continue to behave as independently transmitted strains despite inevitable asymmetries in allelic diversity of major antigens. We present evidence for this type of structuring across global meningococcal isolates in three diverse antigens that are currently being developed as vaccine components.
De Lara, Michel
2006-05-01
In their 1990 paper Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments, Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild completely its vegetative body (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the environmental variability type: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of different mathematical assumptions.
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
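The key approximation step named in this abstract, replacing exponentially distributed firing times by geometric distributions on a discrete time grid, is easy to sketch. The rate and step size below are illustrative assumptions:

```python
import math
import random

def geometric_firing_time(rate, step, rng):
    """Discrete-time firing: at each tick of length `step` the transition
    fires with probability p = 1 - exp(-rate * step), so the number of
    ticks is geometrically distributed and the firing time approximates
    an Exp(rate) random variable as step -> 0."""
    p = 1.0 - math.exp(-rate * step)
    ticks = 1
    while rng.random() >= p:
        ticks += 1
    return ticks * step

rng = random.Random(3)
rate, step = 2.0, 0.01
samples = [geometric_firing_time(rate, step, rng) for _ in range(20000)]
mean_t = sum(samples) / len(samples)
# The mean is step / (1 - exp(-rate * step)), which tends to 1/rate as step -> 0.
print(round(mean_t, 2))
```

A deterministic firing time is recovered as the degenerate case p = 1 after a fixed number of ticks, which is why the DDSPN formalism can mix deterministic and stochastic timing in one underlying DTMC.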
Discrete, continuous, and stochastic models of protein sorting in the Golgi apparatus
Gong, Haijun; Guo, Yusong; Linstedt, Adam
2017-01-01
The Golgi apparatus plays a central role in processing and sorting proteins and lipids in eukaryotic cells. Golgi compartments constantly exchange material with each other and with other cellular components, allowing them to maintain and reform distinct identities despite dramatic changes in structure and size during cell division, development, and osmotic stress. We have developed three minimal models of membrane and protein exchange in the Golgi—a discrete, stochastic model, a continuous ordinary differential equation model, and a continuous stochastic differential equation model—each based on two fundamental mechanisms: vesicle-coat-mediated selective concentration of cargoes and soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins during vesicle formation, and SNARE-mediated selective fusion of vesicles. By exploring where the models differ, we hope to discover whether the discrete, stochastic nature of vesicle-mediated transport is likely to have appreciable functional consequences for the Golgi. All three models show similar ability to restore and maintain distinct identities over broad parameter ranges. They diverge, however, in conditions corresponding to collapse and reassembly of the Golgi. The results suggest that a continuum model provides a good description of Golgi maintenance but that considering the discrete nature of vesicle-based traffic is important to understanding assembly and disassembly of the Golgi. Experimental analysis validates a prediction of the models that altering guanine nucleotide exchange factor expression levels will modulate Golgi size. PMID:20365406
About the discrete-continuous nature of a hematopoiesis model for Chronic Myeloid Leukemia.
Gaudiano, Marcos E; Lenaerts, Tom; Pacheco, Jorge M
2016-12-01
Blood of mammals is composed of a variety of cells suspended in a fluid medium known as plasma. Hematopoiesis is the biological process of birth, replication and differentiation of blood cells. Despite being essentially a stochastic phenomenon involving a huge number of discrete entities, blood formation naturally has an associated continuous dynamics, because the cellular populations can - on average - easily be described by (e.g.) differential equations. This deterministic dynamics by no means contemplates some important stochastic aspects related to abnormal hematopoiesis, which are especially significant for studying certain blood cancer diseases. For instance, by mere stochastic competition against the normal cells, leukemic cells sometimes do not reach the population threshold needed to kill the organism. Of course, a pure discrete model able to follow the stochastic paths of billions of cells is computationally impossible. In order to avoid this difficulty, we seek a trade-off between the computationally feasible and the biologically realistic, deriving an equation able to size conveniently both the discrete and continuous parts of a model for hematopoiesis in terrestrial mammals, in the context of Chronic Myeloid Leukemia. Assuming the cancer is originated from a single stem cell inside the bone marrow, we also deduce a theoretical formula for the probability of non-diagnosis as a function of the mammal average adult mass. In addition, the cellular dynamics analysis in this work may shed light on Peto's paradox, which is shown here as an emergent property of the discrete-continuous nature of the system.
Constant pressure and temperature discrete-time Langevin molecular dynamics
NASA Astrophysics Data System (ADS)
Grønbech-Jensen, Niels; Farago, Oded
2014-11-01
We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
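The time-step-robust stochastic thermostat this abstract builds on can be illustrated with a generic discrete-time Langevin integrator. The sketch below uses the well-known BAOAB splitting on a 1D harmonic oscillator as a stand-in, not the authors' specific thermostat-barostat equations, and all parameters are illustrative:

```python
import math
import random

def baoab_harmonic(k_spring, gamma, kT, dt, n_steps, rng):
    """Sample a 1D harmonic oscillator (unit mass) with a discrete-time
    Langevin thermostat (BAOAB splitting).  Returns the time-averaged
    <x^2>, which should match the equipartition value kT / k_spring."""
    m = 1.0
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt((1.0 - c1 * c1) * kT / m)
    x, v = 1.0, 0.0
    acc = 0.0
    for _ in range(n_steps):
        v += 0.5 * dt * (-k_spring * x) / m      # B: half kick
        x += 0.5 * dt * v                        # A: half drift
        v = c1 * v + c2 * rng.gauss(0.0, 1.0)    # O: exact friction + noise
        x += 0.5 * dt * v                        # A: half drift
        v += 0.5 * dt * (-k_spring * x) / m      # B: half kick
        acc += x * x
    return acc / n_steps

rng = random.Random(0)
msd = baoab_harmonic(k_spring=1.0, gamma=1.0, kT=1.0, dt=0.1, n_steps=200000, rng=rng)
# Equipartition: <x^2> = kT / k_spring = 1.
print(round(msd, 2))
```

The benchmark in the abstract plays the same role as the equipartition check here: configurational averages from the discrete-time equations are compared against exact statistical results to verify insensitivity to the chosen time step.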
NASA Astrophysics Data System (ADS)
Santillán, Moisés; Qian, Hong
2013-01-01
We investigate the internal consistency of a recently developed mathematical thermodynamic structure across scales, between a continuous stochastic nonlinear dynamical system, i.e., a diffusion process with Langevin and Fokker-Planck equations, and its emergent discrete, inter-attractoral Markov jump process. We analyze how the system’s thermodynamic state functions, e.g. free energy F, entropy S, entropy production ep, free energy dissipation Ḟ, etc., are related when the continuous system is described with coarse-grained discrete variables. It is shown that the thermodynamics derived from the underlying, detailed continuous dynamics gives rise to exactly the free-energy representation of Gibbs and Helmholtz. That is, the system’s thermodynamic structure is the same as if one only takes a middle road and starts with the natural discrete description, with the corresponding transition rates empirically determined. By natural we mean in the thermodynamic limit of a large system, with an inherent separation of time scales between inter- and intra-attractoral dynamics. This result generalizes a fundamental idea from chemistry, and the theory of Kramers, by incorporating thermodynamics: while a mechanical description of a molecule is in terms of continuous bond lengths and angles, chemical reactions are phenomenologically described by a discrete representation, in terms of exponential rate laws and a stochastic thermodynamics.
Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing
NASA Technical Reports Server (NTRS)
Jones, Robert L.; Goode, Plesent W. (Technical Monitor)
2000-01-01
The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit in modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
Path integrals and large deviations in stochastic hybrid systems.
Bressloff, Paul C; Newby, Jay M
2014-04-01
We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
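A stochastic hybrid system of the kind defined in this abstract, continuous variables following a piecewise-deterministic flow between jumps of a discrete Markov variable, can be sketched minimally with one relaxing variable driven by a two-state switch (a caricature of a gated ion channel). The model and rates are illustrative assumptions:

```python
import math
import random

def hybrid_time_average(k_on, k_off, tau, t_end, dt, rng):
    """Piecewise-deterministic sketch: a continuous variable x relaxes
    toward the current discrete state s (dx/dt = (s - x) / tau), while s
    jumps between 0 and 1 as a two-state Markov process (rates k_on: 0->1,
    k_off: 1->0).  Returns the time average of x."""
    s, x, t = 0, 0.0, 0.0
    acc, steps = 0.0, 0
    next_jump = -math.log(1.0 - rng.random()) / k_on
    n_steps = int(t_end / dt)
    for _ in range(n_steps):
        if t >= next_jump:
            s = 1 - s
            rate = k_off if s == 1 else k_on
            next_jump = t - math.log(1.0 - rng.random()) / rate
        x += dt * (s - x) / tau      # deterministic flow between jumps
        t += dt
        acc += x
        steps += 1
    return acc / steps

rng = random.Random(5)
mean_x = hybrid_time_average(k_on=1.0, k_off=1.0, tau=0.1,
                             t_end=1000.0, dt=0.002, rng=rng)
# In stationarity <x> = <s> = k_on / (k_on + k_off) = 0.5.
print(round(mean_x, 2))
```

The differential equation is valid only between jumps of s, exactly the structure for which the paper's path-integral representation and large-deviation action principle are constructed.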
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankovsky, Zachary Kyle; Denman, Matthew R.
It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
Optimal Stochastic Modeling and Control of Flexible Structures
1988-09-01
Optimal Control of Stochastic Systems Driven by Fractional Brownian Motions
2014-10-09
Problems for stochastic partial differential equations driven by fractional Brownian motions are explicitly solved. For the control of a continuous time...linear systems with Brownian motion or a discrete time linear system with a white Gaussian noise and costs... Keywords: stochastic optimal control, fractional Brownian motion, stochastic...
Numerical Experiments on Advective Transport in Large Three-Dimensional Discrete Fracture Networks
NASA Astrophysics Data System (ADS)
Makedonska, N.; Painter, S. L.; Karra, S.; Gable, C. W.
2013-12-01
Modeling of flow and solute transport in discrete fracture networks is an important approach for understanding the migration of contaminants in impermeable hard rocks such as granite, where fractures provide dominant flow and transport pathways. The discrete fracture network (DFN) model attempts to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. An integrated DFN meshing [1], flow, and particle tracking [2] simulation capability that enables accurate flow and particle tracking simulation on large DFNs has recently been developed. The new capability has been used in numerical experiments on advective transport in large DFNs with tens of thousands of fractures and millions of computational cells. The modeling procedure starts from the fracture network generation using a stochastic model derived from site data. A high-quality computational mesh is then generated [1]. Flow is then solved using the highly parallel PFLOTRAN [3] code. PFLOTRAN uses the finite volume approach, which is locally mass conserving and thus eliminates mass balance problems during particle tracking. The flow solver provides the scalar fluxes on each control volume face. From the obtained fluxes the Darcy velocity is reconstructed for each node in the network [4]. Velocities can then be continuously interpolated to any point in the domain of interest, thus enabling random walk particle tracking. In order to describe the flow field on fractures intersections, the control volume cells on intersections are split into four planar polygons, where each polygon corresponds to a piece of a fracture near the intersection line. 
Thus, computational nodes lying on fracture intersections have four associated velocities, one on each side of the intersection in each fracture plane [2]. This information is used to route particles arriving at the fracture intersection to the appropriate downstream fracture segment. Verified for small DFNs, the new simulation capability allows accurate particle tracking on more realistic representations of fractured rock sites. In the current work we focus on travel time statistics and spatial dispersion and show numerical results in DFNs of different sizes, fracture densities, and transmissivity distributions. [1] Hyman J.D., Gable C.W., Painter S.L., Automated meshing of stochastically generated discrete fracture networks, Abstract H33G-1403, 2011 AGU, San Francisco, CA, 5-9 Dec. [2] N. Makedonska, S. L. Painter, T.-L. Hsieh, Q.M. Bui, and C. W. Gable., Development and verification of a new particle tracking capability for modeling radionuclide transport in discrete fracture networks, Abstract, 2013 IHLRWM, Albuquerque, NM, Apr. 28 - May 3. [3] Lichtner, P.C., Hammond, G.E., Bisht, G., Karra, S., Mills, R.T., and Kumar, J. (2013) PFLOTRAN User's Manual: A Massively Parallel Reactive Flow Code. [4] Painter S.L., Gable C.W., Kelkar S., Pathline tracing on fully unstructured control-volume grids, Computational Geosciences, 16 (4), 2012, 1125-1134.
PROPAGATOR: a synchronous stochastic wildfire propagation model with distributed computation engine
NASA Astrophysics Data System (ADS)
D'Andrea, M.; Fiorucci, P.; Biondi, G.; Negro, D.
2012-04-01
PROPAGATOR is a stochastic model of forest fire spread, useful as a rapid method for fire risk assessment. The model is based on a 2D stochastic cellular automaton. The domain of simulation is discretized using a square regular grid with cell size of 20x20 meters. The model uses high-resolution information such as elevation and type of vegetation on the ground. Input parameters are wind direction, speed and the ignition point of fire. The simulation of fire propagation is done via a stochastic mechanism of propagation between a burning cell and a non-burning cell belonging to its neighbourhood, i.e. the 8 adjacent cells in the rectangular grid. The fire spreads from one cell to its neighbours with a certain base probability, defined using vegetation types of two adjacent cells, and modified by taking into account the slope between them, wind direction and speed. The simulation is synchronous, and takes into account the time needed by the burning fire to cross each cell. Vegetation cover, slope, wind speed and direction affect the fire-propagation speed from cell to cell. The model simulates several mutually independent realizations of the same stochastic fire propagation process. Each of them provides a map of the area burned at each simulation time step. Propagator simulates self-extinction of the fire, and the propagation process continues until at least one cell of the domain is burning in each realization. The output of the model is a series of maps representing the probability of each cell of the domain to be affected by the fire at each time-step: these probabilities are obtained by evaluating the relative frequency of ignition of each cell with respect to the complete set of simulations. Propagator is available as a module in the OWIS (Opera Web Interfaces) system. The model simulation runs on a dedicated server and it is remote controlled from the client program, NAZCA. 
Ignition points of the simulation can be selected directly in a high-resolution, three-dimensional graphical representation of the Italian territory within NAZCA. The other simulation parameters, namely wind speed and direction, number of simulations, computing grid size and temporal resolution, can be selected from within the program interface. The output is shown in real time during the simulation and is also available off-line and on the DEWETRA system, a Web-GIS-based system for environmental risk assessment developed according to OGC-INSPIRE standards. The model execution is very fast, providing a full forecast for the scenario in a few minutes, and can be useful for real-time active fire management and suppression.
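The cellular-automaton mechanism described above can be sketched in a few lines. This is a deliberately minimal illustration: the constant spread probability, grid size, and step count below are invented placeholders, whereas PROPAGATOR modulates the per-pair probability by vegetation type, slope, and wind, and works on real terrain data.

```python
import random

def simulate_fire(n, ignition, p_spread=0.35, steps=50, rng=None):
    """One realization of a simple stochastic CA fire spread on an n x n grid.
    Fire jumps from a burning cell to each of its 8 neighbours with a fixed
    probability; the run ends on self-extinction (no burning cells left)."""
    rng = rng or random.Random()
    burning = {ignition}
    burned = set(burning)
    for _ in range(steps):
        new = set()
        for (i, j) in burning:
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    nb = (i + di, j + dj)
                    if 0 <= nb[0] < n and 0 <= nb[1] < n and nb not in burned:
                        if rng.random() < p_spread:
                            new.add(nb)
        burned |= new
        burning = new
        if not burning:  # self-extinction
            break
    return burned

def burn_probability(n, ignition, realizations=200, seed=1):
    """Relative frequency with which each cell burns across independent runs,
    i.e. the probability maps produced as model output."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(realizations):
        for cell in simulate_fire(n, ignition, rng=rng):
            counts[cell] = counts.get(cell, 0) + 1
    return {c: k / realizations for c, k in counts.items()}
```

Averaging many independent realizations in `burn_probability` mirrors how the model turns an ensemble of stochastic runs into a per-cell risk map.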
Review of stochastic hybrid systems with applications in biological systems modeling and analysis.
Li, Xiangfang; Omotere, Oluwaseyi; Qian, Lijun; Dougherty, Edward R
2017-12-01
Stochastic hybrid systems (SHS) have attracted a lot of research interest in recent years. In this paper, we review some of the recent applications of SHS to biological systems modeling and analysis. Due to the nature of molecular interactions, many biological processes can be conveniently described as a mixture of continuous and discrete phenomena employing SHS models. With the advancement of SHS theory, it is expected that insights can be obtained about biological processes such as drug effects on gene regulation. Furthermore, combined with advanced experimental methods, in silico simulations using SHS modeling techniques can be carried out for massive and rapid verification or falsification of biological hypotheses. The hope is to substitute for costly and time-consuming in vitro or in vivo experiments, or to provide guidance for those experiments and generate better hypotheses.
Effect of Stochastic Charge Fluctuations on Dust Dynamics
NASA Astrophysics Data System (ADS)
Matthews, Lorin; Shotorban, Babak; Hyde, Truell
2017-10-01
The charging of particles in a plasma environment occurs through the collection of electrons and ions on the particle surface. Depending on the particle size and the plasma density, the standard deviation of the number of collected elementary charges, which fluctuates due to the randomness of the times of collisions with electrons or ions, may be a significant fraction of the equilibrium charge. We use a discrete stochastic charging model to simulate the variations in charge across the dust surface as well as in time. The resultant asymmetric particle potentials, even for spherical grains, have a significant impact on the particle coagulation rate as well as on the structure of the resulting aggregates. We compare the effects on particle collisions and growth in typical laboratory and astrophysical plasma environments. This work was supported by the National Science Foundation under Grant PHY-1414523.
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.
2014-11-01
In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. 
A second interest is to indicate how future developments can be safely built, which is also relevant for stochastic subgrid models for particle-laden flows in the context of Large Eddy Simulations.
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan
This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies of two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for the discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.
Brownian aggregation rate of colloid particles with several active sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nekrasov, Vyacheslav M.; Yurkin, Maxim A.; Chernyshev, Andrei V., E-mail: chern@ns.kinetics.nsc.ru
2014-08-14
We theoretically analyze the aggregation kinetics of colloid particles with several active sites. Such particles (so-called "patchy particles") are well known as chemically anisotropic reactants, but the corresponding rate constant of their aggregation has not yet been established in a convenient analytical form. Using a kinematic approximation for the diffusion problem, we derived an analytical formula for the diffusion-controlled reaction rate constant between two colloid particles (or clusters) with several small active sites under the following assumptions: the relative translational motion is Brownian diffusion, and the isotropic stochastic reorientation of each particle is Markovian and arbitrarily correlated. This formula was shown to produce accurate results in comparison with more sophisticated approaches. Also, to account for the case of a low number of active sites per particle, we used a Monte Carlo stochastic algorithm based on the Gillespie method. Simulations showed that such a discrete model is required when this number is less than 10. Finally, we applied the developed approach to the simulation of immunoagglutination, assuming that the formed clusters have fractal structure.
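The Gillespie-based discrete model mentioned above can be sketched as follows. The site-binding bookkeeping and all parameter values are our illustrative assumptions, not the authors' code: each cluster carries a count of free active sites, a binding event between clusters i and j fires with propensity proportional to s_i*s_j, and merging consumes one site on each partner.

```python
import math, random

def gillespie_aggregation(n_particles=50, sites=4, k=1.0, t_end=5.0, seed=7):
    """Direct-method SSA for patchy-particle aggregation (a sketch).
    free_sites holds one entry per cluster: its number of unreacted sites."""
    rng = random.Random(seed)
    free_sites = [sites] * n_particles
    t = 0.0
    while True:
        pairs = [(i, j) for i in range(len(free_sites))
                 for j in range(i + 1, len(free_sites))]
        props = [k * free_sites[i] * free_sites[j] for i, j in pairs]
        a0 = sum(props)
        if a0 == 0.0:          # no reactive pairs left
            break
        t += -math.log(rng.random()) / a0   # exponential waiting time
        if t > t_end:
            break
        r, acc = rng.random() * a0, 0.0     # pick a reaction channel
        for (i, j), a in zip(pairs, props):
            acc += a
            if acc >= r:
                break
        merged = free_sites[i] + free_sites[j] - 2   # two sites bind
        free_sites = [s for idx, s in enumerate(free_sites)
                      if idx not in (i, j)]
        free_sites.append(merged)
    return free_sites
```

The O(n^2) pair enumeration is acceptable at this scale; a production SSA would update propensities incrementally.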
Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
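The setup above (dice-throw fault growth optimized by finite-horizon dynamic programming) can be sketched with a deliberately simplified toy. The threshold, horizon, penalty, and binary action set below are invented for illustration, and the paper's random-walk model of exogenous load is collapsed into a controllable load decision:

```python
# Hypothetical toy: fault size f grows each step by a dice throw d in {1..6}
# times the load l in {0,1} chosen by the controller; the running reward is
# the load delivered, and reaching the fault threshold F incurs a penalty.
F, HORIZON, PENALTY = 30, 8, 100.0

def expected_value(f, t, _cache={}):
    """Finite-horizon DP: optimal expected reward-to-go from fault size f at
    time t, averaging over the six equiprobable dice outcomes."""
    if f >= F:
        return -PENALTY
    if t == HORIZON:
        return 0.0
    key = (f, t)
    if key not in _cache:
        best = float("-inf")
        for load in (0, 1):       # shed load (no growth) or deliver load
            ev = load + sum(expected_value(f + d * load, t + 1)
                            for d in range(1, 7)) / 6.0
            best = max(best, ev)
        _cache[key] = best
    return _cache[key]
```

Since shedding load always yields zero reward and no growth, the optimal value is bounded between 0 and the horizon length, and the DP trades immediate reward against the expected penalty.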
Hybrid discrete/continuum algorithms for stochastic reaction networks
Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...
2014-10-22
Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd-order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this hybrid approach is explored for a two-species circadian model, with computational efficiency gains of about one order of magnitude.
Theory of relativistic Brownian motion: the (1+3) -dimensional case.
Dunkel, Jörn; Hänggi, Peter
2005-09-01
A theory for (1+3) -dimensional relativistic Brownian motion under the influence of external force fields is put forward. Starting out from a set of relativistically covariant, but multiplicative Langevin equations we describe the relativistic stochastic dynamics of a forced Brownian particle. The corresponding Fokker-Planck equations are studied in the laboratory frame coordinates. In particular, the stochastic integration prescription--i.e., the discretization rule dilemma--is elucidated (prepoint discretization rule versus midpoint discretization rule versus postpoint discretization rule). Remarkably, within our relativistic scheme we find that the postpoint rule (or the transport form) yields the only Fokker-Planck dynamics from which the relativistic Maxwell-Boltzmann statistics is recovered as the stationary solution. The relativistic velocity effects become distinctly more pronounced by going from one to three spatial dimensions. Moreover, we present numerical results for the asymptotic mean-square displacement of a free relativistic Brownian particle moving in 1+3 dimensions.
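The prepoint/midpoint/postpoint dilemma can be illustrated on a scalar SDE with multiplicative noise. This sketch is not the relativistic dynamics of the paper, just a minimal demonstration that the three rules integrate the same noise path to different endpoints; the drift-free equation and coefficients are our own toy choices.

```python
import math, random

def simulate(rule, b=lambda x: 0.1 * x, x0=1.0, n=2000, dt=1e-3, seed=42):
    """Euler-type integration of dX = b(X) dW under one of the three
    discretization rules. The noise amplitude is evaluated at the prepoint
    (Ito), midpoint (Stratonovich), or postpoint ("transport" form) of each
    step; the mid/postpoint values come from a predictor step."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        if rule == "prepoint":
            amp = b(x)
        else:
            x_pred = x + b(x) * dW      # predictor for the end of the step
            if rule == "midpoint":
                amp = 0.5 * (b(x) + b(x_pred))
            else:                        # postpoint
                amp = b(x_pred)
        x += amp * dW
    return x
```

Running the three rules on an identical noise realization (same seed) yields three distinct trajectories, which is exactly why the choice of rule matters for which stationary distribution the associated Fokker-Planck dynamics produces.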
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximated solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance-reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance-reduced method.
Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li
2018-05-01
This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. The event-based communication, signal quantization, and random packet dropout problems are also studied due to the limited communication resources. With the help of switched system theory and stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted "domains" associated with calcium influx are small enough (e.g., 10−17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume was unrealistically large and/or the kinetics of the calcium binding were sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M
2008-06-01
Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.
García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G
2017-08-01
The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economy and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
Stochastic dynamics of time correlation in complex systems with discrete time
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail
2000-11-01
In this paper we present a concept for the description of random processes in complex systems with discrete time. It involves the description of the kinetics of discrete processes by means of a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We have introduced the dynamic (time-dependent) information Shannon entropy Si(t), where i=0,1,2,3,..., as an information measure of the stochastic dynamics of time correlation (i=0) and time memory (i=1,2,3,...). The set of functions Si(t) constitutes a quantitative measure of time correlation disorder (i=0) and time memory disorder (i=1,2,3,...) in a complex system. The theory developed starts from a careful analysis of time correlation involving the dynamics of a vector set of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). We carry out the analysis of the vectors' dynamics employing finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF leads to the construction of a set of projection operators by use of the scalar product operation. Harnessing the infinite set of orthogonal dynamic random variables, on the basis of the Gram-Schmidt orthogonalization procedure, leads to an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). The solution of these equations yields recurrence relations between the TCFs and MFs of senior and junior orders. This offers new opportunities for detecting the frequency power spectra of the entropy function Si(t) for time correlation (i=0) and time memory (i=1,2,3,...). The results obtained offer considerable scope for the analysis of the stochastic dynamics of discrete random processes in complex systems.
Application of this technique to the analysis of the stochastic dynamics of RR intervals from human ECGs shows convincing evidence of non-Markovian phenomena associated with peculiarities in short- and long-range scaling. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2018-03-01
This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By recurring to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
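The event-triggered idea (transmit only when the measurement deviates sufficiently from the last transmitted value, otherwise let the receiver reuse the held value) can be sketched as follows. This is illustrative only; the paper's mechanism additionally involves quantization, random packet dropout, and variance-constrained filter design, none of which appear here.

```python
import random

def event_triggered_stream(measurements, threshold=0.5):
    """Send-on-delta triggering: a sensor transmits a sample only when it
    differs from the last transmitted value by more than the threshold.
    Returns the transmitted (index, value) pairs and the receiver-side
    held-value reconstruction."""
    sent = []
    held = None
    received = []
    for k, y in enumerate(measurements):
        if held is None or abs(y - held) > threshold:
            held = y           # event: transmit and update the held value
            sent.append((k, y))
        received.append(held)  # receiver always uses the latest held value
    return sent, received

rng = random.Random(0)
ys = [rng.gauss(0, 1) for _ in range(200)]
sent, received = event_triggered_stream(ys)
```

By construction the reconstruction error never exceeds the trigger threshold, which is the basic trade-off the paper exploits: fewer transmissions in exchange for a bounded, designer-chosen error at the receiver.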
Simulating advanced life support systems to test integrated control approaches
NASA Astrophysics Data System (ADS)
Kortenkamp, D.; Bell, S.
Simulations allow for testing of life support control approaches before hardware is designed and built. Simulations also allow for the safe exploration of alternative control strategies during life support operation. As such, they are an important component of any life support research program and testbed. This paper describes a specific advanced life support simulation being created at NASA Johnson Space Center. It is a discrete-event simulation that is dynamic and stochastic. It simulates all major components of an advanced life support system, including crew (with variable ages, weights and genders), biomass production (with scalable plantings of ten different crops), water recovery, air revitalization, food processing, solid waste recycling and energy production. Each component is modeled as a producer of certain resources and a consumer of certain resources. The control system must monitor (via sensors) and control (via actuators) the flow of resources throughout the system to provide life support functionality. The simulation is written in an object-oriented paradigm that makes it portable, extensible and reconfigurable.
Projection scheme for a reflected stochastic heat equation with additive noise
NASA Astrophysics Data System (ADS)
Higa, Arturo Kohatsu; Pettersson, Roger
2005-02-01
We consider a projection scheme as a numerical solution of a reflected stochastic heat equation driven by a space-time white noise. Convergence is obtained via a discrete contraction principle and known convergence results for numerical solutions of parabolic variational inequalities.
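A minimal sketch of such a projection scheme, assuming an explicit finite-difference discretization in space, Dirichlet boundaries, and the obstacle u >= 0 as the reflection constraint (the paper's setting and scheme are more general; the grid sizes and noise scaling below are our illustrative choices):

```python
import math, random

def reflected_heat_step(u, dt, dx, sigma, rng):
    """One explicit Euler step of u_t = u_xx + sigma * space-time white noise,
    followed by projection onto the constraint u >= 0 (the 'reflection')."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        noise = sigma * rng.gauss(0.0, 1.0) * math.sqrt(dt / dx)
        new[i] = max(0.0, u[i] + dt * lap + noise)   # projection step
    return new

rng = random.Random(3)
n = 21
dx = 1.0 / (n - 1)
dt = 0.25 * dx * dx          # respect the explicit-scheme stability limit
u = [math.sin(math.pi * i * dx) for i in range(n)]
u[0] = u[-1] = 0.0           # Dirichlet boundary conditions
for _ in range(100):
    u = reflected_heat_step(u, dt, dx, 1.0, rng)
```

The `max(0, ...)` projection after each unconstrained step is the numerical counterpart of the reflecting obstacle, and the scheme keeps the solution nonnegative at every step by construction.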
Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation, as well as between a nonlinear sampled-data system and a linearized representation, holds even in a stochastic framework.
Collective behavior of coupled nonuniform stochastic oscillators
NASA Astrophysics Data System (ADS)
Assis, Vladimir R. V.; Copelli, Mauro
2012-02-01
Theoretical studies of synchronization are usually based on models of coupled phase oscillators which, when isolated, have constant angular frequency. Stochastic discrete versions of these uniform oscillators have also appeared in the literature, with equal transition rates among the states. Here we start from the model recently introduced by Wood et al. [K. Wood, C. Van den Broeck, R. Kawai, K. Lindenberg, Universality of synchrony: critical behavior in a discrete model of stochastic phase-coupled oscillators, Phys. Rev. Lett. 96 (2006) 145701], which has a collectively synchronized phase, and parametrically modify the phase-coupled oscillators to render them (stochastically) nonuniform. We show that, depending on the nonuniformity parameter 0≤α≤1, a mean field analysis predicts the occurrence of several phase transitions. In particular, the phase with collective oscillations is stable for the complete graph only for α≤α‧<1. At α=1 the oscillators become excitable elements and the system has an absorbing state. In the excitable regime, no collective oscillations were found in the model.
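A minimal simulation sketch in the spirit of the three-state phase-coupled units of Wood et al.: each unit advances cyclically through states 0, 1, 2 at a rate that grows with the occupation of the next state. The exponential rate form and the synchronous Euler-style update are assumptions for illustration, not the paper's exact parametrization.

```python
import numpy as np

# Three-state stochastic phase-coupled units: state i -> (i+1) mod 3 at rate
# g * exp(a * (n_{i+1} - n_i) / N), i.e. coupling through occupation numbers.
rng = np.random.default_rng(2)
N, a, g, dt, steps = 1000, 2.5, 1.0, 0.01, 3000
counts = np.array([N, 0, 0], dtype=int)     # all units start in state 0
for _ in range(steps):
    moved = np.zeros(3, dtype=int)
    for i in range(3):
        nxt = (i + 1) % 3
        rate = g * np.exp(a * (counts[nxt] - counts[i]) / N)
        p = min(1.0, rate * dt)             # per-unit transition probability
        moved[i] = rng.binomial(counts[i], p)
    for i in range(3):                      # apply all transitions synchronously
        counts[i] -= moved[i]
        counts[(i + 1) % 3] += moved[i]
```

Tracking the complex order parameter of `counts` over time would distinguish the synchronized phase from the incoherent one.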
A computational framework for prime implicants identification in noncoherent dynamic systems.
Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico
2015-01-01
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.
Discrete and broadband electron acceleration in Jupiter's powerful aurora.
Mauk, B H; Haggerty, D K; Paranicas, C; Clark, G; Kollmann, P; Rymer, A M; Bolton, S J; Levin, S M; Adriani, A; Allegrini, F; Bagenal, F; Bonfond, B; Connerney, J E P; Gladstone, G R; Kurth, W S; McComas, D J; Valek, P
2017-09-06
The most intense auroral emissions from Earth's polar regions, called discrete for their sharply defined spatial configurations, are generated by a process involving coherent acceleration of electrons by slowly evolving, powerful electric fields directed along the magnetic field lines that connect Earth's space environment to its polar regions. In contrast, Earth's less intense auroras are generally caused by wave scattering of magnetically trapped populations of hot electrons (in the case of diffuse aurora) or by the turbulent or stochastic downward acceleration of electrons along magnetic field lines by waves during transitory periods (in the case of broadband or Alfvénic aurora). Jupiter's relatively steady main aurora has a power density that is so much larger than Earth's that it has been taken for granted that it must be generated primarily by the discrete auroral process. However, preliminary in situ measurements of Jupiter's auroral regions yielded no evidence of such a process. Here we report observations of distinct, high-energy, downward, discrete electron acceleration in Jupiter's auroral polar regions. We also infer upward magnetic-field-aligned electric potentials of up to 400 kiloelectronvolts, an order of magnitude larger than the largest potentials observed at Earth. Despite the magnitude of these upward electric potentials and the expectations from observations at Earth, the downward energy flux from discrete acceleration is less at Jupiter than that caused by broadband or stochastic processes, with broadband and stochastic characteristics that are substantially different from those at Earth.
NASA Astrophysics Data System (ADS)
McDonough, Kevin K.
The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. 
Examples of these sets for longitudinal and lateral aircraft dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations when the model is only identified at the current trim condition but when these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. The scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.
Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik
2012-05-10
Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. 
STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
2012-01-01
Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. 
Conclusion STEPS simulates models of cellular reaction–diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/ PMID:22574658
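The voxel-based approach that STEPS implements can be illustrated with a deliberately tiny sketch: diffusion jumps between subvolumes are treated as first-order "reactions" alongside the chemistry, and the whole system is advanced with a stochastic simulation algorithm. This toy uses the plain Gillespie direct method on two voxels (not STEPS's composition-rejection method or tetrahedral meshes), and the rates are illustrative assumptions.

```python
import numpy as np

# Two-voxel stochastic reaction-diffusion: species A diffuses between voxels
# at rate d (per molecule) and degrades at rate k, simulated by the direct method.
rng = np.random.default_rng(3)
x = np.array([100, 0])          # copies of A in voxels 0 and 1
d, k = 1.0, 0.1                 # diffusion jump rate, degradation rate
t, t_end = 0.0, 50.0
while t < t_end:
    props = np.array([d * x[0], d * x[1], k * x[0], k * x[1]])
    a0 = props.sum()
    if a0 == 0.0:
        break                   # no molecules left
    t += rng.exponential(1.0 / a0)
    r = rng.choice(4, p=props / a0)
    if r == 0:   x += [-1, 1]   # A jumps voxel 0 -> 1
    elif r == 1: x += [1, -1]   # A jumps voxel 1 -> 0
    elif r == 2: x[0] -= 1      # degradation in voxel 0
    else:        x[1] -= 1      # degradation in voxel 1
```

Scaling this naive channel list to thousands of tetrahedral voxels is exactly where the efficient search-and-update engine described in the abstract matters.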
Derivation and computation of discrete-delay and continuous-delay SDEs in mathematical biology.
Allen, Edward J
2014-06-01
Stochastic versions of several discrete-delay and continuous-delay differential equations, useful in mathematical biology, are derived from basic principles carefully taking into account the demographic, environmental, or physiological randomness in the dynamic processes. In particular, stochastic delay differential equation (SDDE) models are derived and studied for Nicholson's blowflies equation, Hutchinson's equation, an SIS epidemic model with delay, bacteria/phage dynamics, and glucose/insulin levels. Computational methods for approximating the SDDE models are described. Comparisons between computational solutions of the SDDEs and independently formulated Monte Carlo calculations support the accuracy of the derivations and of the computational methods.
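One of the named examples, Hutchinson's equation, can be sketched as a stochastic delay differential equation integrated by an Euler-Maruyama scheme with the delay handled through a history buffer. The multiplicative-noise form and all parameter values below are illustrative assumptions, not the paper's derived model.

```python
import numpy as np

# Euler-Maruyama for a stochastic Hutchinson (delayed logistic) equation:
#   dX = r X(t) (1 - X(t - tau)/K) dt + sigma X(t) dW
rng = np.random.default_rng(4)
r, K, tau, sigma = 1.0, 100.0, 1.0, 0.05
dt, t_end = 0.01, 20.0
lag = int(round(tau / dt))              # delay expressed in time steps
n = int(round(t_end / dt))
X = np.empty(n + 1)
X[0] = 50.0
hist = 50.0                             # constant history on [-tau, 0]
for i in range(n):
    delayed = X[i - lag] if i >= lag else hist
    drift = r * X[i] * (1.0 - delayed / K)
    X[i + 1] = X[i] + drift * dt + sigma * X[i] * np.sqrt(dt) * rng.standard_normal()
    X[i + 1] = max(X[i + 1], 0.0)       # keep the population nonnegative
```

The delayed negative feedback is what produces the oscillatory approach to the carrying capacity K.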
Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking
NASA Astrophysics Data System (ADS)
Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.
2008-12-01
Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. 
Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment: The authors are grateful to SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
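The core of time domain particle tracking reduces to: for each particle, sum one sampled residence time per pathway segment, then turn the resulting arrival times into a breakthrough curve with a kernel estimator. The lognormal residence-time law and the bandwidth rule below are assumed placeholders for the advection-dispersion-retention distributions discussed in the abstract.

```python
import numpy as np

# Time domain particle tracking sketch: arrival time = sum of per-segment
# residence times; breakthrough curve via a Gaussian kernel density estimate.
rng = np.random.default_rng(5)
n_particles, n_segments = 2000, 30
mu, s = 0.0, 0.8                         # lognormal parameters per segment (assumed)
arrival = np.zeros(n_particles)
for _ in range(n_segments):
    arrival += rng.lognormal(mu, s, size=n_particles)  # one segment's residence time
# Post-process arrivals into a normalized breakthrough (mass discharge) curve.
t_grid = np.linspace(0, arrival.max(), 200)
h = 0.1 * arrival.std()                  # simple rule-of-thumb bandwidth
btc = np.exp(-0.5 * ((t_grid[:, None] - arrival[None, :]) / h) ** 2).sum(axis=1)
btc /= n_particles * h * np.sqrt(2 * np.pi)
```

Because segments are sampled independently, the scheme parallelizes trivially over particles, which is what makes the million-particle repository simulations tractable.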
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum deficiencies in terms of computational cost of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
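The space-filling-curve idea behind the load balancing can be sketched concretely: cells are ordered along a Morton (Z-order) curve by bit interleaving, and the ordered list is cut into contiguous chunks of roughly equal particle load, one per rank. The weights and grid below are made up for illustration; the paper's actual refinement and balancing machinery is more elaborate.

```python
# Morton-order load balancing sketch for a 2-D cell grid.
def morton2d(x: int, y: int) -> int:
    """Interleave the bits of x and y into a Z-order index."""
    code = 0
    for i in range(16):                      # 16 bits per coordinate
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def partition(cells, weights, n_ranks):
    """Cut the Morton-ordered cell list into n_ranks contiguous chunks."""
    order = sorted(range(len(cells)), key=lambda k: morton2d(*cells[k]))
    total = sum(weights)
    chunks, cur, acc = [], [], 0.0
    for k in order:
        cur.append(cells[k])
        acc += weights[k]
        if acc >= total / n_ranks and len(chunks) < n_ranks - 1:
            chunks.append(cur)
            cur, acc = [], 0.0
    chunks.append(cur)
    return chunks

cells = [(x, y) for x in range(8) for y in range(8)]
chunks = partition(cells, [1.0] * len(cells), 4)
```

Because the Morton order preserves spatial locality, each contiguous chunk is a spatially compact set of cells, which keeps inter-rank communication low.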
Solutions of burnt-bridge models for molecular motor transport.
Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B; Artyomov, Maxim N
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called "bridges"), is investigated theoretically by analyzing discrete-state stochastic "burnt-bridge" models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed ("burned") with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
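A direct Monte Carlo sketch of the burnt-bridge mechanism: an unbiased hopper on a 1-D lattice with bridge links every N sites; crossing a bridge rightward burns it with probability p, and a burnt bridge cannot be re-crossed leftward, which rectifies diffusion into directed motion. The link-labelling convention below is an assumption for illustration.

```python
import numpy as np

# Discrete-time burnt-bridge random walk.
rng = np.random.default_rng(7)
N, p, steps, walkers = 5, 1.0, 2000, 100
disp = np.empty(walkers)
for w in range(walkers):
    pos, burnt = 0, {-1}                 # the link just behind the start is burnt
    for _ in range(steps):
        step = 1 if rng.random() < 0.5 else -1
        if step == -1 and (pos - 1) in burnt:
            continue                     # reflected off a burnt bridge
        pos += step
        # convention: links whose right endpoint is a multiple of N are bridges
        if step == 1 and pos % N == 0 and rng.random() < p:
            burnt.add(pos - 1)           # burn the bridge just crossed
    disp[w] = pos
```

For p=1 the walker can never retreat past the last burnt bridge, so the ensemble acquires a strictly positive mean velocity, consistent with the exact mapping described in the abstract.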
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC) (ii) the percent determinism (DET) and (iii) RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and threshold radius used to construct the RP.
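For i.i.d. Gaussian data the embedding-dimension-1 recurrence rate has a closed form that makes a nice sanity check: with X_i, X_j independent N(0,1), X_i - X_j is N(0,2), so P(|X_i - X_j| < eps) = erf(eps/2). The sketch below builds the recurrence plot and compares the empirical REC with that value (DET and the entropy estimate are omitted for brevity).

```python
import math
import numpy as np

# Recurrence plot of i.i.d. Gaussian noise with embedding dimension 1.
rng = np.random.default_rng(8)
n, eps = 500, 1.0
x = rng.standard_normal(n)                   # the "time series"
RP = np.abs(x[:, None] - x[None, :]) < eps   # recurrence matrix
off_diag = ~np.eye(n, dtype=bool)            # drop trivial self-recurrences
rec = RP[off_diag].mean()                    # empirical recurrence rate (REC)
rec_theory = math.erf(eps / 2.0)             # since X_i - X_j ~ N(0, 2)
```

For correlated processes such as AR(1) or fractional Gaussian noise the same comparison requires the joint Gaussian probabilities derived in the paper.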
Exact Solutions of Burnt-Bridge Models for Molecular Motor Transport
NASA Astrophysics Data System (ADS)
Morozov, Alexander; Pronina, Ekaterina; Kolomeisky, Anatoly; Artyomov, Maxim
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called ``bridges''), is investigated theoretically by analyzing discrete-state stochastic ``burnt-bridge'' models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (``burned'') with a probability p, creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a new theoretical method is developed, and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics, periodic and random distributions of bridges, and different burning dynamics are analyzed and compared. Theoretical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
Solutions of burnt-bridge models for molecular motor transport
NASA Astrophysics Data System (ADS)
Morozov, Alexander Yu.; Pronina, Ekaterina; Kolomeisky, Anatoly B.; Artyomov, Maxim N.
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called “bridges”), is investigated theoretically by analyzing discrete-state stochastic “burnt-bridge” models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (“burned”) with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
Nishiura, Hiroshi
2011-02-16
Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for the real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance.
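The forecasting machinery can be sketched as a chain of conditional offspring draws: weekly incidence evolves under a branching-process approximation with reproduction number R, and the uncertainty bounds are read off as percentiles across simulated chains. R, the initial count, and the Poisson offspring choice are illustrative assumptions, not the fitted H1N1-2009 estimates.

```python
import numpy as np

# Branching-process forecast with percentile uncertainty bounds.
rng = np.random.default_rng(9)
R, init, weeks, chains = 1.4, 100, 8, 2000
sims = np.empty((chains, weeks))
for s in range(chains):
    c = init
    for w in range(weeks):
        c = rng.poisson(R * c)          # conditional offspring distribution
        sims[s, w] = c
lo, med, hi = np.percentile(sims, [2.5, 50.0, 97.5], axis=0)
```

In the paper's setting R and the initial state would be re-estimated from the surveillance stream each week before the chains are re-simulated.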
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions while subjected to an additive stochastic perturbation with known density.
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
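A stripped-down sketch of the Lazy Updating idea on top of a direct-method SSA: propensities that depend on the hub species (think ATP) reuse a stale hub count until the hub has drifted past a relative threshold, trading a small, bounded propensity error for fewer recomputations. The two-reaction network, rates, and 5% threshold are toy assumptions, not the paper's benchmark models.

```python
import numpy as np

# Direct-method SSA with a lazily updated hub factor in the propensities.
rng = np.random.default_rng(10)
hub, A, B = 10_000, 500, 500          # hub species plus two ordinary species
k1, k2 = 0.0005, 0.0005
threshold = 0.05                      # relative hub drift that forces a refresh
hub_at_update = hub
props = np.array([k1 * hub * A, k2 * hub * B])   # both reactions consume one hub
t, lazy_skips = 0.0, 0
for _ in range(5000):
    a0 = props.sum()
    if a0 == 0.0:
        break                         # both A and B exhausted
    t += rng.exponential(1.0 / a0)
    if rng.random() * a0 < props[0]:
        A -= 1                        # reaction 1: hub + A -> products
    else:
        B -= 1                        # reaction 2: hub + B -> products
    hub -= 1
    if abs(hub - hub_at_update) / hub_at_update > threshold:
        props = np.array([k1 * hub * A, k2 * hub * B])   # eager (exact) refresh
        hub_at_update = hub
    else:
        # lazy branch: A and B are current, but the hub factor is reused
        props = np.array([k1 * hub_at_update * A, k2 * hub_at_update * B])
        lazy_skips += 1
```

With many hub-dependent reactions the lazy branch replaces one propensity recomputation per reaction per event with a single threshold check, which is the source of the reported speedup.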
NASA Astrophysics Data System (ADS)
Fadakar Alghalandis, Younes
2017-05-01
Discrete fracture network engineering (DFNE) is a rapidly growing field that has already attracted many talents from diverse disciplines in academia and industry around the world to challenge difficult problems related to mining, geothermal, civil, oil and gas, water and many other projects. Although a few commercial software packages provide some useful functionality fundamental to DFNE, their cost, closed-source (black box) distribution, and hence limited programmability and tractability encouraged us to respond to this rising demand with a new solution. This paper introduces an open source comprehensive software package for stochastic modeling of fracture networks in two and three dimensions in a discrete formulation. Functionalities included are geometric modeling (e.g., complex polygonal fracture faces, and utilizing directional statistics), simulations, characterizations (e.g., intersection, clustering and connectivity analyses) and applications (e.g., fluid flow). The package is completely written in the Matlab scripting language. Significant efforts have been made to bring maximum flexibility to the functions in order to solve problems in both two and three dimensions in an easy and unified way that is suitable for beginners as well as advanced and experienced users.
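The generation-plus-characterization workflow can be sketched in a few lines: fracture centres from a Poisson process, orientations from a von Mises (directional) distribution, and a segment-intersection count as a crude connectivity measure. All distribution choices below are assumptions for illustration (and in Python rather than the package's Matlab).

```python
import numpy as np

# Stochastic 2-D discrete fracture network: generation and intersection count.
rng = np.random.default_rng(6)
n, length = 60, 0.2
cx, cy = rng.random(n), rng.random(n)             # centres in the unit square
theta = rng.vonmises(mu=0.0, kappa=4.0, size=n)   # directional statistics
dx, dy = 0.5 * length * np.cos(theta), 0.5 * length * np.sin(theta)
p1 = np.stack([cx - dx, cy - dy], axis=1)         # fracture trace endpoints
p2 = np.stack([cx + dx, cy + dy], axis=1)

def ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(a, b, c, d):
    """Proper segment-intersection test via orientation checks."""
    return ccw(a, c, d) != ccw(b, c, d) and ccw(a, b, c) != ccw(a, b, d)

crossings = sum(
    segments_intersect(p1[i], p2[i], p1[j], p2[j])
    for i in range(n) for j in range(i + 1, n)
)
```

The intersection graph built from `crossings` is the starting point for the clustering and connectivity analyses the package provides.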
Multi-level methods and approximating distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
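The multi-level idea can be sketched with a two-level estimator for the mean of a pure birth process under tau-leaping: a cheap coarse-step estimate plus a correction term from paths coupled through shared Poisson counts (split into common and level-specific parts, following the Anderson-Higham coupling), so the correction has small variance. All rates, steps and sample counts are illustrative assumptions.

```python
import numpy as np

# Two-level MLMC for E[X_T] of a birth process (rate b*X) under tau-leaping.
rng = np.random.default_rng(11)
b, X0, T = 1.0, 100, 1.0
tau_c = T / 8                            # coarse step
tau_f = tau_c / 2                        # fine step

def coarse_path():
    X = X0
    for _ in range(8):
        X += rng.poisson(b * X * tau_c)
    return X

def coupled_correction():
    Xc = Xf = X0
    for _ in range(8):
        ac = b * Xc                      # coarse propensity frozen over the step
        for _ in range(2):               # two fine substeps per coarse step
            af = b * Xf
            m = min(ac, af)              # shared part of the two propensities
            common = rng.poisson(m * tau_f)
            Xf += common + rng.poisson((af - m) * tau_f)
            Xc += common + rng.poisson((ac - m) * tau_f)
    return Xf - Xc                       # low-variance correction sample

est = (np.mean([coarse_path() for _ in range(4000)])
       + np.mean([coupled_correction() for _ in range(1000)]))
```

The telescoping sum makes `est` an estimator of the fine-level tau-leap mean, 100·(1 + b·tau_f)^16, obtained with far fewer fine-level paths than a single-level scheme would need.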
Space-Time Discrete KPZ Equation
NASA Astrophysics Data System (ADS)
Cannizzaro, G.; Matetski, K.
2018-03-01
We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.
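A concrete instance of such a space-time discretization, purely for orientation: an explicit update of the KPZ equation dh = (nu·h_xx + (lam/2)·h_x²)dt + sigma·dW on a periodic lattice, with a central-difference choice for the nonlinear product (one of the many admissible discretizations the paper's framework covers; all parameters are assumptions).

```python
import numpy as np

# Explicit space-time discretization of the 1-D KPZ equation on a periodic lattice.
rng = np.random.default_rng(14)
nx, dx = 64, 1.0
nu, lam, sigma, dt, steps = 1.0, 0.5, 0.1, 0.01, 2000
h = np.zeros(nx)
for _ in range(steps):
    lap = np.roll(h, -1) - 2 * h + np.roll(h, 1)          # periodic Laplacian
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)    # central gradient
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)    # space-time white noise
    h = h + dt * (nu * lap / dx**2 + 0.5 * lam * grad**2) + sigma * noise
```

The convergence analysis in the paper is precisely about which such discretizations of the singular product converge to the KPZ solution, and with which renormalization.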
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn
We show that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
On Nash Equilibria in Stochastic Games
2003-10-01
Traditionally, automata theory and verification have considered zero-sum or strictly competitive versions of stochastic games. In these games there are two players ...
Computational study of noise in a large signal transduction network.
Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena
2011-06-21
Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
Stochastic molecular model of enzymatic hydrolysis of cellulose for ethanol production
2013-01-01
Background During cellulosic ethanol production, cellulose hydrolysis is achieved by synergistic action of a cellulase enzyme complex consisting of multiple enzymes with different modes of action. Enzymatic hydrolysis of cellulose is one of the bottlenecks in the commercialization of the process due to low hydrolysis rates and the high cost of enzymes. A robust hydrolysis model that can predict hydrolysis profiles under various scenarios can act as an important forecasting tool to improve the hydrolysis process. However, multiple factors affecting hydrolysis, namely cellulose structure and complex enzyme-substrate interactions during hydrolysis, make it difficult to develop mathematical kinetic models that can simulate hydrolysis in the presence of multiple enzymes with high fidelity. In this study, a comprehensive hydrolysis model based on a stochastic molecular modeling approach, in which each hydrolysis event is translated into a discrete event, is presented. The model captures the structural features of cellulose, enzyme properties (modes of action, synergism, inhibition), and most importantly the dynamic morphological changes in the substrate that directly affect the enzyme-substrate interactions during hydrolysis. Results Cellulose was modeled as a group of microfibrils consisting of elementary fibril bundles, where each elementary fibril was represented as a three-dimensional matrix of glucose molecules. Hydrolysis of cellulose was simulated based on the Monte Carlo simulation technique. Cellulose hydrolysis results predicted by model simulations agree well with experimental data from the literature. Coefficients of determination for model predictions and experimental values were in the range of 0.75 to 0.96 for Avicel hydrolysis by CBH I action. The model was able to simulate the synergistic action of multiple enzymes during hydrolysis. 
The model simulations captured the important experimental observations: the effect of structural properties, enzyme inhibition and enzyme loadings on the hydrolysis, and the degree of synergism among enzymes. Conclusions The model was effective in capturing the dynamic behavior of cellulose hydrolysis during the action of individual as well as multiple cellulases. Simulations were in qualitative and quantitative agreement with experimental data. Several experimentally observed phenomena were simulated without the need for any additional assumptions or parameter changes, confirming the validity of using the stochastic molecular modeling approach to describe cellulose hydrolysis quantitatively and qualitatively. PMID:23638989
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the described seismicity. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for a physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction, allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of physical parameters of the two approaches.
Modeling disease transmission near eradication: An equation free approach
NASA Astrophysics Data System (ADS)
Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan
2015-01-01
Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
Intimate Partner Violence: A Stochastic Model.
Guidi, Elisa; Meringolo, Patrizia; Guazzini, Andrea; Bagnoli, Franco
2017-01-01
Intimate partner violence (IPV) has been a well-studied problem in the past psychological literature, especially through classical methodologies such as qualitative, quantitative and mixed methods. This article introduces two basic stochastic models as an alternative approach to simulate the short- and long-term dynamics of a couple at risk of IPV. In both models, the members of the couple may assume a finite number of states, updating them in a probabilistic way at discrete time steps. After defining the transition probabilities, we first analyze the evolution of the couple in isolation and then we consider the case in which the individuals modify their behavior depending on the perceived violence from other couples in their environment or based on the perceived informal social support. While high perceived violence in other couples may promote the presence of IPV in the couple itself by means of a gender-specific transmission, the gender differences fade out in the case of received informal social support. Despite the simplicity of the two stochastic models, they generate results which compare well with past experimental studies about IPV, and they have important practical implications for prevention interventions in this field. Copyright: © 2016 by Fabrizio Serra editore, Pisa · Roma.
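The discrete-time, finite-state probabilistic updating described above can be sketched with a two-state Markov chain. The states and transition probabilities below are illustrative placeholders, not the calibrated values from the study.

```python
import random

def simulate_couple(p_escalate, p_deescalate, steps, seed=1):
    """Two-state chain: 0 = non-violent state, 1 = violent episode.
    At each discrete time step the state flips with the given probabilities."""
    rng = random.Random(seed)
    state, visits = 0, [0, 0]
    for _ in range(steps):
        visits[state] += 1
        if state == 0:
            state = 1 if rng.random() < p_escalate else 0
        else:
            state = 0 if rng.random() < p_deescalate else 1
    return [v / steps for v in visits]

freq = simulate_couple(p_escalate=0.1, p_deescalate=0.4, steps=100_000)
```

For these probabilities the stationary distribution is (0.8, 0.2), so a long simulation should spend about 20% of steps in the violent state; coupling several such chains through environment-dependent probabilities gives the paper's interacting-couples setting.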
NASA Astrophysics Data System (ADS)
Jin, Wang; Penington, Catherine J.; McCue, Scott W.; Simpson, Matthew J.
2016-10-01
Two-dimensional collective cell migration assays are used to study cancer and tissue repair. These assays involve combined cell migration and cell proliferation processes, both of which are modulated by cell-to-cell crowding. Previous discrete models of collective cell migration assays involve a nearest-neighbour proliferation mechanism where crowding effects are incorporated by aborting potential proliferation events if the randomly chosen target site is occupied. There are two limitations of this traditional approach: (i) it seems unreasonable to abort a potential proliferation event based on the occupancy of a single, randomly chosen target site; and, (ii) the continuum limit description of this mechanism leads to the standard logistic growth function, but some experimental evidence suggests that cells do not always proliferate logistically. Motivated by these observations, we introduce a generalised proliferation mechanism which allows non-nearest neighbour proliferation events to take place over a template of r ≥ 1 concentric rings of lattice sites. Further, the decision to abort potential proliferation events is made using a crowding function, f(C), which accounts for the density of agents within a group of sites rather than dealing with the occupancy of a single randomly chosen site. Analysing the continuum limit description of the stochastic model shows that the standard logistic source term, λ C(1-C), where λ is the proliferation rate, is generalised to a universal growth function, λ C f(C). Comparing the solution of the continuum description with averaged simulation data indicates that the continuum model performs well for many choices of f(C) and r. For nonlinear f(C), the quality of the continuum-discrete match increases with r.
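The generalised source term λ C f(C) above can be illustrated with a simple forward-Euler integration of the continuum limit. The crowding functions and parameter values below are illustrative choices, not the ones analysed in the paper.

```python
def solve_growth(f, lam=1.0, c0=0.01, dt=0.01, t_end=30.0):
    """Forward-Euler integration of the continuum limit dC/dt = lam * C * f(C)."""
    c, t = c0, 0.0
    while t < t_end:
        c += dt * lam * c * f(c)
        t += dt
    return c

logistic = lambda C: 1.0 - C           # f(C) = 1 - C recovers lam*C*(1-C)
nonlinear = lambda C: (1.0 - C) ** 2   # an illustrative nonlinear crowding function

c_log = solve_growth(logistic)
c_non = solve_growth(nonlinear)
```

Both densities saturate at the carrying capacity C = 1, but the nonlinear crowding function approaches it much more slowly, which is the kind of non-logistic behaviour the generalised mechanism is designed to capture.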
Pandiselvi, S; Raja, R; Cao, Jinde; Rajchakit, G; Ahmad, Bashir
2018-01-01
This work addresses the problem of approximating state variables for discrete-time stochastic genetic regulatory networks with leakage, distributed, and probabilistic measurement delays. Here we design a linear estimator in such a way that the concentrations of mRNA and protein can be approximated via known measurement outputs. By utilizing a Lyapunov-Krasovskii functional and some stochastic analysis techniques, we obtain stability conditions for the estimation error system in the form of linear matrix inequalities (LMIs) under which the estimation error dynamics is robustly exponentially stable. The obtained LMI conditions can be solved readily by available software packages. Moreover, the explicit expression of the desired estimator is also given in the main section. Finally, two illustrative mathematical examples are provided to show the advantage of the proposed conceptual results.
Entropy production of doubly stochastic quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de
2016-02-15
We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. 
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
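The multilevel Monte Carlo idea used here, cheap coarse samples corrected by coupled pairs of finer samples, can be sketched on a toy problem. The geometric Brownian motion and its parameters below are illustrative stand-ins for the pore-scale solver, not part of the paper.

```python
import math
import random

def euler_pair(level, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """One coupled fine/coarse Euler-Maruyama sample of GBM X_T.
    The fine path uses 2**level steps; the coarse path uses half as many,
    driven by the same Brownian increments (the MLMC coupling)."""
    n = 2 ** level
    h = T / n
    xf = xc = x0
    dw_pair = 0.0
    for i in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        xf += mu * xf * h + sigma * xf * dw
        dw_pair += dw
        if i % 2 == 1:               # advance the coarse path every two fine steps
            xc += mu * xc * 2 * h + sigma * xc * dw_pair
            dw_pair = 0.0
    return xf, xc

def mlmc_estimate(max_level, samples_per_level, seed=0):
    """Telescoping sum E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            xf, xc = euler_pair(level, rng)
            acc += xf if level == 0 else xf - xc
        total += acc / n
    return total

est = mlmc_estimate(max_level=4, samples_per_level=[4000, 2000, 1000, 500, 250])
```

Because the level corrections have small variance, few samples are needed at the expensive fine levels; for this toy SDE the estimate should land near the exact mean exp(mu*T) ≈ 1.051.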
Comparing root architectural models
NASA Astrophysics Data System (ADS)
Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan
2017-04-01
Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when grown in opaque soil. Thus, root architectural models have been widely used and been further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models where roots are described by connected line segments. These models differ (a) in their model concepts, such as the description of distance between branches based on a prescribed distance (inter-nodal distance) or based on a prescribed time interval. Furthermore, these models differ (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing, branching angles are treated. Based on the example of two such different root models, the root growth module of R-SWMS and RootBox, we show the impact of these differences on simulated root architecture and aggregated information computed from this detailed simulation results, taking into account the stochastic nature of those models. References Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. Modelling root-soil interactions using three-dimensional models of root growth, architecture and function (2013) Plant and Soil, 372 (1-2), pp. 93 - 124. Gregory (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.
Fluctuations and Noise in Stochastic Spread of Respiratory Infection Epidemics in Social Networks
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat; Emelyanova, Natalya; Demin, Sergey; Gafarov, Fail; Hänggi, Peter; Yulmetyeva, Dinara
2003-05-01
For the analysis of epidemic and disease dynamics complexity, it is necessary to understand the basic principles and notions of its spreading in long-time memory media. Here we consider the problem from a theoretical and practical viewpoint, presenting quantitative evidence confirming the existence of stochastic long-range memory and robust chaos in a real time series of infections of the human upper respiratory tract. In this work we present a new statistical method for analyzing the epidemic spread of grippe and acute upper respiratory tract infections by means of the theory of discrete non-Markov stochastic processes. We use the results of our recent theory (Phys. Rev. E 65, 046107 (2002)) for the study of statistical effects of memory in real data series describing the epidemic dynamics of human acute respiratory tract infections and grippe. The obtained results testify to the possibility of a strict quantitative description of the regular and stochastic components in the epidemic dynamics of social networks, taking into account time discreteness and effects of statistical memory.
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2016-02-01
We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.
A stochastic-field description of finite-size spiking neural networks
Longtin, André
2017-01-01
Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. 
Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Discrete-element simulation of sea-ice mechanics: Contact mechanics and granular jamming
NASA Astrophysics Data System (ADS)
Damsgaard, A.; Adcroft, A.; Sergienko, O. V.; Stern, A. A.
2017-12-01
Lagrangian models of sea-ice dynamics offer several advantages over Eulerian continuum methods. Spatial discretization on the ice-floe scale is natural for Lagrangian models, which additionally offer the convenience of being able to handle arbitrary sea-ice concentrations. This is likely to improve model performance in ice-marginal zones with strong advection. Furthermore, phase transitions in granular rheology around the jamming limit, such as observed when sea ice moves through geometric confinements, include sharp thresholds in effective viscosity which are typically ignored in Eulerian models. Granular jamming is a stochastic process dependent on having the right grains in the right place at the right time, and the jamming likelihood over time can be described by a probabilistic model. Difficult to parameterize in continuum formulations, jamming occurs naturally in dense granular systems simulated in a Lagrangian framework, and is a very relevant process controlling sea-ice transport through narrow straits. We construct a flexible discrete-element framework for simulating Lagrangian sea-ice dynamics at the ice-floe scale, forced by ocean and atmosphere velocity fields. Using this framework, we demonstrate that frictionless contact models based on compressive stiffness alone are unlikely to jam, and describe two different approaches based on friction and tensile strength which both result in increased bulk shear strength of the granular assemblage. The frictionless but cohesive contact model, with certain tensile strength values, can display jamming behavior which on the large scale is very similar to a more complex and realistic model with contact friction and ice-floe rotation.
Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
2011-01-01
Background Real-time forecasting of epidemics, especially forecasting based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance. PMID:21324153
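A discrete-time branching-process forecast of the kind described above can be sketched by simulating many offspring chains and reading off empirical prediction bounds. The reproduction number, initial case count, and Poisson offspring law below are illustrative assumptions, not the fitted values from the study.

```python
import math
import random

def forecast_weekly_incidence(r0, init_cases, weeks, n_paths=2000, seed=0):
    """Branching-process forecast: each case in week t generates
    Poisson(r0) cases in week t+1. Returns, per week, the median and
    approximate 95% prediction bounds across simulated paths."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication algorithm; adequate for small lam
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= l:
                return k
            k += 1

    paths = []
    for _ in range(n_paths):
        cases, series = init_cases, []
        for _ in range(weeks):
            cases = sum(poisson(r0) for _ in range(cases))
            series.append(cases)
        paths.append(series)

    summary = []
    for t in range(weeks):
        col = sorted(p[t] for p in paths)
        summary.append((col[len(col) // 2],           # median
                        col[int(0.025 * len(col))],   # lower bound
                        col[int(0.975 * len(col))]))  # upper bound
    return summary

summary = forecast_weekly_incidence(r0=1.4, init_cases=10, weeks=4)
```

With r0 above one, the median forecast grows week over week while the bounds widen, which is the qualitative behaviour of the pre-peak forecasts discussed in the abstract.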
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering block-grade uncertainty. We propose an alternative, stochastic optimization approach: the stochastic optimal pit is computed on block expected profits, rather than expected grades, obtained from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pits. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with treatment costs but decrease with mining costs; the relative gains over the simulated pit approach increase with both treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical or simulated pit approaches, both for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
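The core distinction between optimizing on expected grades and on expected profits can be illustrated with a one-block sketch. The price and cost figures and the Gaussian grade realizations below are hypothetical, not from the paper.

```python
import random

def block_profit(grade, price=10.0, treat_cost=4.0, mine_cost=1.0):
    """Profit of one mined block: treat as ore only if revenue covers the
    treatment cost, otherwise send to waste (hypothetical cost figures)."""
    return max(grade * price - treat_cost, 0.0) - mine_cost

def expected_profit(simulated_grades):
    """Stochastic approach: average profit over conditional simulations."""
    return sum(block_profit(g) for g in simulated_grades) / len(simulated_grades)

rng = random.Random(1)
# 5000 conditional realizations of a single block's grade (illustrative)
sims = [max(rng.gauss(0.4, 0.2), 0.0) for _ in range(5000)]
e_grade = sum(sims) / len(sims)

# Classical approach: profit of the expected grade.  Because the ore/waste
# cutoff makes profit convex in grade, the stochastic value is never lower
# (Jensen's inequality).
assert expected_profit(sims) >= block_profit(e_grade)
```

The convexity introduced by the ore/waste decision is exactly why optimizing on expected profits, rather than on expected grades, changes the optimal pit.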
Delay chemical master equation: direct and closed-form solutions
Leier, Andre; Marquez-Lago, Tatiana T.
2015-01-01
The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived. PMID:26345616
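For concreteness, a minimal delay-SSA trajectory sampler for a single delayed degradation reaction can be sketched as below. This is an illustrative "consuming" variant (the molecule is committed at initiation and removed at completion, a fixed delay tau later), not the authors' implementation.

```python
import heapq
import random

def delay_ssa(x0, k, tau, t_end, seed=0):
    """Delay-SSA sketch for a single delayed degradation X -> 0: reactions
    initiate at rate k * (uncommitted X), and the molecule is removed only
    tau time units later ('consuming' delayed-reaction variant)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    free = x0      # molecules not yet committed to a pending reaction
    pending = []   # min-heap of completion times
    history = [(t, x)]
    while t < t_end:
        a = k * free
        t_next = t + rng.expovariate(a) if a > 0 else float("inf")
        if pending and pending[0] <= min(t_next, t_end):
            t = heapq.heappop(pending)   # a delayed reaction completes now
            x -= 1
        elif t_next <= t_end:
            t = t_next                   # a new reaction initiates
            free -= 1
            heapq.heappush(pending, t + tau)
        else:
            break
        history.append((t, x))
    return history
```

Trajectories sampled this way are, per the abstract, exact realizations of the underlying delay chemical master equation for this reaction scheme.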
Modelling the interaction between flooding events and economic growth
NASA Astrophysics Data System (ADS)
Grames, J.; Prskawetz, A.; Grass, D.; Blöschl, G.
2015-06-01
Socio-hydrology describes the interaction between the socio-economy and water. Recent models analyze the interplay of community risk-coping culture, flooding damage and economic growth (Di Baldassarre et al., 2013; Viglione et al., 2014). These models descriptively explain the feedbacks between socio-economic development and natural disasters such as floods. In contrast to these descriptive models, our approach develops an optimization model in which the intertemporal decisions of an economic agent interact with the hydrological system. To build this first economic growth model describing the interaction between the consumption and investment decisions of an economic agent and the occurrence of flooding events, we transform an existing descriptive stochastic model into a deterministic optimization model. The intermediate step is to formulate and simulate a descriptive deterministic model. We develop a periodic water function to approximate the former discrete stochastic time series of rainfall events. Due to the non-autonomous, exogenous periodic rainfall function, the long-term paths of consumption and investment will be periodic.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the desirable property of mean-square stability. The proposed controller is suboptimal in the sense that the control structure is restricted to be linear; because of the i.i.d. assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method, and the infinite-horizon control problem is formulated as a classical minimization problem. Assuming a solution to the minimization problem exists, the total system is shown to be mean-square stable under certain observability conditions. Computer simulations illustrate these conditions.
Dual adaptive dynamic control of mobile robots using neural networks.
Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato
2009-02-01
This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.
Positive feedback can lead to dynamic nanometer-scale clustering on cell membranes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wehrens, Martijn; Rein ten Wolde, Pieter; Mugler, Andrew, E-mail: amugler@purdue.edu
2014-11-28
Clustering of molecules on biological membranes is a widely observed phenomenon. A key example is the clustering of the oncoprotein Ras, which is known to be important for signal transduction in mammalian cells. Yet, the mechanism by which Ras clusters form and are maintained remains unclear. Recently, it has been discovered that activated Ras promotes further Ras activation. Here we show using particle-based simulation that this positive feedback is sufficient to produce persistent clusters of active Ras molecules at the nanometer scale via a dynamic nucleation mechanism. Furthermore, we find that our cluster statistics are consistent with experimental observations of the Ras system. Interestingly, we show that our model does not support a Turing regime of macroscopic reaction-diffusion patterning, and therefore that the clustering we observe is a purely stochastic effect, arising from the coupling of positive feedback with the discrete nature of individual molecules. These results underscore the importance of stochastic and dynamic properties of reaction diffusion systems for biological behavior.
A new computer code for discrete fracture network modelling
NASA Astrophysics Data System (ADS)
Xu, Chaoshui; Dowd, Peter
2010-03-01
The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
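A marked point process of the kind the software implements can be sketched in a few lines. The homogeneous Poisson location process and the uniform-orientation, exponential-length marks below are illustrative choices only, since the package supports several location processes and arbitrary mark distributions.

```python
import math
import random

def simulate_fractures_2d(intensity, region=(0.0, 0.0, 10.0, 10.0), seed=0):
    """Marked-point-process sketch of a 2D fracture network: centres follow
    a homogeneous Poisson process over the region; the marks are orientation
    (uniform on [0, pi)) and trace length (exponential, mean 1).  These
    distribution choices are illustrative, not from the paper."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = region
    target = intensity * (x1 - x0) * (y1 - y0)
    # Poisson count = number of unit-rate exponential arrivals before target
    n, acc = 0, rng.expovariate(1.0)
    while acc < target:
        n += 1
        acc += rng.expovariate(1.0)
    fractures = []
    for _ in range(n):
        cx, cy = rng.uniform(x0, x1), rng.uniform(y0, y1)  # fracture centre
        theta = rng.uniform(0.0, math.pi)                  # orientation mark
        half = 0.5 * rng.expovariate(1.0)                  # half trace length
        dx, dy = half * math.cos(theta), half * math.sin(theta)
        fractures.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return fractures
```

Virtual sampling tools such as scanline sampling would then operate on the returned segment list, intersecting each fracture trace with a sampling line.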
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, M.; Jayko, K.; Bowles, A.
1986-10-01
A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, together with an oil-spill-trajectory model, comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The distribution of animals is represented in space and time by discrete points, each of which may represent one or more whales. The movement of a whale point is governed by a random-walk algorithm which stochastically follows a migratory pathway.
Mimicking Nonequilibrium Steady States with Time-Periodic Driving
2016-08-29
[Abstract garbled in extraction; recoverable content:] The work shows how to mimic nonequilibrium steady states with time-periodic driving, and vice versa, within the theoretical framework of discrete-state stochastic thermodynamics. In the nonequilibrium steady-state paradigm, a system driven by fixed thermodynamic forces, such as temperature gradients or chemical potential differences, reaches a steady state; the paper relates this setting to stochastic pumps.
A parallel program for numerical simulation of discrete fracture network and groundwater flow
NASA Astrophysics Data System (ADS)
Huang, Ting-Wei; Liou, Tai-Sheng; Kalatehjari, Roohollah
2017-04-01
The ability to model fluid flow in a Discrete Fracture Network (DFN) is critical to various applications such as exploration of reserves in geothermal and petroleum reservoirs, geological sequestration of carbon dioxide and final disposal of spent nuclear fuel. Although several commercial or academic DFN flow simulators are already available (e.g., FracMan and DFNWORKS), challenges in terms of computational efficiency and three-dimensional visualization remain, which motivated the development of a new DFN and flow simulator. The new simulator, DFNbox, was written in C++ under the cross-platform software development framework provided by Qt. DFNbox integrates the following capabilities into a user-friendly drop-down menu interface: DFN simulation and clipping, 3D mesh generation, fracture data analysis, connectivity analysis, flow path analysis and steady-state groundwater flow simulation. All three-dimensional visualization graphics were developed using the free OpenGL API. As in other DFN simulators, fractures are conceptualized as a random point process in space, with stochastic characteristics represented by orientation, size, transmissivity and aperture. Fracture meshing was implemented by Delaunay triangulation for visualization, but not for flow simulation. The boundary element method was used for flow simulation, so that only the unknown heads or fluxes along exterior and intersection boundaries are needed to solve the flow field in the DFN. Parallel computation was adopted in DFNbox wherever possible; for example, the time-consuming sequential code for fracture clipping has been completely replaced by a highly efficient parallel version. This greatly enhances computational efficiency, especially on multi-thread platforms. DFNbox has been successfully tested on Windows and Linux systems with equally good performance.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard, whether exact algorithms or approximate methods are used. For very complex networks, however, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available: logic sampling, the first proposed stochastic method for Bayesian networks; the likelihood weighting algorithm, the most commonly used simulation method because of its simplicity and efficiency; the Markov blanket scoring method; and the importance sampling algorithm. In this paper, we first briefly review and compare these simulation methods; we then propose an improved importance sampling algorithm, linear Gaussian importance sampling for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and the evidence in the network. One of the most important features of the new method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS on a 16-node linear Gaussian model and a 6-node general hybrid model. Comparison with well-known methods such as the junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
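Likelihood weighting, the baseline method the paper compares against, reduces to a few lines on a toy two-node network A -> B. The network structure and all probabilities below are hypothetical, chosen only to make the estimate checkable against Bayes' rule.

```python
import random

def likelihood_weighting(p_a, p_b_given_a, b_obs, n=20000, seed=0):
    """Likelihood weighting on a toy two-node network A -> B (binary):
    sample A from its prior, weight each sample by P(B = b_obs | A),
    and return the weighted estimate of P(A = 1 | B = b_obs)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p_a else 0
        w = p_b_given_a[a] if b_obs else 1.0 - p_b_given_a[a]
        num += w * a
        den += w
    return num / den

# Toy parameters (hypothetical): P(A=1) and P(B=1 | A)
p_a, p_b = 0.3, {0: 0.1, 1: 0.8}
exact = p_a * p_b[1] / (p_a * p_b[1] + (1 - p_a) * p_b[0])  # Bayes' rule
est = likelihood_weighting(p_a, p_b, 1)
assert abs(est - exact) < 0.02
```

Importance sampling schemes such as LGIS generalize this by sampling from a learned proposal distribution instead of the prior, which keeps the weights from collapsing when the evidence is unlikely.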
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on a transitional probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
NASA Astrophysics Data System (ADS)
Horowitz, Jordan M.
2015-07-01
The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcmahon, Benjamin
2009-01-01
We present our methodology and a stochastic discrete-event simulation developed to model the screening of passengers for pandemic influenza at US port-of-entry airports. Our model uniquely combines epidemiological modelling, infected states and conditions of passengers evolving over time, and operational considerations of screening in a single simulation. The simulation begins with international aircraft arrivals to the US. Passengers are then randomly assigned to one of three states: not infected, infected with pandemic influenza, or infected with another respiratory illness. Passengers then pass through various screening layers (i.e. pre-departure screening, en route screening, primary screening and secondary screening) and ultimately exit the system. We track the status of each passenger over time, with a special emphasis on false negatives (i.e. passengers who are infected with pandemic influenza but are not identified as such), as these passengers pose a significant threat: they could unknowingly spread the pandemic influenza virus throughout the nation.
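The layered-screening logic can be sketched as a simple Monte Carlo pass over passengers. The prevalence and sensitivity figures below are placeholders, not the study's calibrated values.

```python
import random

def screen_passengers(n, p_flu, p_other, sensitivities, seed=0):
    """Monte Carlo sketch of layered passenger screening.  Each passenger is
    healthy, infected with pandemic flu, or infected with another respiratory
    illness; every screening layer independently flags an infected passenger
    with the given sensitivity.  All figures are placeholders."""
    rng = random.Random(seed)
    caught = false_neg = 0
    for _ in range(n):
        u = rng.random()
        state = "flu" if u < p_flu else ("other" if u < p_flu + p_other else "healthy")
        if state == "healthy":
            continue
        detected = any(rng.random() < s for s in sensitivities)
        if state == "flu" and detected:
            caught += 1
        elif state == "flu":
            false_neg += 1  # infected traveller slips through every layer
    return caught, false_neg
```

In the full model each layer would have its own sensitivity that depends on the passenger's evolving symptom state, rather than the fixed per-layer values assumed here.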
NASA Astrophysics Data System (ADS)
Torabi, H.; Pariz, N.; Karimpour, A.
2016-02-01
This paper investigates fractional Kalman filters when a time delay enters the observation signal in the discrete-time stochastic fractional-order state-space representation. After reviewing the common fractional Kalman filter, we derive a fractional Kalman filter for time-delay fractional systems, with a detailed derivation given. Fractional Kalman filters are used to recursively estimate the states of fractional-order state-space systems by minimizing a cost function when there is a constant time delay (d) in the observation signal. The problem is solved by converting the filtering problem into a standard d-step prediction problem for delay-free fractional systems.
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model- based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/ subsystem models) and linking them through shared variables/parameters. 
The component model is expressed as operating modes of the component and conditions for transitions between these various modes. Faults are modeled as transitions whose conditions for transitions are unknown (and have to be inferred through the reasoning process). Finally, the behavior of the components is expressed as a set of variables/ parameters and relations governing the interaction between the variables. The hybrid nature of the systems being modeled is captured by a combination of the above transitional model and behavioral model. Stochasticity is captured as probabilities associated with transitions (indicating the likelihood of that transition being taken), as well as noise on the sensed variables.
Analytical pricing formulas for hybrid variance swaps with regime-switching
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun
2017-11-01
The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rates and regime-switching is considered in this paper. The Heston stochastic volatility model is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to switch according to a continuous-time observable Markov chain. This hybrid model can capture certain macroeconomic conditions, for example the changing phases of the business cycle. The outcome of our regime-switching hybrid model is presented in the form of analytical pricing formulas for variance swaps.
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
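Gillespie's SSA, which the time-ordered product expansion re-derives, can be stated compactly for a birth-death reaction network. This is a minimal sketch of the standard direct method, with illustrative rate constants; it is not tied to the Feynman-diagram derivation itself.

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Gillespie's direct-method SSA for the birth-death network
    0 -> X (rate k_birth) and X -> 0 (rate k_death * X); assumes
    k_birth > 0 so the total propensity never vanishes."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while True:
        a1, a2 = k_birth, k_death * x   # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential waiting time
        if t > t_end:
            break
        x += 1 if rng.random() * a0 < a1 else -1  # pick reaction by propensity
        traj.append((t, x))
    return traj
```

In the time-ordered product picture, each sampled trajectory corresponds to one Feynman diagram: a sequence of reaction events interleaved with no-event propagation intervals.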
Gardiner, Bruce S.; Wong, Kelvin K. L.; Joldes, Grand R.; Rich, Addison J.; Tan, Chin Wee; Burgess, Antony W.; Smith, David W.
2015-01-01
This paper presents a framework for modelling biological tissues based on discrete particles. Cell components (e.g. cell membranes, cell cytoskeleton, cell nucleus) and extracellular matrix (e.g. collagen) are represented using collections of particles. Simple particle to particle interaction laws are used to simulate and control complex physical interaction types (e.g. cell-cell adhesion via cadherins, integrin basement membrane attachment, cytoskeletal mechanical properties). Particles may be given the capacity to change their properties and behaviours in response to changes in the cellular microenvironment (e.g., in response to cell-cell signalling or mechanical loadings). Each particle is in effect an ‘agent’, meaning that the agent can sense local environmental information and respond according to pre-determined or stochastic events. The behaviour of the proposed framework is exemplified through several biological problems of ongoing interest. These examples illustrate how the modelling framework allows enormous flexibility for representing the mechanical behaviour of different tissues, and we argue this is a more intuitive approach than perhaps offered by traditional continuum methods. Because of this flexibility, we believe the discrete modelling framework provides an avenue for biologists and bioengineers to explore the behaviour of tissue systems in a computational laboratory. PMID:26452000
NASA Astrophysics Data System (ADS)
Wan, Li; Zhou, Qinghua
2007-10-01
The stability of stochastic hybrid bidirectional associative memory (BAM) neural networks with discrete delays is considered. Without assuming symmetry of the synaptic connection weights or monotonicity and differentiability of the activation functions, delay-independent sufficient conditions guaranteeing the exponential stability of the equilibrium solution for such networks are given using the nonnegative semimartingale convergence theorem.
Mimicking Nonequilibrium Steady States with Time-Periodic Driving (Open Source)
2016-05-18
Within the theoretical framework of discrete-state stochastic thermodynamics, this work shows how nonequilibrium steady states can be mimicked with time-periodic driving, and vice versa. In the steady-state paradigm, a system driven by fixed thermodynamic forces, such as temperature gradients or chemical potential differences, reaches a steady state; related settings include equilibrium, spontaneous relaxation towards equilibrium, nonequilibrium steady states generated by fixed thermodynamic forces, and stochastic pumps.
NASA Astrophysics Data System (ADS)
Catanzaro, Michael J.; Chernyak, Vladimir Y.; Klein, John R.
2016-12-01
Driven Langevin processes have appeared in a variety of fields due to the relevance of natural phenomena having both deterministic and stochastic effects. The stochastic currents and fluxes in these systems provide a convenient set of observables to describe their non-equilibrium steady states. Here we consider stochastic motion of a (k - 1) -dimensional object, which sweeps out a k-dimensional trajectory, and gives rise to a higher k-dimensional current. By employing the low-temperature (low-noise) limit, we reduce the problem to a discrete Markov chain model on a CW complex, a topological construction which generalizes the notion of a graph. This reduction allows the mean fluxes and currents of the process to be expressed in terms of solutions to the discrete Supersymmetric Fokker-Planck (SFP) equation. Taking the adiabatic limit, we show that generic driving leads to rational quantization of the generated higher dimensional current. The latter is achieved by implementing the recently developed tools, coined the higher-dimensional Kirchhoff tree and co-tree theorems. This extends the study of motion of extended objects in the continuous setting performed in the prequel (Catanzaro et al.) to this manuscript.
Adaptive Microwave Staring Correlated Imaging for Targets Appearing in Discrete Clusters.
Tian, Chao; Jiang, Zheng; Chen, Weidong; Wang, Dongjin
2017-10-21
Microwave staring correlated imaging (MSCI) can achieve ultra-high resolution in real aperture staring radar imaging using the correlated imaging process (CIP) under all-weather and all-day circumstances. The CIP must combine the received echo signal with the temporal-spatial stochastic radiation field. However, a precondition of the CIP is that the continuous imaging region must be discretized to a fine grid, and the measurement matrix should be accurately computed, which makes the imaging process highly complex when the MSCI system observes a wide area. This paper proposes an adaptive imaging approach for the targets in discrete clusters to reduce the complexity of the CIP. The approach is divided into two main stages. First, as discrete clustered targets are distributed in different range strips in the imaging region, the transmitters of the MSCI emit narrow-pulse waveforms to separate the echoes of the targets in different strips in the time domain; using spectral entropy, a modified method robust against noise is put forward to detect the echoes of the discrete clustered targets, based on which the strips with targets can be adaptively located. Second, in a strip with targets, the matched filter reconstruction algorithm is used to locate the regions with targets, and only the regions of interest are discretized to a fine grid; sparse recovery is used, and the band exclusion is used to maintain the non-correlation of the dictionary. Simulation results are presented to demonstrate that the proposed approach can accurately and adaptively locate the regions with targets and obtain high-quality reconstructed images.
Hobolth, Asger; Stone, Eric A
2009-09-01
Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
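As a concrete illustration of approach (1) above, here is a minimal sketch of endpoint-conditioned sampling by naive rejection: forward CTMC paths are simulated from the known starting state and discarded until one ends in the required state. The rate matrix, function names, and two-state example below are hypothetical, and this simple variant omits the efficiency refinements the paper analyzes.

```python
import random

def sample_path(Q, a, T, rng):
    """Forward-simulate one CTMC path from state a over [0, T].
    Q is a rate matrix (list of row lists); returns (jump times, states)."""
    t, s = 0.0, a
    times, states = [0.0], [a]
    while True:
        rate = -Q[s][s]
        if rate == 0.0:
            break  # absorbing state: no further jumps
        dt = rng.expovariate(rate)
        if t + dt > T:
            break  # next jump would fall beyond the interval
        t += dt
        # choose the next state proportionally to the off-diagonal rates
        u, acc = rng.random() * rate, 0.0
        for j, q in enumerate(Q[s]):
            if j == s:
                continue
            acc += q
            if u <= acc:
                s = j
                break
        times.append(t)
        states.append(s)
    return times, states

def endpoint_conditioned_path(Q, a, b, T, rng=None):
    """Modified rejection sampling, naive form: resimulate until the path ends in b."""
    rng = rng or random.Random(0)
    while True:
        times, states = sample_path(Q, a, T, rng)
        if states[-1] == b:
            return times, states
```

For rate matrices where the ending state is rarely reached, direct sampling or uniformization (approaches 2 and 3) avoid the wasted rejected trajectories.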
On the Computational Complexity of Stochastic Scheduling Problems,
1981-09-01
Modeling and analysis of cell membrane systems with probabilistic model checking
2011-01-01
Background Recently there has been a growing interest in the application of Probabilistic Model Checking (PMC) for the formal specification of biological systems. PMC is able to exhaustively explore all states of a stochastic model and can provide valuable insights into its behavior that are more difficult to obtain using only traditional methods for system analysis such as deterministic and stochastic simulation. In this work we propose a stochastic model for the description and analysis of the sodium-potassium exchange pump. The sodium-potassium pump is a membrane transport system present in all animal cells and capable of moving sodium and potassium ions against their concentration gradients. Results We present a quantitative formal specification of the pump mechanism in the PRISM language, taking into consideration a discrete-chemistry approach and the Law of Mass Action. We also present an analysis of the system using quantitative properties in order to verify the pump's reversibility and to understand its behavior using trend labels for the transition rates of the pump reactions. Conclusions Probabilistic model checking can be used along with other well-established approaches such as simulation and differential equations to better understand pump behavior. Using PMC we can determine whether specific events happen, such as the potassium outside the cell being exhausted in all model traces. We can also gain a more detailed perspective on its behavior, such as determining its reversibility and why its normal operation becomes slow over time. This knowledge can be used to direct experimental research and make it more efficient, leading to faster and more accurate scientific discoveries. PMID:22369714
Alemani, Davide; Pappalardo, Francesco; Pennisi, Marzio; Motta, Santo; Brusic, Vladimir
2012-02-28
In the last decades the Lattice Boltzmann method (LB) has been successfully used to simulate a variety of processes. The LB model describes the microscopic processes occurring at the cellular level and the macroscopic processes occurring at the continuum level with a single function, the probability distribution function. Recently, attempts have been made to couple deterministic approaches with probabilistic cellular automata (probabilistic CA) methods, with the aim of modelling the temporal evolution of tumor growth and its three-dimensional spatial evolution, obtaining hybrid methodologies. Despite the good results attained by CA-PDE methods, there is one important issue which has not been completely solved: the intrinsic stochastic nature of the interactions at the interface between the cellular (microscopic) and continuum (macroscopic) levels. CA methods are able to cope with stochastic phenomena because of their probabilistic nature, while PDE methods are fully deterministic. Even if the coupling is mathematically correct, there could be important statistical effects that are missed by the PDE approach. For this reason, to be able to develop and manage a model that takes into account all three levels of complexity (cellular, molecular and continuum), we believe that the PDE should be replaced with a statistical and stochastic model based on the numerical discretization of the Boltzmann equation: the Lattice Boltzmann (LB) method. In this work we introduce a new hybrid method to simulate tumor growth and the immune system, by applying a Cellular Automata Lattice Boltzmann (CA-LB) approach. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for faults in a class of discrete nonlinear systems, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach, in the context of low-rank separated representations, to construct a surrogate of high-dimensional stochastic functions (e.g., solutions of PDEs/ODEs) in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect the optimal model complexity. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, reduces the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
Towards Stability Analysis of Jump Linear Systems with State-Dependent and Stochastic Switching
NASA Technical Reports Server (NTRS)
Tejada, Arturo; Gonzalez, Oscar R.; Gray, W. Steven
2004-01-01
This paper analyzes the stability of hierarchical jump linear systems where the supervisor is driven by a Markovian stochastic process and by the values of the supervised jump linear system's states. The stability framework for this class of systems is developed over infinite and finite time horizons. The framework is then used to derive sufficient stability conditions for a specific class of hybrid jump linear systems with performance supervision. New sufficient stochastic stability conditions for discrete-time jump linear systems are also presented.
On orthogonality preserving quadratic stochastic operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhamedov, Farrukh; Taha, Muhammad Hafizuddin Mohd
2015-05-15
A quadratic stochastic operator (QSO for short) is usually used to present the time evolution of differing species in biology. Some quadratic stochastic operators have been studied by Lotka and Volterra. In the present paper, we first give a simple characterization of Volterra QSOs in terms of the absolute continuity of discrete measures. Further, we introduce a notion of orthogonality-preserving QSO, and describe such operators defined on the two-dimensional simplex. It turns out that orthogonality-preserving QSOs are permutations of Volterra QSOs. The associativity of genetic algebras generated by orthogonality-preserving QSOs is also studied.
Evaluation of a Stochastic Inactivation Model for Heat-Activated Spores of Bacillus spp.
Corradini, Maria G.; Normand, Mark D.; Eisenberg, Murray; Peleg, Micha
2010-01-01
Heat activates the dormant spores of certain Bacillus spp., which is reflected in the “activation shoulder” in their survival curves. At the same time, heat also inactivates the already active and just activated spores, as well as those still dormant. A stochastic model based on progressively changing probabilities of activation and inactivation can describe this phenomenon. The model is presented in a fully probabilistic discrete form for individual and small groups of spores and as a semicontinuous deterministic model for large spore populations. The same underlying algorithm applies to both isothermal and dynamic heat treatments. Its construction does not require the assumption of the activation and inactivation kinetics or knowledge of their biophysical and biochemical mechanisms. A simplified version of the semicontinuous model was used to simulate survival curves with the activation shoulder that are reminiscent of experimental curves reported in the literature. The model is not intended to replace current models to predict dynamic inactivation but only to offer a conceptual alternative to their interpretation. Nevertheless, by linking the survival curve's shape to probabilities of events at the individual spore level, the model explains, and can be used to simulate, the irregular activation and survival patterns of individual and small groups of spores, which might be involved in food poisoning and spoilage. PMID:20453137
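The fully probabilistic discrete form described above can be sketched as a per-step coin-flip model for individual spores, with activation and inactivation probabilities that change over time. The probability functions and parameters below are hypothetical placeholders, not the paper's fitted expressions.

```python
import random

def simulate_spores(n, steps, p_act, p_inact, seed=1):
    """Track n spores through discrete time steps.
    States: 'dormant' -> (activation) -> 'active' -> (inactivation) -> 'dead'.
    Heat also kills still-dormant spores. p_act(t) and p_inact(t) are
    per-step probabilities that may change progressively with time."""
    rng = random.Random(seed)
    states = ["dormant"] * n
    counts = []  # countable (active) spores after each step
    for t in range(steps):
        for i, s in enumerate(states):
            if s == "dormant":
                if rng.random() < p_inact(t):
                    states[i] = "dead"    # heat inactivates a dormant spore
                elif rng.random() < p_act(t):
                    states[i] = "active"  # heat activates it
            elif s == "active":
                if rng.random() < p_inact(t):
                    states[i] = "dead"    # heat inactivates an active spore
        counts.append(states.count("active"))
    return counts

# hypothetical probabilities: constant activation, slowly growing inactivation
counts = simulate_spores(1000, 30,
                         p_act=lambda t: 0.2,
                         p_inact=lambda t: 0.01 + 0.02 * t / 30)
```

With these placeholder probabilities the active count first rises and then declines, reproducing the "activation shoulder" shape the abstract describes.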
Fialho, André S; Oliveira, Mónica D; Sá, Armando B
2011-10-15
Recent reforms in Portugal aimed at strengthening the role of the primary care system, in order to improve the quality of the health care system. Since 2006, new policies aiming to change the organization, incentive structures and funding of the primary health care sector were designed, promoting the evolution of traditional primary health care centres (PHCCs) into a new type of organizational unit, family health units (FHUs). This study aimed to compare the performance of the PHCC and FHU organizational models and to assess the potential gains from converting PHCCs into FHUs. Stochastic discrete-event simulation models for the two organizational types were designed and implemented using Simul8 software. These models were applied to data from nineteen primary care units in three municipalities of the Greater Lisbon area. The conversion of PHCCs into FHUs seems to have the potential to generate substantial improvements in productivity and accessibility, while not having a significant impact on costs. This conversion might entail a 45% reduction in the average number of days required to obtain a medical appointment and a 7% and 9% increase in the average number of medical and nursing consultations, respectively. Reorganization of PHCCs into FHUs might increase the accessibility of patients to services and the efficiency of primary care service provision.
Novel branching particle method for tracking
NASA Astrophysics Data System (ADS)
Ballantyne, David J.; Chan, Hubert Y.; Kouritzin, Michael A.
2000-07-01
Particle approximations are used to track a maneuvering signal given only a noisy, corrupted sequence of observations, as are encountered in target tracking and surveillance. The signal exhibits nonlinearities that preclude the optimal use of a Kalman filter. It obeys a stochastic differential equation (SDE) in a seven-dimensional state space, one dimension of which is a discrete maneuver type. The maneuver type switches as a Markov chain and each maneuver identifies a unique SDE for the propagation of the remaining six state parameters. Observations are constructed at discrete time intervals by projecting a polygon corresponding to the target state onto two dimensions and incorporating the noise. A new branching particle filter is introduced and compared with two existing particle filters. The filters simulate a large number of independent particles, each of which moves with the stochastic law of the target. Particles are weighted, redistributed, or branched, depending on the method of filtering, based on their accordance with the current observation from the sequence. Each filter provides an approximated probability distribution of the target state given all back observations. All three particle filters converge to the exact conditional distribution as the number of particles goes to infinity, but differ in how well they perform with a finite number of particles. Using the exactly known ground truth, the root-mean-squared (RMS) errors in target position of the estimated distributions from the three filters are compared. The relative tracking power of the filters is quantified for this target at varying sizes, particle counts, and levels of observation noise.
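The weight-and-redistribute cycle common to all three filters can be sketched on a toy one-dimensional random-walk signal. This is the standard bootstrap (weight-and-resample) variant, not the paper's branching filter or its seven-dimensional maneuvering target; the model parameters and function name are illustrative.

```python
import math
import random

def bootstrap_filter(obs, n_particles=500, q=1.0, r=1.0, seed=0):
    """Weight/resample particle filter for the toy model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns the filtered posterior mean at each observation time."""
    rng = random.Random(seed)
    parts = [0.0] * n_particles
    means = []
    for y in obs:
        # propagate each particle with the stochastic law of the signal
        parts = [x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight particles by their accordance with the current observation
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # redistribution step: multinomial resampling by weight
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means
```

A branching filter would instead replicate or kill particles in place according to their weights; the propagate/weight skeleton is the same.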
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of over-the-wireless-network estimation techniques under realistic radio channel conditions.
Molecular dynamics simulations of thermally activated edge dislocation unpinning from voids in α -Fe
NASA Astrophysics Data System (ADS)
Byggmästar, J.; Granberg, F.; Nordlund, K.
2017-10-01
In this study, thermal unpinning of edge dislocations from voids in α -Fe is investigated by means of molecular dynamics simulations. The activation energy as a function of shear stress and temperature is systematically determined. Simulations with a constant applied stress are compared with dynamic simulations with a constant strain rate. We found that a constant applied stress results in a temperature-dependent activation energy. The temperature dependence is attributed to the elastic softening of iron. If the stress is normalized with the softening of the specific shear modulus, the activation energy is shown to be temperature-independent. From the dynamic simulations, the activation energy as a function of critical shear stress was determined using previously developed methods. The results from the dynamic simulations are in good agreement with the constant stress simulations, after the normalization. This indicates that the computationally more efficient dynamic method can be used to obtain the activation energy as a function of stress and temperature. The obtained relation between stress, temperature, and activation energy can be used to introduce a stochastic unpinning event in larger-scale simulation methods, such as discrete dislocation dynamics.
Montgomery, Erwin B.; He, Huang
2016-01-01
The efficacy of Deep Brain Stimulation (DBS) for an expanding array of neurological and psychiatric disorders demonstrates directly that DBS affects the basic electroneurophysiological mechanisms of the brain. The increasing array of active electrode configurations, stimulation currents, pulse widths, frequencies, and pulse patterns provides valuable tools to probe electroneurophysiological mechanisms. The extension of basic electroneurophysiological and anatomical concepts using sophisticated computational modeling and simulation has provided relatively straightforward explanations of all the DBS parameters except frequency. This article summarizes current thought about frequency and relevant observations. Current methodological and conceptual errors are critically examined in the hope that future work will not replicate them. One possible alternative theory is presented to provide a contrast to many current theories. DBS, conceptually, is a noisy discrete oscillator interacting with the basal ganglia–thalamic–cortical system of multiple re-entrant, discrete oscillators. Implications for positive and negative resonance, stochastic resonance and coherence, noisy synchronization, and holographic memory (related to movement generation) are presented. The time course of DBS neuronal responses demonstrates evolution of the DBS response consistent with the dynamics of re-entrant mechanisms. Finally, computational modeling demonstrates dynamics identical to those seen in neuronal activities recorded from human and nonhuman primates, illustrating the differences between discrete and continuous harmonic oscillators and the power of conceptualizing the nervous system as composed of interacting discrete nonlinear oscillators. PMID:27548234
Stochastic differential equation (SDE) model of the opening gold share price of Bursa Saham Malaysia
NASA Astrophysics Data System (ADS)
Hussin, F. N.; Rahman, H. A.; Bahar, A.
2017-09-01
The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical method that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, by contrast, uses the transition density function of the lognormal diffusion process, with parameters derived by maximum likelihood. These two methods are used to find parameter estimates for samples of Malaysian gold share price data, namely the Financial Times Stock Exchange (FTSE) Bursa Malaysia EMAS and FTSE Bursa Malaysia EMAS Shariah indices. Modelling of the gold share price is essential since fluctuations in gold affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, owing to its smaller Root Mean Square Error (RMSE).
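The historical method described above reduces to estimating the mean and variance of logarithmic returns and mapping them to the GBM drift and volatility. A minimal sketch, where the function name and the unit time step are assumptions:

```python
import math

def gbm_params_historical(prices, dt=1.0):
    """Historical method: estimate GBM drift mu and volatility sigma
    from log-returns, assuming they are i.i.d. normal.
    Convention: dS = mu * S * dt + sigma * S * dW."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / (n - 1)  # sample variance
    sigma = math.sqrt(var / dt)
    mu = mean / dt + 0.5 * sigma ** 2  # Ito correction from log returns
    return mu, sigma
```

On a deterministically growing price series the estimated volatility collapses to (numerically) zero and the drift recovers the per-step log growth rate.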
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
Artificial Neural Network Metamodels of Stochastic Computer Simulations
1994-08-10
Kilmer, Robert Allen
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either (a) do not provide the most efficient simulation algorithms and are difficult to extend, (b) cannot be easily integrated into other applications, or (c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation, and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
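As an illustration of the exact SSA algorithms that frameworks like FERN provide, here is a minimal sketch of Gillespie's direct method for a well-mixed system. The toy decay network at the end is hypothetical, and FERN's actual Java API differs from this sketch.

```python
import random

def gillespie(x0, stoich, rate_fns, t_end, seed=0):
    """Gillespie's direct method (exact SSA) for a well-mixed system.
    x0: initial species counts; stoich: state-change vectors per reaction;
    rate_fns: propensity functions of the current state."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    traj = [(t, tuple(x))]
    while t < t_end:
        a = [f(x) for f in rate_fns]
        a0 = sum(a)
        if a0 == 0.0:
            break  # no reaction can fire any more
        t += rng.expovariate(a0)  # exponential waiting time to next reaction
        if t > t_end:
            break
        # pick reaction j with probability a_j / a0
        u, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):
            acc += aj
            if u <= acc:
                break
        for k, dv in enumerate(stoich[j]):
            x[k] += dv
        traj.append((t, tuple(x)))
    return traj

# hypothetical toy network: A -> 0 at propensity c * A
traj = gillespie([100], [(-1,)], [lambda x: 0.5 * x[0]], t_end=50.0)
```

Delayed reactions, as in the Barrio et al. Hes1 work above, extend this loop by scheduling some state updates to fire a fixed delay after the reaction is selected.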
An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.
Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C
2013-08-01
To simulate the consequences of management in dairy herds, the use of individual-based herd models is very useful and has become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions in reproductive performance, particularly concerning the calving-to-first-ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phases), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data, from the database used to build the model and from the literature, to validate the model. Despite comprising a whole series of reproductive steps, the model made it possible to simulate realistic global reproduction outputs. It was able to reproduce well the overall reproductive performance observed in farms, in terms of both success rate (recalving rate) and reproduction delays (calving interval). This model is intended to be integrated into herd simulation models to test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.
Driven Langevin systems: fluctuation theorems and faithful dynamics
NASA Astrophysics Data System (ADS)
Sivak, David; Chodera, John; Crooks, Gavin
2014-03-01
Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
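The splitting idea can be made concrete with a minimal sketch of the widely used BAOAB scheme for underdamped Langevin dynamics. This is a generic illustration in assumed reduced units, not the specific integrator or time-step rescaling analyzed in the paper; the harmonic test force in the usage note is likewise an illustrative assumption.

```python
import numpy as np

def baoab_step(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0, rng=None):
    """One BAOAB splitting step for underdamped Langevin dynamics (sketch)."""
    rng = rng or np.random.default_rng()
    c = np.exp(-gamma * dt)                      # O-step damping over one step
    sigma = np.sqrt(kT / mass * (1.0 - c * c))   # matching thermal noise amplitude
    v = v + 0.5 * dt * force(x) / mass           # B: half kick
    x = x + 0.5 * dt * v                         # A: half drift
    v = c * v + sigma * rng.standard_normal()    # O: exact Ornstein-Uhlenbeck update
    x = x + 0.5 * dt * v                         # A: half drift
    v = v + 0.5 * dt * force(x) / mass           # B: half kick
    return x, v
```

For a harmonic well with unit stiffness and kT = 1, the sampled configurational variance should be close to 1, which is one way to probe the kind of stationary-distribution deviation the abstract quantifies.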
Guerrier, Claire; Holcman, David
2016-10-18
Binding of molecules, ions or proteins to small target sites is a generic step of cell activation. This process relies on rare stochastic events in which a particle located in a large bulk has to find small and often hidden targets. We present here a hybrid discrete-continuum model that takes into account a stochastic regime governed by rare events and a continuous regime in the bulk. The rare discrete binding events are modeled by a Markov chain for the encounter of small targets by few Brownian particles, for which the arrival time is Poissonian. The large ensemble of particles is described by mass-action laws. We use this model to predict the time distribution of vesicular release at neuronal synapses. Vesicular release is triggered by the binding of a few calcium ions that can originate either from the synaptic bulk or from entry through calcium channels. We report that the distribution of release times is bimodal even though release is triggered by a single fast action potential. While the first peak follows the stimulation, the second corresponds to the random arrival, over a much longer time scale, of ions located in the synaptic terminal at small vesicular binding targets. To conclude, the present multiscale stochastic modeling approach allows the study of cellular events by integrating discrete molecular events over several time scales.
Using cellular automata to simulate forest fire propagation in Portugal
NASA Astrophysics Data System (ADS)
Freire, Joana; daCamara, Carlos
2017-04-01
Wildfires in the Mediterranean region have severe damaging effects, mainly due to large fire events [1, 2]. In Portugal alone, wildfires have burned over 1.4 million ha in the last decade. Considering the increasing tendency in the extent and severity of wildfires [1, 2], the availability of modeling tools for fire episodes is of crucial importance. Two main types of mathematical models are generally available, namely deterministic and stochastic models. Deterministic models attempt a description of fires, fuel and atmosphere as multiphase continua prescribing mass, momentum and energy conservation, which typically leads to systems of coupled PDEs to be solved numerically on a grid. Simpler descriptions, such as FARSITE, neglect the interaction with the atmosphere and propagate the fire front using wave techniques. One of the most important stochastic models is the Cellular Automaton (CA), in which space is discretized into cells and physical quantities take on a finite set of values at each cell. The cells evolve in discrete time according to a set of transition rules and the states of the neighboring cells. In the present work, we implement and then improve a simple and fast CA model designed to operationally simulate wildfires in Portugal. The reference CA model chosen [3] has the advantage of having been applied successfully in other Mediterranean ecosystems, namely to historical fires in Greece. The model is defined on a square grid with propagation to the 8 nearest and next-nearest neighbors, where each cell is characterized by 4 possible discrete states, corresponding to burning, not-yet burned, fuel-free and completely burned cells, with 4 possible rules of evolution which take into account fuel properties, meteorological conditions, and topography.
As a CA model, it makes it possible to run a very large number of simulations in order to verify and apply the model, and it is easily modified by implementing additional variables and different rules for the evolution of the fire spread. We present and discuss the application of the CA model to the "Tavira wildfire", in which approximately 24,800 ha were burned. The event took place in summer 2012, between July 18 and 21, and spread in the Tavira and São Brás de Alportel municipalities of Algarve, a province on the southern coast of Portugal. [1] DaCamara et al. (2014), International Journal of Wildland Fire 23. [2] Amraoui et al. (2013), Forest Ecology and Management 294. [3] Alexandridis et al. (2008), Applied Mathematics and Computation 204.
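A CA of the kind described, with four discrete states and propagation to the 8 nearest and next-nearest neighbours, can be sketched as follows. The uniform spread probability and one-step burnout here are illustrative assumptions; the reference model of Alexandridis et al. modulates the ignition probability with fuel, meteorological and topographic factors.

```python
import numpy as np

EMPTY, FUEL, BURNING, BURNED = 0, 1, 2, 3    # fuel-free, not-yet burned, burning, burned

def ca_step(grid, p_spread, rng):
    """One synchronous CA update: each burning cell tries to ignite its
    8 neighbours with probability p_spread, then burns out."""
    new = grid.copy()
    n, m = grid.shape
    for i, j in np.argwhere(grid == BURNING):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                a, b = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= a < n and 0 <= b < m \
                        and grid[a, b] == FUEL and rng.random() < p_spread:
                    new[a, b] = BURNING
        new[i, j] = BURNED
    return new
```

Starting from a single ignited cell, repeated calls to `ca_step` trace a stochastic fire perimeter; running many such simulations with different seeds is exactly the Monte Carlo use of CA models mentioned above.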
NASA Astrophysics Data System (ADS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-07-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
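The propensity-bound rejection idea underlying RSSA can be sketched as a single-step thinning procedure. This is a minimal illustration with an assumed (propensity, update) interface, not the full HRSSA with dynamic reaction partitioning or bound maintenance.

```python
import random

def rssa_step(state, reactions, bounds, t, rng):
    """One rejection-based SSA step (sketch).

    reactions: list of (propensity_fn, update_fn) pairs.
    bounds: upper bounds b[j] >= propensity_j(state), valid for the current state.
    Candidates are drawn from the bounds and thinned by rejection, so the exact
    propensity is evaluated only for the selected candidate reaction.
    """
    b0 = sum(bounds)
    while True:
        t += rng.expovariate(b0)               # candidate firing time (thinning)
        r = rng.random() * b0                  # pick a candidate reaction j
        j, acc = 0, bounds[0]
        while r > acc:
            j += 1
            acc += bounds[j]
        if rng.random() * bounds[j] <= reactions[j][0](state):
            return reactions[j][1](state), t   # accepted: fire reaction j
```

When a bound is tight (equal to the true propensity), no candidate is ever rejected and the step reduces to the ordinary direct-method SSA step.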
Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E
2016-12-01
This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results.
Disease management research using event graphs.
Allore, H G; Schruben, L W
2000-08-01
Event Graphs, conditional representations of stochastic relationships between discrete events, simulate disease dynamics. In this paper, we demonstrate how Event Graphs, at an appropriate abstraction level, also extend and organize scientific knowledge about diseases. They can identify promising treatment strategies and directions for further research and provide enough detail for testing combinations of new medicines and interventions. Event Graphs can be enriched to incorporate and validate data and test new theories to reflect an expanding dynamic scientific knowledge base and establish performance criteria for the economic viability of new treatments. To illustrate, an Event Graph is developed for mastitis, a costly dairy cattle disease, for which extensive scientific literature exists. With only a modest amount of imagination, the methodology presented here can be applied to model any disease, whether human, plant, or animal. The Event Graph simulation presented here is currently being used in research and in a new veterinary epidemiology course.
NASA Technical Reports Server (NTRS)
Golias, Mihalis M.
2011-01-01
Berth scheduling is a critical function at marine container terminals, and determining the best berth schedule depends on several factors including the type and function of the port, the size of the port, its location, nearby competition, and the type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objectives of maximizing customer satisfaction and the reliability of the berth schedule, under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithm-based heuristic and a simulation post-Pareto analysis is proposed as the solution approach to the resulting problem. Based on a number of experiments, it is concluded that the proposed berth scheduling policy outperforms the berth scheduling policy in which reliability is not considered.
Fluctuation Induced Almost Invariant Sets
2006-12-28
Just as continuous stochastic systems are described by a (Fokker-)Planck equation for the probability density function [15], discrete dynamical systems benefit from using the Frobenius-Perron (FP) operator formalism [12]. Given a map F and a noise density function ν(x), the stochastic Frobenius-Perron (SFP) operator is defined to be P_F[ρ(x)] = ∫_M ν(x − F(y)) ρ(y) dy. Applications include an example from epidemiology; conclusions are summarized in Section 4.
Cao, Youfang; Liang, Jie
2013-01-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
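The basic mechanism of a weighted SSA, biasing reaction selection while accumulating a likelihood ratio, can be sketched for a birth-death chain. The fixed bias factors `gamma_b`/`gamma_d` are an illustrative simplification of ABSIS's adaptively chosen, state-dependent biases.

```python
import math, random

def weighted_ssa(x0, birth, death, gamma_b, gamma_d, t_end, target, rng):
    """One importance-sampled SSA trajectory of a birth-death chain (sketch).

    Propensities are biased by factors gamma_b, gamma_d while the likelihood
    ratio w is accumulated, so averaging w over trajectories that hit `target`
    estimates the unbiased rare-event probability.
    """
    x, t, w = x0, 0.0, 1.0
    while t < t_end and x != target:
        a = (birth(x), death(x))                  # true propensities
        b = (a[0] * gamma_b, a[1] * gamma_d)      # biased propensities
        a0, b0 = sum(a), sum(b)
        if b0 == 0.0:
            break
        dt = rng.expovariate(b0)
        if t + dt > t_end:
            w *= math.exp(-(a0 - b0) * (t_end - t))   # survival-probability ratio
            t = t_end
            break
        j = 0 if rng.random() * b0 < b[0] else 1
        # density ratio of (dt, j) under the true vs the biased dynamics
        w *= (a[j] * math.exp(-a0 * dt)) / (b[j] * math.exp(-b0 * dt))
        x += 1 if j == 0 else -1
        t += dt
    return x == target, w
```

With unit biases the weight stays exactly 1 and the scheme reduces to the ordinary SSA, which is a useful sanity check before turning the biases on.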
Pricing of swing options: A Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Leow, Kai-Siong
We study the problem of pricing swing options, a class of multiple early exercise options that are traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded from settlement to expiration with the counterparty is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm which is based on piecewise linear concave approximation of value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We present a proof that the algorithm almost surely generates the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example for pricing a natural gas swing call option.
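Under strong simplifying assumptions (i.i.d. discrete prices at each exercise date, no discounting, a hard global limit rather than a penalty term), the backward-induction structure of the pricing problem can be sketched as follows. This is an illustration of the dynamic programming recursion, not the paper's piecewise linear concave approximation scheme.

```python
import numpy as np

def swing_value(price_vals, price_probs, K, q_max, Q_max, T):
    """Backward induction for a simplified swing option (sketch).

    At each of T dates the holder exercises q in {0,...,q_max} units for
    payoff q*(price - K), subject to a global limit Q_max on total volume.
    V[u] is the option value with u units of exercise volume remaining.
    """
    V = np.zeros(Q_max + 1)
    for _ in range(T):                          # step backwards through dates
        Vn = np.zeros_like(V)
        for u in range(Q_max + 1):
            expected = 0.0
            for p, pr in zip(price_vals, price_probs):
                # holder picks the exercise quantity maximizing payoff-to-go
                best = max(q * (p - K) + V[u - q] for q in range(min(q_max, u) + 1))
                expected += pr * best
            Vn[u] = expected
        V = Vn
    return V[Q_max]
```

With ample global volume the value is simply T times the single-date optionality, which gives a quick correctness check.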
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
NASA Astrophysics Data System (ADS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-05-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
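A minimal discrete-time bistable resonator of the kind alluded to can be sketched as below. The quartic double-well form, noise level and forcing are illustrative assumptions, not the optimised resonator design developed in the paper.

```python
import numpy as np

def bistable_resonator(signal, dt=0.01, noise=0.3, rng=None):
    """Discrete-time stochastic-resonance filter (sketch): overdamped particle
    in the double well V(x) = -x**2/2 + x**4/4, driven by the input `signal`."""
    rng = rng or np.random.default_rng()
    x, out = 0.0, []
    for s in signal:
        drift = x - x**3 + s                     # -V'(x) plus input forcing
        x = x + dt * drift + np.sqrt(2.0 * noise * dt) * rng.standard_normal()
        out.append(x)
    return np.array(out)
```

At a suitable noise level the inter-well hopping synchronises with a weak slow input, amplifying it; sweeping `noise` and scoring the output against the input is the kind of tuning an optimisation-based design automates.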
Phase computations and phase models for discrete molecular oscillators.
Suvak, Onder; Demir, Alper
2012-06-11
Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, based on information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete molecular oscillators from sample paths generated by the stochastic simulation algorithm. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in the discrete state space does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations.
Phase computations and phase models for discrete molecular oscillators
2012-01-01
Background Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. Results In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, based on information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete molecular oscillators from sample paths generated by the stochastic simulation algorithm. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. Conclusions The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in the discrete state space does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations. PMID:22687330
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, A. T.; Betzler, B. R.; Burke, T. P.
Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels and (3) a stochastic mixture of the four types of fuel particles where every kernel has its diameter sampled from a continuous probability density function. All of the discrete diameter and continuous diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact where the locations (and kernel diameters for the continuous diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small, but the runtime of the lattice-based fuel model was roughly 20 times shorter than with the stochastic-based fuel model.
Müller, Eike H.; Scheichl, Rob; Shardlow, Tony
2015-01-01
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
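The discrete-increment idea, replacing Gaussian increments by two-point ±√h random variables for weak approximation, can be sketched with a plain Euler-Maruyama integrator. This illustrates the technique itself, not the MLMC splitting integrators of the paper.

```python
import numpy as np

def weak_euler(x0, drift, sigma, h, n_steps, rng):
    """Euler-Maruyama with two-point increments xi = +/-sqrt(h) (sketch).

    The two-point increments match the first two moments of a Gaussian,
    which is sufficient for weak order 1 and cheaper to sample."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        xi = np.sqrt(h) * rng.choice([-1.0, 1.0], size=x.shape)
        x = x + drift(x) * h + sigma(x) * xi
    return x
```

For the Ornstein-Uhlenbeck drift -x the ensemble mean after n steps is (1 - h)^n regardless of the increment distribution, which makes the weak-approximation property easy to check empirically.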
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
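The coarse projective integration loop can be sketched in a few lines. Here the microscopic simulator is stood in for by a generic `fine_step` callable and the coarse slope is estimated from only the last two fine states, both simplifying assumptions relative to the full equation-free framework.

```python
def coarse_projective_step(u, fine_step, n_inner, dt, dt_proj):
    """One coarse projective integration step (sketch).

    Runs n_inner fine steps of size dt, estimates the coarse time derivative
    from the last two fine states, then extrapolates forward by dt_proj,
    saving roughly dt_proj/dt fine steps per outer iteration."""
    history = [u]
    for _ in range(n_inner):
        history.append(fine_step(history[-1], dt))
    slope = (history[-1] - history[-2]) / dt    # finite-difference slope estimate
    return history[-1] + dt_proj * slope        # projective (extrapolation) step
```

Increasing `dt_proj` relative to `dt` increases the computational savings, at the cost of extrapolation error, which mirrors the trade-off described in the abstract.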
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horowitz, Jordan M., E-mail: jordan.horowitz@umb.edu
The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, such as the chemical Langevin equation or the low-noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
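A single Euler-Maruyama step of the chemical Langevin equation can be sketched as below. The clamping of negative propensities to zero is a common practical guard and an assumption of this sketch, not something prescribed by the paper.

```python
import numpy as np

def cle_step(x, stoich, propensities, dt, rng):
    """One Euler-Maruyama step of the chemical Langevin equation (sketch).

    stoich: (n_species, n_reactions) stoichiometry matrix S.
    propensities(x): returns the reaction propensities a_j(x).
    dx = S a(x) dt + S diag(sqrt(a(x) dt)) dW
    """
    a = np.maximum(propensities(x), 0.0)              # guard against negative a_j
    noise = rng.standard_normal(len(a)) * np.sqrt(a * dt)
    return x + stoich @ (a * dt) + stoich @ noise
```

For a simple birth-death species the CLE trajectory fluctuates around the deterministic steady state, which gives a quick check that the diffusion approximation tracks the master-equation mean.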
Adaptive Decision Making Using Probabilistic Programming and Stochastic Optimization
2018-01-01
The approach targets real-world optimization problems with uncertain, discrete predicted demand. To simplify the setting, we further assume that the demands are discrete, taking on values d1, ..., dk with probabilities (conditional on x) (pθ)i.
Discrete Time McKean–Vlasov Control Problem: A Dynamic Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, Huyên, E-mail: pham@math.univ-paris-diderot.fr; Wei, Xiaoli, E-mail: tyswxl@gmail.com
We consider the stochastic optimal control problem of nonlinear mean-field systems in discrete time. We reformulate the problem into a deterministic control problem with marginal distribution as controlled state variable, and prove that dynamic programming principle holds in its general form. We apply our method for solving explicitly the mean-variance portfolio selection and the multivariate linear-quadratic McKean–Vlasov control problem.
Stochastic Adaptive Estimation and Control.
1994-10-26
Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems," SIAM Journal on Control and Optimization, 31, September 1993. …in the hierarchical control of flexible manufacturing systems; in this problem, the model involves a hybrid process in continuous time whose state is… the average cost control problem for discrete-time Markov processes. Our exposition covers from finite to Borel state and action spaces.
On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels
2013-12-01
"Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks", which is to analyze and develop… underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system's parameters. …In this thesis, we studied the performance of Discrete Memoryless Channels (DMC), arising in the context of cooperative underwater wireless sensor networks.
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) effect of time averaging interval on reaction rate analysis, (3) effect of number of simulations on precision of model predictions, and (4) implications of stochastic simulations on optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
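The Gillespie exact stochastic algorithm and the time-weighted averaging of molecule numbers mentioned above can be sketched as follows; the birth-death model and its rate constants are hypothetical stand-ins, not the two-gene network of the paper.

```python
import random, math

# Gillespie's direct method for a hypothetical birth-death gene product:
#   0 -> P (rate k_syn),  P -> 0 (rate k_deg * P),
# with time-weighted averaging of the piecewise-constant molecule number.
def ssa_time_average(k_syn=10.0, k_deg=0.1, t_end=500.0, seed=7):
    rng = random.Random(seed)
    t, p, weighted_sum = 0.0, 0, 0.0
    while t < t_end:
        a1, a2 = k_syn, k_deg * p
        a0 = a1 + a2
        tau = -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        tau = min(tau, t_end - t)
        weighted_sum += p * tau  # P(t) is constant over [t, t + tau)
        t += tau
        if t >= t_end:
            break
        p += 1 if rng.random() * a0 < a1 else -1  # choose which reaction fires
    return weighted_sum / t_end

# Time-weighted average; the stationary mean is k_syn / k_deg = 100.
avg = ssa_time_average()
print(avg)
```

Weighting each state by its holding time tau, rather than sampling at fixed intervals, is exactly the time-weighted averaging issue the abstract raises: naive fixed-interval sampling of such trajectories can bias the estimated mean.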
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models; it is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum…
Selected-node stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Duso, Lorenzo; Zechner, Christoph
2018-04-01
Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.
A continuous stochastic model for non-equilibrium dense gases
NASA Astrophysics Data System (ADS)
Sadr, M.; Gorji, M. H.
2017-12-01
While accurate simulations of dense gas flows far from the equilibrium can be achieved by direct simulation adapted to the Enskog equation, the significant computational demand required for collisions appears as a major constraint. In order to cope with that, an efficient yet accurate solution algorithm based on the Fokker-Planck approximation of the Enskog equation is devised in this paper; the approximation is very much associated with the Fokker-Planck model derived from the Boltzmann equation by Jenny et al. ["A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion," J. Comput. Phys. 229, 1077-1098 (2010)] and Gorji et al. ["Fokker-Planck model for computational studies of monatomic rarefied gas flows," J. Fluid Mech. 680, 574-601 (2011)]. The idea behind these Fokker-Planck descriptions is to project the dynamics of discrete collisions implied by the molecular encounters into a set of continuous Markovian processes subject to the drift and diffusion. Thereby, the evolution of particles representing the governing stochastic process becomes independent from each other and thus very efficient numerical schemes can be constructed. By close inspection of the Enskog operator, it is observed that the dense gas effects contribute further to the advection of molecular quantities. That motivates a modelling approach where the dense gas corrections can be cast in the extra advection of particles. Therefore, the corresponding Fokker-Planck approximation is derived such that the evolution in the physical space accounts for the dense effects present in the pressure, stress tensor, and heat fluxes. Hence the consistency between the devised Fokker-Planck approximation and the Enskog operator is shown for the velocity moments up to the heat fluxes. For validation studies, a homogeneous gas inside a box besides Fourier, Couette, and lid-driven cavity flow setups is considered. 
The results based on the Fokker-Planck model are compared with respect to benchmark simulations, where good agreement is found for the flow field along with the transport properties.
van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F
2013-08-01
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods, particularly when no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
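The expected time spent in each state of a continuous-time Markov model, which the paper obtains in closed form, can be approximated numerically by integrating the Kolmogorov forward equation dp/dt = pQ and accumulating the occupancy integral. The sketch below uses a hypothetical two-state generator and simple Euler steps, not the paper's matrix-exponential solution or its hepatitis B model.

```python
# Expected time spent in each state of a continuous-time Markov model over a
# finite horizon, via Euler integration of dp/dt = p Q and accumulation of
# the occupancy integral  ∫ p_j(t) dt.
def expected_occupancy(Q, p0, horizon, dt=0.001):
    n = len(p0)
    p = list(p0)
    occupancy = [0.0] * n
    for _ in range(round(horizon / dt)):
        for j in range(n):
            occupancy[j] += p[j] * dt          # left-endpoint quadrature
        p = [p[j] + dt * sum(p[i] * Q[i][j] for i in range(n))
             for j in range(n)]                 # one Euler step of dp/dt = pQ
    return occupancy

# Hypothetical 2-state fragment: healthy -> sick at rate 0.2 per year,
# sick -> healthy at rate 0.1 per year, over a 10-year horizon.
Q = [[-0.2, 0.2],
     [0.1, -0.1]]
occ = expected_occupancy(Q, [1.0, 0.0], horizon=10.0)
print(occ)
```

The occupancies must sum to the horizon length (every instant is spent in some state); for this generator the analytic value of the time spent healthy is about 5.44 years.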
Stochastic approach to equilibrium and nonequilibrium thermodynamics.
Tomé, Tânia; de Oliveira, Mário J
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmadi, Rouhollah, E-mail: rouhollahahmadi@yahoo.com; Khamehchi, Ehsan
Conditioning stochastic simulations are very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike the previous techniques that call for an approximate forward model (filter) for integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space where all the primary and secondary data are easily mapped onto. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks and fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data.
NASA Technical Reports Server (NTRS)
Rabadi, Ghaith
2005-01-01
A significant portion of lifecycle costs for launch vehicles is generated during the operations phase. Research indicates that operations costs can account for a large percentage of the total life-cycle costs of reusable space transportation systems. These costs are largely determined by decisions made early during conceptual design. Therefore, operational considerations are an important part of the vehicle design and concept analysis process that needs to be modeled and studied early in the design phase. However, this is a difficult and challenging task due to uncertainties in operations definitions, the dynamic and combinatorial nature of the processes, the lack of analytical models, and the scarcity of historical data during the conceptual design phase. Ultimately, NASA would like to know the best mix of launch vehicle concepts that would meet the missions' launch dates at the minimum cost. To answer this question, we first need to develop a model to estimate the total cost, including the operational cost, to accomplish this set of missions. In this project, we have developed and implemented a discrete-event simulation model using ARENA (a simulation modeling environment) to determine this cost assessment. Discrete-event simulation is widely used in modeling complex systems, including transportation systems, due to its flexibility and ability to capture the dynamics of the system. The simulation model accepts manifest inputs including the set of missions that need to be accomplished over a period of time, the clients (e.g., NASA or DoD) who wish to transport the payload to space, the payload weights, and their destinations (e.g., International Space Station, LEO, or GEO). A user of the simulation model can define an architecture of reusable or expendable launch vehicles to achieve these missions. Launch vehicles may belong to different families where each family may have its own set of resources, processing times, and cost factors.
The goal is to capture the required resource levels of the major launch elements and their required facilities. The model's output can show whether or not a certain architecture of vehicles can meet the launch dates, and if not, how much the delay cost would be. It will also produce aggregate figures of mission cost based on element procurement cost, processing cost, cargo integration cost, delay cost, and mission support cost. One of the most useful features of this model is that it is stochastic: it accepts statistical distributions to represent processing times, mimicking the stochastic nature of real systems.
NASA Astrophysics Data System (ADS)
Liu, Jian; Ruan, Xiaoe
2017-07-01
This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delays occurring in the input and output channels, modelled as 0-1 Bernoulli-type stochastic variables. In both schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration; for the delayed signal of the system output, one scheme substitutes the synchronous predetermined desired trajectory and the other the synchronous output from the previous operation. By means of the mathematical expectation, the tracking performance is analysed, showing that for both linear time-invariant and nonlinear affine systems the two kinds of NILC are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices is of full column rank. Lastly, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.
Explore Stochastic Instabilities of Periodic Points by Transition Path Theory
NASA Astrophysics Data System (ADS)
Cao, Yu; Lin, Ling; Zhou, Xiang
2016-06-01
We consider the noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in the randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis at the small noise limit cannot distinguish the quantitative difference in noise-induced stochastic instabilities among the T periodic points. To attack this problem, we generalize the transition path theory to discrete-time continuous-space stochastic processes. In our first criterion to quantify the relative instability among the T periodic points, we use the distribution of the last passage location related to the transitions from the whole periodic orbit to a prescribed disjoint set. This distribution is related to individual contributions to the transition rate from each periodic point. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current in the transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escaping from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.
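A minimal numerical illustration of the setting above is to simulate the noise-perturbed logistic map and count escapes from the stable periodic orbit; this sketch does not reproduce the authors' transition-path machinery, and the parameter r = 3.2, the noise levels, and the escape criterion are illustrative assumptions.

```python
import random

# Noise-perturbed logistic map x -> r*x*(1-x) + Gaussian noise, at r = 3.2,
# where the deterministic map has a linearly stable period-2 orbit
# (approximately {0.5130, 0.7995}). We count runs that leave the unit
# interval within n_steps, a crude proxy for noise-induced instability.
def escape_fraction(r=3.2, sigma=0.05, n_steps=2000, n_runs=200, seed=3):
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n_runs):
        x = 0.5130  # start near one of the two periodic points
        for _ in range(n_steps):
            x = r * x * (1.0 - x) + rng.gauss(0.0, sigma)
            if not 0.0 < x < 1.0:  # left the unit interval: escaped
                escapes += 1
                break
    return escapes / n_runs

print(escape_fraction())
```

With zero noise the orbit is stable and no run escapes; as sigma grows, escapes become near-certain, which is the small-noise-to-large-noise trend the transition-path analysis quantifies point by point.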
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state-of-the-art algorithms.
Frasca, Mattia; Sharkey, Kieran J
2016-06-21
Understanding the dynamics of spread of infectious diseases between individuals is essential for forecasting the evolution of an epidemic outbreak or for defining intervention policies. The problem is addressed by many approaches including stochastic and deterministic models formulated at diverse scales (individuals, populations) and different levels of detail. Here we consider discrete-time SIR (susceptible-infectious-removed) dynamics propagated on contact networks. We derive a novel set of 'discrete-time moment equations' for the probability of the system states at the level of individual nodes and pairs of nodes. These equations form a set which we close by introducing appropriate approximations of the joint probabilities appearing in them. For the example case of SIR processes, we formulate two types of model, one assuming statistical independence at the level of individuals and one at the level of pairs. From the pair-based model we then derive a model at the level of the population which captures the behavior of epidemics on homogeneous random networks. With respect to their continuous-time counterparts, the models include a larger number of possible transitions from one state to another and joint probabilities with a larger number of individuals. The approach is validated through numerical simulation over different network topologies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
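The discrete-time SIR dynamics on a contact network described above can be sketched as a synchronous individual-based simulation, against which moment equations of this kind are typically validated; the ring network and the transmission and recovery probabilities below are illustrative assumptions, and the paper's moment-closure equations are not reproduced here.

```python
import random

# Synchronous individual-based discrete-time SIR on a contact network:
# each time step, every infectious node transmits along each contact with
# probability beta, then recovers with probability gamma.
def sir_final_size(adj, beta=0.3, gamma=0.1, seed_node=0, rng_seed=11):
    rng = random.Random(rng_seed)
    state = {v: 'S' for v in adj}
    state[seed_node] = 'I'
    while any(s == 'I' for s in state.values()):
        infectious = [v for v, s in state.items() if s == 'I']
        newly = set()
        for v in infectious:
            for u in adj[v]:
                if state[u] == 'S' and rng.random() < beta:
                    newly.add(u)
        for v in infectious:          # recovery applies after transmission
            if rng.random() < gamma:
                state[v] = 'R'
        for u in newly:
            state[u] = 'I'
    return sum(1 for s in state.values() if s == 'R')  # final epidemic size

# Hypothetical ring of 50 individuals, each in contact with two neighbors.
n = 50
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(sir_final_size(adj))
```

Averaging the final size over many seeds gives the population-level quantity that a pair-based moment model aims to predict without simulation.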
Persistence of non-Markovian Gaussian stationary processes in discrete time
NASA Astrophysics Data System (ADS)
Nyberg, Markus; Lizana, Ludvig
2018-04-01
The persistence of a stochastic variable is the probability that it does not cross a given level during a fixed time interval. Although persistence is a simple concept to understand, it is in general hard to calculate. Here we consider zero-mean Gaussian stationary processes in discrete time n. Few results are known for the persistence P0(n) in discrete time, except the large-time behavior, which is characterized by the nontrivial constant θ through P0(n) ~ θ^n. Using a modified version of the independent interval approximation (IIA) that we developed before, we are able to calculate P0(n) analytically in z-transform space in terms of the autocorrelation function A(n). If A(n) → 0 as n → ∞, we extract θ numerically, while if A(n) = 0 for finite n > N, we find θ exactly (within the IIA). We apply our results to three special cases: the nearest-neighbor-correlated "first-order moving average process", where A(n) = 0 for n > 1; the double-exponential-correlated "second-order autoregressive process", where A(n) = c1 λ1^n + c2 λ2^n; and power-law-correlated variables, where A(n) ~ n^(-μ). Apart from the power-law case when μ < 5, we find excellent agreement with simulations.
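A brute-force Monte Carlo check of the persistence P0(n) and the decay constant θ can be sketched as follows; for simplicity this uses a hypothetical first-order autoregressive process with A(n) = λ^n, not the moving-average or power-law cases treated in the paper, and the ratio estimator for θ is only a rough stand-in for the IIA calculation.

```python
import random, math

# Monte Carlo estimate of the discrete-time persistence P0(n): the chance
# that a zero-mean, unit-variance Gaussian stationary sequence stays
# positive for n consecutive steps. Example process: AR(1),
#   x_{t+1} = lam * x_t + eps_t,  with autocorrelation A(n) = lam**n.
def persistence(lam=0.5, n=10, n_samples=100000, seed=5):
    rng = random.Random(seed)
    sd_eps = math.sqrt(1.0 - lam * lam)  # keeps Var(x_t) = 1
    count = 0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)  # draw from the stationary distribution
        ok = x > 0.0
        steps = 1
        while ok and steps < n:
            x = lam * x + rng.gauss(0.0, sd_eps)
            ok = x > 0.0
            steps += 1
        if ok:
            count += 1
    return count / n_samples

p10 = persistence(n=10)
p11 = persistence(n=11, seed=6)
theta = p11 / p10  # rough decay-rate estimate, since P0(n) ~ theta**n
print(p10, theta)
```

Positive correlation (λ > 0) makes θ exceed the uncorrelated value 1/2, which is the qualitative effect the IIA captures analytically.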
Stochastic switching in biology: from genotype to phenotype
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2017-03-01
There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression are often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. 
The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.
ERIC Educational Resources Information Center
Weiss, Charles J.
2017-01-01
An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…
THE DISTRIBUTION OF ROUNDS FIRED IN STOCHASTIC DUELS
This paper continues the development of the theory of stochastic duels to include the distribution of the number of rounds fired. Most generally, the duel between two contestants who fire at each other with constant kill probabilities per round is considered. The time between rounds fired may be… at the beginning of the duel may be limited and is a discrete random variable. Besides the distribution of rounds fired, its first two moments and…
MONALISA for stochastic simulations of Petri net models of biochemical systems.
Balazki, Pavel; Lindauer, Klaus; Einloft, Jens; Ackermann, Jörg; Koch, Ina
2015-07-10
The concept of Petri nets (PN) is widely used in systems biology and allows modeling of complex biochemical systems like metabolic systems, signal transduction pathways, and gene expression networks. In particular, PN allows the topological analysis based on structural properties, which is important and useful when quantitative (kinetic) data are incomplete or unknown. Knowing the kinetic parameters, the simulation of time evolution of such models can help to study the dynamic behavior of the underlying system. If the number of involved entities (molecules) is low, a stochastic simulation should be preferred over the classical deterministic approach of solving ordinary differential equations. The Stochastic Simulation Algorithm (SSA) is a common method for such simulations. The combination of the qualitative and semi-quantitative PN modeling and stochastic analysis techniques provides a valuable approach in the field of systems biology. Here, we describe the implementation of stochastic analysis in a PN environment. We extended MONALISA, an open-source software for creation, visualization and analysis of PN, by several stochastic simulation methods. The simulation module offers four simulation modes, among them the stochastic mode with constant firing rates and Gillespie's algorithm as exact and approximate versions. The simulator is operated by a user-friendly graphical interface and accepts input data such as concentrations and reaction rate constants that are common parameters in the biological context. The key features of the simulation module are visualization of simulation, interactive plotting, export of results into a text file, mathematical expressions for describing simulation parameters, and up to 500 parallel simulations of the same parameter sets. To illustrate the method we discuss a model for insulin receptor recycling as a case study.
We present a software that combines the modeling power of Petri nets with stochastic simulation of dynamic processes in a user-friendly environment supported by an intuitive graphical interface. The program offers a valuable alternative to modeling, using ordinary differential equations, especially when simulating single-cell experiments with low molecule counts. The ability to use mathematical expressions provides an additional flexibility in describing the simulation parameters. The open-source distribution allows further extensions by third-party developers. The software is cross-platform and is licensed under the Artistic License 2.0.
Modelling biochemical reaction systems by stochastic differential equations with reflection.
Niu, Yuanling; Burrage, Kevin; Chen, Luonan
2016-05-07
In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, not in a heuristic way but in a mathematical way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to the previous models. The domain D is actually obtained according to the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method was employed to solve the reflected stochastic differential equation model; it includes three simple steps: the Euler-Maruyama method is applied to the equations first, then we check whether or not the point lies within the domain D, and if not we perform an orthogonal projection. It is found that the projection onto the closure D¯ is the solution to a convex quadratic programming problem. Thus, existing methods for the convex quadratic programming problem can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
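In one dimension the quadratic-programming projection described above reduces to simple clipping at the boundary of D, so the three-step scheme can be sketched compactly; the birth-death chemical Langevin equation and its rate constants below are hypothetical, and this is a sketch of the Euler-Maruyama-plus-projection idea rather than the authors' implementation.

```python
import random, math

# Euler-Maruyama with reflection for a hypothetical 1-D chemical Langevin
# equation (birth-death: 0 -> X at rate k, X -> 0 at rate g*X). In 1-D the
# orthogonal projection onto the domain D = [0, inf) is simple clipping.
def reflected_em(k=5.0, g=1.0, x0=0.0, dt=0.01, t_end=20.0, seed=2):
    rng = random.Random(seed)
    x = x0
    for _ in range(round(t_end / dt)):
        drift = (k - g * x) * dt
        diff = math.sqrt(max(k + g * x, 0.0) * dt) * rng.gauss(0.0, 1.0)
        x = max(x + drift + diff, 0.0)  # projection step onto D = [0, inf)
    return x

samples = [reflected_em(seed=s) for s in range(300)]
print(sum(samples) / len(samples))
```

Every trajectory stays in the physical domain by construction, which is the property the reflected-SDE framework guarantees (and which an unprojected Euler-Maruyama scheme does not).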
Huang, Wei; Shi, Jun; Yen, R T
2012-12-01
The objective of our study was to develop a program for computing the transit-time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data of blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation, in which the connectivity data of pulmonary blood vessels were converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system and to calculate the transit time distributions and blood pressure outputs.
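The probability-matrix random walk underlying such transit-time calculations can be sketched as follows; the four-compartment network, transition probabilities, and per-compartment times are entirely hypothetical placeholders, not the authors' anatomical data.

```python
import random

# Random-walk sketch of transit-time simulation through a hierarchy of
# vessel compartments: a red cell moves compartment-to-compartment according
# to a transition-probability matrix, accumulating a mean residence time per
# compartment, until it reaches the outlet.
def transit_time(P, times, start=0, outlet=3, seed=4):
    rng = random.Random(seed)
    state, total = start, 0.0
    while state != outlet:
        total += times[state]
        r, acc = rng.random(), 0.0
        for nxt, prob in enumerate(P[state]):  # sample the next compartment
            acc += prob
            if r < acc:
                state = nxt
                break
    return total

# Hypothetical 4-compartment chain (arteriole -> capillary -> venule ->
# outlet) with a small chance of re-routing back one level.
P = [[0.0, 1.0, 0.0, 0.0],
     [0.1, 0.0, 0.9, 0.0],
     [0.0, 0.1, 0.0, 0.9],
     [0.0, 0.0, 0.0, 1.0]]
times = [0.2, 0.5, 0.3, 0.0]  # illustrative seconds per compartment
mean = sum(transit_time(P, times, seed=s) for s in range(500)) / 500
print(mean)
```

Collecting the individual transit times into a histogram, rather than just their mean, gives the transit-time frequency distribution the abstract refers to.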
Noise analysis of genome-scale protein synthesis using a discrete computational model of translation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch; Swiss Institute of Bioinformatics
2015-07-28
Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effect in cellular systems has been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study stochastic effect. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows its unique advantages in the modeling and simulation of biochemical systems. The efficiency of hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performances of three ODE solvers.
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank
2017-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Multiscale stochastic simulations of chemical reactions with regulated scale separation
NASA Astrophysics Data System (ADS)
Koumoutsakos, Petros; Feigelman, Justin
2013-07-01
We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ-leaping and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions, with a trade-off in accuracy as controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can, in certain cases, be both faster and more accurate than their stochastic simulation algorithm counterparts.
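For concreteness, a sketch of plain τ-leaping on a single decay reaction; the rate, leap size, and Poisson sampler are illustrative (R-leaping and the propensity-regulation scheme of the paper are further refinements of this idea):

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's multiplicative method; adequate for the small means here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(x0=1000, k=0.1, tau=0.05, t_end=20.0, seed=7):
    """τ-leaping for X -> 0 with propensity a(x) = k*x: each leap fires
    Poisson(a*tau) events at once instead of one event at a time."""
    rng, x, t = random.Random(seed), x0, 0.0
    while t < t_end and x > 0:
        fired = sample_poisson(rng, k * x * tau)
        x = max(0, x - fired)   # clamp: naive leaping can overshoot zero
        t += tau
    return x

remaining = tau_leap_decay()
```

The expected result tracks the deterministic decay `x0 * exp(-k * t_end)`.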
Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa
2010-02-21
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
Stochastic Human Exposure and Dose Simulation Model for Pesticides
SHEDS-Pesticides (Stochastic Human Exposure and Dose Simulation Model for Pesticides) is a physically-based stochastic model developed to quantify exposure and dose of humans to multimedia, multipathway pollutants. Probabilistic inputs are combined in physical/mechanistic algorit...
Environmental Noise Could Promote Stochastic Local Stability of Behavioral Diversity Evolution
NASA Astrophysics Data System (ADS)
Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi
2018-05-01
In this Letter, we investigate stochastic stability in a two-phenotype evolutionary game model for an infinite, well-mixed population undergoing discrete, nonoverlapping generations. We assume that the fitness of a phenotype is an exponential function of its expected payoff following random pairwise interactions whose outcomes randomly fluctuate with time. We show that the stochastic local stability of a constant interior equilibrium can be promoted by the random environmental noise even if the system may display a complicated nonlinear dynamics. This result provides a new perspective for a better understanding of how environmental fluctuations may contribute to the evolution of behavioral diversity.
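The model class described above can be sketched as a discrete-generation frequency update in which fitness is the exponential of expected payoff and payoff entries fluctuate each generation. The payoff matrix and noise level below are illustrative assumptions, not the Letter's:

```python
import math
import random

def evolve(p0=0.3, generations=5000, sigma=0.1, seed=6):
    """Two-phenotype frequency dynamics in an infinite well-mixed
    population with nonoverlapping generations and noisy payoffs."""
    rng, p = random.Random(seed), p0
    for _ in range(generations):
        e = rng.gauss(0.0, sigma)                 # environmental noise
        a11, a12 = 1.0 + e, 3.0                   # payoffs to phenotype 1
        a21, a22 = 2.0, 1.0 + e                   # payoffs to phenotype 2
        pi1 = p * a11 + (1.0 - p) * a12           # expected payoffs
        pi2 = p * a21 + (1.0 - p) * a22
        f1, f2 = math.exp(pi1), math.exp(pi2)     # exponential fitness
        p = p * f1 / (p * f1 + (1.0 - p) * f2)    # discrete replicator step
    return p

p_final = evolve()
```

Without noise this matrix has the interior equilibrium p* = 2/3; stochastic local stability asks whether trajectories started near p* stay near it with high probability under the fluctuations.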
Zonostrophic instability driven by discrete particle noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Onge, D. A.; Krommes, J. A.
The consequences of discrete particle noise for a system possessing a possibly unstable collective mode are discussed. It is argued that a zonostrophic instability (of homogeneous turbulence to the formation of zonal flows) occurs just below the threshold for linear instability. The scenario provides a new interpretation of the random forcing that is ubiquitously invoked in stochastic models such as the second-order cumulant expansion or stochastic structural instability theory; neither intrinsic turbulence nor coupling to extrinsic turbulence is required. A representative calculation of the zonostrophic neutral curve is made for a simple two-field model of toroidal ion-temperature-gradient-driven modes. To the extent that the damping of zonal flows is controlled by the ion-ion collision rate, the point of zonostrophic instability is independent of that rate. Published by AIP Publishing.
NASA Astrophysics Data System (ADS)
Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang
2014-07-01
This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.
Zonostrophic instability driven by discrete particle noise
St-Onge, D. A.; Krommes, J. A.
2017-04-01
The consequences of discrete particle noise for a system possessing a possibly unstable collective mode are discussed. It is argued that a zonostrophic instability (of homogeneous turbulence to the formation of zonal flows) occurs just below the threshold for linear instability. The scenario provides a new interpretation of the random forcing that is ubiquitously invoked in stochastic models such as the second-order cumulant expansion or stochastic structural instability theory; neither intrinsic turbulence nor coupling to extrinsic turbulence is required. A representative calculation of the zonostrophic neutral curve is made for a simple two-field model of toroidal ion-temperature-gradient-driven modes. To the extent that the damping of zonal flows is controlled by the ion-ion collision rate, the point of zonostrophic instability is independent of that rate. Published by AIP Publishing.
The Flow Dimension and Aquifer Heterogeneity: Field evidence and Numerical Analyses
NASA Astrophysics Data System (ADS)
Walker, D. D.; Cello, P. A.; Valocchi, A. J.; Roberts, R. M.; Loftis, B.
2008-12-01
The Generalized Radial Flow approach to hydraulic test interpretation infers the flow dimension to describe the geometry of the flow field during a hydraulic test. Noninteger values of the flow dimension often are inferred for tests in highly heterogeneous aquifers, yet subsequent modeling studies typically ignore the flow dimension. Monte Carlo analyses of detailed numerical models of aquifer tests examine the flow dimension for several stochastic models of heterogeneous transmissivity, T(x). These include multivariate lognormal, fractional Brownian motion, a site percolation network, and discrete linear features with lengths distributed as a power law. The behavior of the simulated flow dimensions is compared to the flow dimensions observed for multiple aquifer tests in a fractured dolomite aquifer in the Great Lakes region of North America. The combination of multiple hydraulic tests, observed fracture patterns, and the Monte Carlo results is used to screen models of heterogeneity and their parameters for subsequent groundwater flow modeling. The comparison shows that discrete linear features with lengths distributed as a power law appear to be the most consistent with observations of the flow dimension in fractured dolomite aquifers.
The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations
2011-12-01
modeling packages that illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. Many combat...(DES) models, often referred to as “next-event” (Law and Kelton 2000), or discrete time simulation (DTS), commonly referred to as “time-step.”...Many combat models use DTS as their simulation time advance mechanism
Calibration of DEM parameters on shear test experiments using Kriging method
NASA Astrophysics Data System (ADS)
Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier
2017-06-01
Calibration of powder mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is made on two parameters: Young's modulus and the friction coefficient. The determination of the minimal number of grains that has to be used is a critical step: simulations with too few grains would not represent the realistic behavior of the powder, while using a huge number of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, which is the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point that has to be simulated. This stochastic criterion combines the two quantities provided by the Kriging method, a prediction of the objective function and an estimate of the associated error, and is thus able to quantify the improvement in the minimization that new simulations at specified DEM parameters would bring.
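The Expected Improvement criterion mentioned above has a closed form given the Kriging prediction (mean mu, standard deviation sigma) and the current best objective value. A sketch for minimization follows; the DEM objective itself is not reproduced:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI = (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu) / sigma,
    with Phi/phi the standard normal CDF/PDF. It balances exploitation
    (low predicted mu) against exploration (high predictive error sigma)."""
    if sigma <= 0.0:
        return 0.0                       # no predictive uncertainty left
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf
```

EGO then runs the DEM simulation at the maximizer of EI, refits the Kriging surrogate, and repeats.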
NASA Astrophysics Data System (ADS)
Rimo, Tan Hauw Sen; Chai Tin, Ong
2017-12-01
Capacity utilization (CU) measurement is an important task in a manufacturing system, especially in a make-to-order (MTO) manufacturing system with product customization, for predicting the capacity to meet future demand. A stochastic discrete-event simulation is developed using ARENA software to determine CU and the capacity gap (CG) in a short-run production function. This study focused on machinery breakdown and product defective rate as random variables in the simulation. The study found that the manufacturing system runs at 68.01% CU and 31.99% CG. It is revealed that machinery breakdown and product defective rate have a direct relationship with CU. By reducing the product defective rate to zero defects, the manufacturing system can improve CU to 73.56%, with CG decreasing to 26.44%, while eliminating machinery breakdowns improves CU to 93.99%, with CG decreasing to 6.01%. This study helps the operations level study CU through “what-if” analysis, providing a more practical and easier way to plan for future demand using a simulation approach. Further study is recommended that includes other random variables affecting CU, to bring the simulation closer to the real-life situation for better decisions.
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu; Dua, Arti
2016-08-01
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kinetics resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes.
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu; Dua, Arti
2016-08-28
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kinetics resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kinetics resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K
2011-04-15
The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.
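All three simulators parallelized by STOCHSIMGPU (the direct method, the LDM, and the NRM) are reorganizations of the same exact sampling scheme. A minimal CPU sketch of Gillespie's direct method on a birth-death system, with illustrative rates:

```python
import math
import random

def gillespie_birth_death(k_on=2.0, k_off=0.1, t_end=100.0, seed=5):
    """Exact SSA for 0 -> X (rate k_on) and X -> 0 (rate k_off * x):
    draw an exponential waiting time from the total propensity, then
    pick which reaction fired in proportion to its propensity."""
    rng, x, t = random.Random(seed), 0, 0.0
    while True:
        a_birth = k_on
        a_death = k_off * x
        a_total = a_birth + a_death
        t += -math.log(1.0 - rng.random()) / a_total   # waiting time
        if t > t_end:
            return x
        if rng.random() * a_total < a_birth:           # which reaction?
            x += 1
        else:
            x -= 1

# the stationary mean should approach k_on / k_off = 20
mean_x = sum(gillespie_birth_death(seed=s) for s in range(100)) / 100.0
```

The LDM replaces the linear reaction search with a logarithmic one, and the NRM reuses random numbers via a priority queue of tentative firing times; both leave the sampled distribution unchanged.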
Reconstruction of the modified discrete Langevin equation from persistent time series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czechowski, Zbigniew
A discrete Langevin-type equation that can describe persistent processes is introduced. A procedure for reconstructing the equation from time series is proposed and tested on synthetic data, with short- and long-tail distributions, generated by different Langevin equations. Corrections due to finite sampling rates are derived. For an exemplary meteorological time series, an appropriate Langevin equation, which constitutes a stochastic macroscopic model of the phenomenon, is reconstructed.
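The core of such a reconstruction, estimating the drift term from conditional mean increments of the series, can be sketched on a synthetic linear-drift example. The equation and its parameters below are illustrative, not the paper's model:

```python
import random

def generate_series(n=200000, lam=0.5, eps=0.2, seed=11):
    """Simulate x_{t+1} = x_t - lam * x_t + eps * xi_t with xi ~ N(0, 1)."""
    rng, x, xs = random.Random(seed), 0.0, []
    for _ in range(n):
        x = x - lam * x + eps * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def estimate_drift(xs, x0, width=0.05):
    """Drift at x0 estimated as the conditional mean one-step increment
    E[x_{t+1} - x_t | x_t near x0]."""
    incs = [b - a for a, b in zip(xs, xs[1:]) if abs(a - x0) < width]
    return sum(incs) / len(incs)

series = generate_series()
drift_at_02 = estimate_drift(series, 0.2)   # true drift is -lam * 0.2 = -0.1
```

Repeating the estimate over a grid of x0 values (and doing the same for squared increments to get the diffusion term) recovers the full discrete Langevin equation from data.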
Approximate Dynamic Programming and Aerial Refueling
2007-06-01
by two Army Air Corps de Havilland DH-4Bs (9). While crude by modern standards, the passing of hoses between planes is effectively the same approach...incorporating stochastic data sets...Total Cost Stochastically Trained Simulations versus Deterministically Trained Simulations...incorporating stochastic data sets. To create meaningful results when testing stochastic data, the data sets are averaged so that conclusions are not
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
NASA Astrophysics Data System (ADS)
Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-10-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
NASA Astrophysics Data System (ADS)
Hyman, J. D.; Aldrich, G.; Viswanathan, H.; Makedonska, N.; Karra, S.
2016-08-01
We characterize how different fracture size-transmissivity relationships influence flow and transport simulations through sparse three-dimensional discrete fracture networks. Although it is generally accepted that there is a positive correlation between a fracture's size and its transmissivity/aperture, the functional form of that relationship remains a matter of debate. Relationships that assume perfect correlation, semicorrelation, and noncorrelation between the two have been proposed. To study the impact that adopting one of these relationships has on transport properties, we generate multiple sparse fracture networks composed of circular fractures whose radii follow a truncated power law distribution. The distribution of transmissivities is selected so that the mean transmissivity of the fracture networks is the same and the distributions of aperture and transmissivity in models that include a stochastic term are also the same. We observe that adopting a correlation between a fracture size and its transmissivity leads to earlier breakthrough times and higher effective permeability when compared to networks where no correlation is used. While fracture network geometry plays the principal role in determining where transport occurs within the network, the relationship between size and transmissivity controls the flow speed. These observations indicate DFN modelers should be aware that breakthrough times and effective permeabilities can be strongly influenced by such a relationship in addition to fracture and network statistics.
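A sketch of the network-generation ingredients named above: truncated power-law radii drawn by inverse-CDF sampling, and the three size-transmissivity relationships (perfectly correlated, semicorrelated, noncorrelated). Exponents and noise levels are illustrative assumptions:

```python
import math
import random

def sample_radius(rng, r_min=1.0, r_max=100.0, alpha=2.5):
    """Inverse-CDF draw from p(r) proportional to r^-alpha on [r_min, r_max]."""
    a = 1.0 - alpha
    u = rng.random()
    return ((1.0 - u) * r_min**a + u * r_max**a) ** (1.0 / a)

def transmissivity(rng, r, relation="semi", beta=0.5, sigma=0.5):
    """Three fracture size-transmissivity relationships:
       'perfect': T = r^beta             (deterministic in r)
       'semi'   : T = r^beta * lognormal (correlation plus a stochastic term)
       'none'   : T independent of r     (purely stochastic)"""
    if relation == "perfect":
        return r ** beta
    if relation == "semi":
        return r ** beta * math.exp(rng.gauss(0.0, sigma))
    return math.exp(rng.gauss(0.0, sigma))

rng = random.Random(2)
radii = [sample_radius(rng) for _ in range(2000)]
```

Holding the mean transmissivity fixed across the three variants, as the paper does, isolates the effect of the correlation structure on breakthrough times and effective permeability.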
NASA Astrophysics Data System (ADS)
Hyman, J.; Aldrich, G. A.; Viswanathan, H. S.; Makedonska, N.; Karra, S.
2016-12-01
We characterize how different fracture size-transmissivity relationships influence flow and transport simulations through sparse three-dimensional discrete fracture networks. Although it is generally accepted that there is a positive correlation between a fracture's size and its transmissivity/aperture, the functional form of that relationship remains a matter of debate. Relationships that assume perfect correlation, semi-correlation, and non-correlation between the two have been proposed. To study the impact that adopting one of these relationships has on transport properties, we generate multiple sparse fracture networks composed of circular fractures whose radii follow a truncated power law distribution. The distribution of transmissivities is selected so that the mean transmissivity of the fracture networks is the same and the distributions of aperture and transmissivity in models that include a stochastic term are also the same. We observe that adopting a correlation between a fracture size and its transmissivity leads to earlier breakthrough times and higher effective permeability when compared to networks where no correlation is used. While fracture network geometry plays the principal role in determining where transport occurs within the network, the relationship between size and transmissivity controls the flow speed. These observations indicate DFN modelers should be aware that breakthrough times and effective permeabilities can be strongly influenced by such a relationship in addition to fracture and network statistics.
Stochastic model of cell rearrangements in convergent extension of ascidian notochord
NASA Astrophysics Data System (ADS)
Lubkin, Sharon; Backes, Tracy; Latterman, Russell; Small, Stephen
2007-03-01
We present a discrete stochastic cell based model of convergent extension of the ascidian notochord. Our work derives from research that clarifies the coupling of invagination and convergent extension in ascidian notochord morphogenesis (Odell and Munro, 2002). We have tested the roles of cell-cell adhesion, cell-extracellular matrix adhesion, random motion, and extension of individual cells, as well as the presence or absence of various tissue types, and determined which factors are necessary and/or sufficient for convergent extension.
Structured population dynamics: continuous size and discontinuous stage structures.
Buffoni, Giuseppe; Pasquali, Sara
2007-04-01
A nonlinear stochastic model for the dynamics of a population with either a continuous size structure or a discontinuous stage structure is formulated in the Eulerian formalism. It takes into account dispersion effects due to stochastic variability of the development process of the individuals. The discrete equations of the numerical approximation are derived, and an analysis of the existence and stability of the equilibrium states is performed. An application to a copepod population is illustrated; numerical results of Eulerian and Lagrangian models are compared.
Integrated Human Behavior Modeling and Stochastic Control (IHBMSC)
2014-08-01
T after the outcome of two inspections has become available is calculated as in the Kalman filtering paradigm. First and foremost, n = 2 is adequate...output, the probability P2(T | ·, ·) that the inspected object is a T is calculated (see equations (5.7, 5.8)) where a discrete Kalman filtering ...information or value. The behavior of the Stochastic Controller can be usefully compared to a 2-stage screen or a 2-stage filter. The 1st stage of the
Stochastic Differential Games with Complexity Constrained Strategies.
1982-03-01
Stochastic Differential Game ... 2.1 A simple example ... CHAPTER 3 - PROBLEM OF STATE ESTIMATION IN TWO...similar to that used with the differential game, we would find that the optimal K has the form K = T[T* + ... (2.58). This is not a surprising answer in view...Example: Discrete-time, one-stage scalar game. Transition equation: Y = X + U - V. Payoff functional: J = E{...}, c > a > 0. Observation equations: Z = x
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
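A sketch of the heuristic reduction in question: a non-elementary Hill function used directly as an SSA production propensity, against linear decay. Parameters are illustrative; for the values below the deterministic fixed point is n* = 10, so one empirical check of the stochastic QSSA is whether the simulated mean sits near it:

```python
import math
import random

def ssa_hill(v=20.0, K=10.0, h=2.0, d=1.0, t_end=100.0, seed=0):
    """SSA with a repressive Hill production propensity
    a(n) = v * K^h / (K^h + n^h) and decay propensity d * n."""
    rng, n, t = random.Random(seed), 0, 0.0
    while True:
        a_prod = v * K**h / (K**h + float(n)**h)
        a_total = a_prod + d * n
        t += -math.log(1.0 - rng.random()) / a_total
        if t > t_end:
            return n
        n += 1 if rng.random() * a_total < a_prod else -1

# the deterministic QSS solves v*K^2/(K^2 + n^2) = n, i.e. n* = 10 here
mean_n = sum(ssa_hill(seed=s) for s in range(60)) / 60.0
```

The paper's point is that such agreement cannot be assumed; its proposed test is to check the deterministic QSSA over the range of initial conditions covered by the likely fluctuations around the quasi steady state.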
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEneaney, William M.
2004-08-15
Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem formulated here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1975-01-01
This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
Mechanisms for the target patterns formation in a stochastic bistable excitable medium
NASA Astrophysics Data System (ADS)
Verisokin, Andrey Yu.; Verveyko, Darya V.; Postnov, Dmitry E.
2018-04-01
We study the features of formation and evolution of spatiotemporal chaotic regime generated by autonomous pacemakers in excitable deterministic and stochastic bistable active media using the example of the FitzHugh - Nagumo biological neuron model under discrete medium conditions. The following possible mechanisms for the formation of autonomous pacemakers have been studied: 1) a temporal external force applied to a small region of the medium, 2) geometry of the solution region (the medium contains regions with Dirichlet or Neumann boundaries). In our work we explore the conditions for the emergence of pacemakers inducing target patterns in a stochastic bistable excitable system and propose the algorithm for their analysis.
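A minimal version of the setting described above can be sketched as a 1D discrete medium of FitzHugh-Nagumo units with noise and a locally forced pacemaker region. The parameter values, the excitable regime, and the constant forcing at one node are all illustrative assumptions, not the paper's configuration:

```python
import random

random.seed(2)

# Discrete 1D medium of FitzHugh-Nagumo units with diffusive coupling,
# additive noise, and a weak constant forcing on the leftmost node acting
# as a local pacemaker.  All parameter values are illustrative.
N, dt, steps = 50, 0.05, 2000
eps, a, D, sigma = 0.08, 1.05, 1.0, 0.02

u = [0.0] * N   # activator
v = [0.0] * N   # recovery variable

for _ in range(steps):
    un = u[:]
    for i in range(N):
        # no-flux (Neumann) boundaries via the nearest existing neighbour
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < N - 1 else u[i]
        coupling = D * (left - 2.0 * u[i] + right)
        un[i] = (u[i]
                 + dt * (u[i] - u[i] ** 3 / 3.0 - v[i] + coupling)
                 + sigma * dt ** 0.5 * random.gauss(0.0, 1.0))
        v[i] += dt * eps * (u[i] + a)
    un[0] += dt * 0.5   # forcing applied to a small region of the medium
    u = un
```

With a > 1 the uncoupled unit is excitable, and the forced node can act as an autonomous pacemaker emitting pulses into the chain.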
NASA Astrophysics Data System (ADS)
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
NASA Astrophysics Data System (ADS)
Hardebol, N. J.; Maier, C.; Nick, H.; Geiger, S.; Bertotti, G.; Boro, H.
2015-12-01
A fracture network arrangement is quantified across an isolated carbonate platform from outcrop and aerial imagery to address its impact on fluid flow. The network is described in terms of fracture density, orientation, and length distribution parameters. Of particular interest is the role of fracture cross connections and abutments on the effective permeability. Hence, the flow simulations explicitly account for network topology by adopting a Discrete-Fracture-and-Matrix description. The interior of the Latemar carbonate platform (Dolomites, Italy) is taken as an outcrop analogue for subsurface reservoirs of isolated carbonate build-ups that exhibit a fracture-dominated permeability. A novel aspect is our dual strategy of describing the fracture network through both deterministic and stochastic inputs for flow simulations. The fracture geometries are captured explicitly and form a multiscale data set by integration of interpretations from outcrops, airborne imagery, and lidar. The deterministic network descriptions form the basis for descriptive rules that are diagnostic of the complex natural fracture arrangement. The fracture networks exhibit a variable degree of multitier hierarchies, with smaller fractures abutting against larger fractures at both right and oblique angles. The influence of network topology on connectivity is quantified using Discrete-Fracture single-phase fluid flow simulations. The simulation results show that the effective permeability of the fracture and matrix ensemble can be 50 to 400 times higher than the matrix permeability of 1.0 × 10⁻¹⁴ m². The permeability enhancement is strongly controlled by the connectivity of the fracture network. Therefore, the degree of intersecting and abutting fractures should be captured accurately from outcrops to be of value as an analogue.
NASA Astrophysics Data System (ADS)
Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.
2016-12-01
State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
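The coupling of discrete Markov-chain states with per-cell continuous variables can be sketched in a few lines. The states, transition probabilities, carbon stocks, and flux rates below are invented for illustration, not the Hawaii model:

```python
import random

random.seed(3)

# Minimal state-and-transition simulation: each cell carries a discrete
# LULC state plus a continuous carbon stock updated by state-dependent
# rates.  States, probabilities, and fluxes are illustrative assumptions.
P = {"forest": {"forest": 0.98, "developed": 0.02},
     "developed": {"developed": 1.0}}
uptake = {"forest": 2.0, "developed": 0.0}   # annual carbon flux per cell

cells = [{"state": "forest", "carbon": 100.0} for _ in range(1000)]

def step(cell):
    # discrete stochastic transition of the cell state ...
    r, acc = random.random(), 0.0
    for new_state, p in P[cell["state"]].items():
        acc += p
        if r < acc:
            break
    if cell["state"] == "forest" and new_state == "developed":
        cell["carbon"] *= 0.2        # the transition releases 80% of the stock
    cell["state"] = new_state
    # ... then the continuous variable integrates its state-dependent rate
    cell["carbon"] += uptake[cell["state"]]

for _ in range(50):                  # 50 annual timesteps
    for cell in cells:
        step(cell)

developed = sum(c["state"] == "developed" for c in cells)
```

The key design point is that the continuous variable's rate of change is conditioned on the realized discrete state and transitions of each cell, exactly as the abstract describes.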
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to have such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
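For reference, plain simulated annealing with the square-root cooling schedule mentioned above looks as follows; only the schedule shape is taken from the abstract, while the test function, proposal, and parameters are illustrative assumptions (the paper's actual algorithm additionally incorporates stochastic approximation Monte Carlo):

```python
import math
import random

random.seed(4)

# Simulated annealing with a square-root cooling schedule,
# T_k = T0 / sqrt(1 + k), on a 1D multimodal (Rastrigin-type) test
# function.  Everything except the schedule shape is illustrative.
def f(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

x, fx = 4.0, f(4.0)
best, fbest = x, fx
T0 = 10.0
for k in range(20000):
    T = T0 / math.sqrt(1.0 + k)       # much faster than logarithmic cooling
    y = x + random.gauss(0.0, 0.5)    # local Gaussian proposal
    fy = f(y)
    # Metropolis acceptance at the current temperature
    if fy < fx or random.random() < math.exp(-(fy - fx) / T):
        x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
```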
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
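The baseline that the authors' fast algorithm accelerates is the direct stochastic simulation of a birth/death/mutation process, which can be sketched as follows; the two-type scheme and rate values are illustrative assumptions:

```python
import random

random.seed(5)

# Direct-method SSA for a two-type birth/death/mutation process; births of
# the wild type mutate with a small probability.  Rates are illustrative.
b, d, mu = 1.0, 0.9, 1e-3
n = [500, 0]                 # copy numbers of wild type and mutant

t, t_end = 0.0, 20.0
while t < t_end and n[0] + n[1] > 0:
    rates = [b * n[0], b * n[1], d * n[0], d * n[1]]
    total = sum(rates)
    t += random.expovariate(total)
    r = random.random() * total
    if r < rates[0]:                          # wild-type birth (may mutate)
        n[1 if random.random() < mu else 0] += 1
    elif r < rates[0] + rates[1]:             # mutant birth
        n[1] += 1
    elif r < rates[0] + rates[1] + rates[2]:  # wild-type death
        n[0] -= 1
    else:                                     # mutant death
        n[1] -= 1
```

The cost per event of this direct method grows with population size, which is exactly the regime (large populations, rare mutations) the abstract's algorithm targets.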
Multifractal analysis of time series generated by discrete Ito equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Telesca, Luciano; Czechowski, Zbigniew; Lovallo, Michele
2015-06-15
In this study, we show that discrete Ito equations with short-tail Gaussian marginal distribution function generate multifractal time series. The multifractality is due to the nonlinear correlations, which are hidden in Markov processes and are generated by the interrelation between the drift and the multiplicative stochastic forces in the Ito equation. A link between the range of the generalized Hurst exponents and the mean of the squares of all averaged net forces is suggested.
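A time series of the general kind analysed here can be generated from a discrete Ito equation with a drift and a multiplicative stochastic force; the specific coefficients below are illustrative assumptions chosen to keep the marginal distribution short-tailed, not the paper's equations:

```python
import random

random.seed(6)

# Discrete Ito equation x_{k+1} = x_k + a(x_k)*dt + b(x_k)*sqrt(dt)*xi_k
# with a restoring drift and a multiplicative stochastic force.
dt = 0.01

def a(x):
    return -x                              # drift

def b(x):
    return 0.5 * (1.0 + 0.5 * abs(x))      # multiplicative noise amplitude

x, series = 0.0, []
for _ in range(100000):
    x += a(x) * dt + b(x) * dt ** 0.5 * random.gauss(0.0, 1.0)
    series.append(x)

mean = sum(series) / len(series)
```

The interplay of the drift a(x) and the state-dependent force b(x) is what generates the hidden nonlinear correlations the abstract attributes the multifractality to.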
Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru
2010-11-30
Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well mixed stochastic simulators, and/or hybrid methods. But, in fact, three dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
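The defining ingredient of binomial tau-leaping, which the abstract's spatial algorithm builds on, is that the number of firings in a leap is drawn from a binomial rather than a Poisson distribution, so a leap cannot consume more molecules than exist. A one-reaction sketch with illustrative parameters:

```python
import math
import random

random.seed(7)

# Binomial tau-leap for first-order decay A -> 0: the number of firings in
# a leap of length tau is Binomial(n, 1 - exp(-c*tau)), so a leap can never
# consume more molecules than are present (unlike a Poisson leap).
def binomial(n, p):
    return sum(random.random() < p for _ in range(n))

c, tau = 0.1, 0.5
n, t = 1000, 0.0
while t < 30.0 and n > 0:
    k = binomial(n, 1.0 - math.exp(-c * tau))
    n -= k
    t += tau
# the mean-field prediction at t = 30 is 1000*exp(-3), roughly 50 molecules
```

The spatial algorithm of the paper applies this idea per subvolume, handling both reaction and diffusion events; the sketch above shows only the leap step itself.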
Taylor, P. R.; Baker, R. E.; Simpson, M. J.; Yates, C. A.
2016-01-01
Numerous processes across both the physical and biological sciences are driven by diffusion. Partial differential equations are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic models to accurately capture the behaviour of a system, especially when the number of diffusing particles is low. The stochastic models we consider in this paper are ‘compartment-based’: the domain is discretized into compartments, and particles can jump between these compartments. Volume-excluding effects (crowding) can be incorporated by blocking movement with some probability. Recent work has established the connection between fine- and coarse-grained models incorporating volume exclusion, but only for uniform lattices. In this paper, we consider non-uniform, hybrid lattices that incorporate both fine- and coarse-grained regions, and present two different approaches to describe the interface of the regions. We test both techniques in a range of scenarios to establish their accuracy, benchmarking against fine-grained models, and show that the hybrid models developed in this paper can be significantly faster to simulate than the fine-grained models in certain situations and are at least as fast otherwise. PMID:27383421
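A compartment-based diffusion model of the kind this paper coarse-grains can be sketched on a uniform lattice as follows (without the volume-exclusion or hybrid-interface machinery, which is the paper's contribution); all parameters are illustrative:

```python
import random

random.seed(8)

# Compartment-based stochastic diffusion on a uniform 1D lattice: K
# compartments of width h, each particle jumping to a neighbour with rate
# d = D / h**2, reflecting boundaries.  Parameters are illustrative.
K, h, D = 20, 0.1, 1e-3
d = D / h ** 2                 # per-particle jump rate to one neighbour
n = [0] * K
n[0] = 200                     # all particles start at the left edge

t, t_end = 0.0, 10.0
while True:
    rates = []
    for i in range(K):
        rates.append(d * n[i] if i < K - 1 else 0.0)   # jump right
        rates.append(d * n[i] if i > 0 else 0.0)       # jump left
    total = sum(rates)
    t += random.expovariate(total)
    if t > t_end:
        break
    r, acc = random.random() * total, 0.0
    for j, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    i, go_right = j // 2, (j % 2 == 0)
    n[i] -= 1
    n[i + 1 if go_right else i - 1] += 1
```

On a non-uniform hybrid lattice, the jump rates at the fine/coarse interface must be modified, which is precisely what the two approaches in the paper address.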
Study of selected phenotype switching strategies in time varying environment
NASA Astrophysics Data System (ADS)
Horvath, Denis; Brutovsky, Branislav
2016-03-01
Population heterogeneity plays an important role across many research problems, as well as real-world ones. Population heterogeneity relates to the ability of a population to cope with environmental change (or uncertainty), preventing its extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causes of population heterogeneity are therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while deterministic patterns become relevant as the environmental variations become less frequent. Statistical characterization of the steady-state regimes of the populations is done using the Hellinger and Kullback-Leibler functional distances and the Hamming distance.
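The basic bet-hedging setting can be sketched with two phenotypes in a randomly switching two-state environment. This is a crude caricature with invented growth rates and a fixed switching fraction, not the paper's evolvable-strategy model:

```python
import random

random.seed(14)

# Two phenotypes in a Markov-switching environment: each phenotype grows
# well in one environment, and a fraction q of each subpopulation switches
# phenotype every generation.  All numbers are illustrative assumptions.
growth = {("A", 0): 1.1, ("A", 1): 0.8,
          ("B", 0): 0.8, ("B", 1): 1.1}
q, p_env = 0.1, 0.05         # switching fraction, environment-flip probability
pop, env = {"A": 100.0, "B": 100.0}, 0

for _ in range(500):
    if random.random() < p_env:
        env = 1 - env                                   # environment change
    pop = {ph: pop[ph] * growth[(ph, env)] for ph in pop}
    flow_ab, flow_ba = q * pop["A"], q * pop["B"]       # phenotype switching
    pop["A"] += flow_ba - flow_ab
    pop["B"] += flow_ab - flow_ba
    total = pop["A"] + pop["B"]
    pop = {ph: 200.0 * pop[ph] / total for ph in pop}   # fixed population size
```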
Particle Swarm Optimization algorithms for geophysical inversion, practical hints
NASA Astrophysics Data System (ADS)
Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.
2008-12-01
PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given to select the correct algorithm and to tune the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
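The baseline global-best PSO with the inertia and local/global acceleration parameters discussed above can be sketched on the Rosenbrock function; the parameter values are a common, stable choice, not necessarily those of the paper's algorithm family:

```python
import random

random.seed(9)

# Basic global-best PSO with inertia w and local/global accelerations
# c1/c2, run on the 2D Rosenbrock function.  Parameters illustrative.
def rosenbrock(p):
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x * x) ** 2

w, c1, c2 = 0.72, 1.49, 1.49
swarm = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(30)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=rosenbrock)[:]

for _ in range(300):
    for i, p in enumerate(swarm):
        for k in range(2):
            vel[i][k] = (w * vel[i][k]
                         + c1 * random.random() * (pbest[i][k] - p[k])
                         + c2 * random.random() * (gbest[k] - p[k]))
            p[k] += vel[i][k]
        if rosenbrock(p) < rosenbrock(pbest[i]):
            pbest[i] = p[:]
            if rosenbrock(p) < rosenbrock(gbest):
                gbest = p[:]
```

The (w, c1, c2) triple determines the stability regions the abstract analyses: choices outside those regions make the velocities diverge instead of converging on gbest.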
Chen, Yuhang; Zhou, Shiwei; Li, Qing
2011-03-01
The degradation of polymeric biomaterials, which are widely exploited in tissue engineering and drug delivery systems, has drawn significant attention in recent years. This paper aims to develop a mathematical model that combines stochastic hydrolysis and mass transport to simulate the polymeric degradation and erosion process. The hydrolysis reaction is modeled in a discrete fashion by a fundamental stochastic process and an additional autocatalytic effect induced by the local carboxylic acid concentration in terms of the continuous diffusion equation. Illustrative examples of microparticles and tissue scaffolds demonstrate the applicability of the model. It is found that diffusive transport plays a critical role in determining the degradation pathway, whilst autocatalysis makes the degradation size dependent. The modeling results show good agreement with experimental data in the literature, in which the hydrolysis rate, polymer architecture and matrix size actually work together to determine the characteristics of the degradation and erosion processes of bulk-erosive polymer devices. The proposed degradation model exhibits great potential for the design optimization of drug carriers and tissue scaffolds. Copyright © 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
An agent-based stochastic Occupancy Simulator
Chen, Yixing; Hong, Tianzhen; Luo, Xuan
2017-06-01
Occupancy has significant impacts on building performance. However, in current building performance simulation programs, occupancy inputs are static and lack diversity, contributing to discrepancies between the simulated and actual building performance. This work presents an Occupancy Simulator that simulates the stochastic behavior of occupant presence and movement in buildings, capturing the spatial and temporal occupancy diversity. Each occupant and each space in the building are explicitly simulated as an agent with their profiles of stochastic behaviors. The occupancy behaviors are represented with three types of models: (1) the status transition events (e.g., first arrival in office) simulated with a probability distribution model, (2) the random moving events (e.g., from one office to another) simulated with a homogeneous Markov chain model, and (3) the meeting events simulated with a new stochastic model. A hierarchical data model was developed for the Occupancy Simulator, which reduces the amount of data input by using the concepts of occupant types and space types. Finally, a case study of a small office building is presented to demonstrate the use of the Simulator to generate detailed annual sub-hourly occupant schedules for individual spaces and the whole building. The Simulator is a web application freely available to the public and capable of performing a detailed stochastic simulation of occupant presence and movement in buildings. Future work includes enhancements in the meeting event model, consideration of personal absent days, verification and validation of the simulated occupancy results, and expansion for use with residential buildings.
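The second of the three model types, occupant movement as a homogeneous Markov chain over spaces, can be sketched as below. The space list, transition probabilities, and 10-minute tick are illustrative assumptions, not the Simulator's actual data:

```python
import random

random.seed(13)

# Occupant movement as a homogeneous Markov chain over spaces; rows of P
# are transition probabilities from each space.  Numbers illustrative.
spaces = ["own_office", "other_office", "meeting_room", "corridor"]
P = {
    "own_office":   [0.90, 0.04, 0.04, 0.02],
    "other_office": [0.50, 0.40, 0.05, 0.05],
    "meeting_room": [0.30, 0.02, 0.66, 0.02],
    "corridor":     [0.60, 0.10, 0.10, 0.20],
}

def walk(start, steps):
    """Simulate one agent's location at (hypothetical) 10-minute ticks."""
    loc, trace = start, [start]
    for _ in range(steps):
        r, acc = random.random(), 0.0
        for nxt, p in zip(spaces, P[loc]):
            acc += p
            if r < acc:
                break
        loc = nxt
        trace.append(loc)
    return trace

trace = walk("own_office", 48)   # an 8-hour workday at 10-minute resolution
occupancy = trace.count("own_office") / len(trace)
```

Running one such walk per agent yields the kind of sub-hourly, per-space stochastic schedules the case study describes.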
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
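The reformulation underlying this decomposition, driving each reaction channel by its own independent unit-rate Poisson process, can be sketched for a birth-death model using the modified next-reaction method; the rates are illustrative, and the Sobol-Hoeffding decomposition itself is not shown:

```python
import random

random.seed(10)

# Random-time-change representation of a birth-death process:
# X(t) = X(0) + N1(int b ds) - N2(int d*X ds), with N1, N2 independent
# unit-rate Poisson processes, one per reaction channel -- the
# reformulation that lets each channel's stochasticity be tracked
# separately.  Simulated with the modified next-reaction method.
b, d = 5.0, 0.1
x, t, t_end = 0, 0.0, 100.0
T = [0.0, 0.0]                                           # internal time used
P = [random.expovariate(1.0), random.expovariate(1.0)]   # next firing times

while True:
    a = [b, d * x]                     # per-channel propensities
    dt = [(P[j] - T[j]) / a[j] if a[j] > 0.0 else float("inf")
          for j in range(2)]
    j = 0 if dt[0] <= dt[1] else 1     # channel whose clock fires first
    if t + dt[j] > t_end:
        break
    t += dt[j]
    T = [T[k] + a[k] * dt[j] for k in range(2)]
    P[j] += random.expovariate(1.0)    # advance that channel's Poisson clock
    x += 1 if j == 0 else -1
```

Because each channel has its own stream of exponential variates, one can freeze one channel's randomness while resampling the other's, which is the operation the variance decomposition exploits.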
Noise in Nonlinear Dynamical Systems 3 Volume Paperback Set
NASA Astrophysics Data System (ADS)
Moss, Frank; McClintock, P. V. E.
2011-11-01
Volume 1: List of contributors; Preface; Introduction to volume one; 1. Noise-activated escape from metastable states: an historical view Rolf Landauer; 2. Some Markov methods in the theory of stochastic processes in non-linear dynamical systems R. L. Stratonovich; 3. Langevin equations with coloured noise J. M. Sancho and M. San Miguel; 4. First passage time problems for non-Markovian processes Katja Lindenberg, Bruce J. West and Jaume Masoliver; 5. The projection approach to the Fokker-Planck equation: applications to phenomenological stochastic equations with coloured noises Paolo Grigolini; 6. Methods for solving Fokker-Planck equations with applications to bistable and periodic potentials H. Risken and H. D. Vollmer; 7. Macroscopic potentials, bifurcations and noise in dissipative systems Robert Graham; 8. Transition phenomena in multidimensional systems - models of evolution W. Ebeling and L. Schimansky-Geier; 9. Coloured noise in continuous dynamical systems: a functional calculus approach Peter Hanggi; Appendix. On the statistical treatment of dynamical systems L. Pontryagin, A. Andronov and A. Vitt; Index. Volume 2: List of contributors; Preface; Introduction to volume two; 1. Stochastic processes in quantum mechanical settings Ronald F. Fox; 2. Self-diffusion in non-Markovian condensed-matter systems Toyonori Munakata; 3. Escape from the underdamped potential well M. Buttiker; 4. Effect of noise on discrete dynamical systems with multiple attractors Edgar Knobloch and Jeffrey B. Weiss; 5. Discrete dynamics perturbed by weak noise Peter Talkner and Peter Hanggi; 6. Bifurcation behaviour under modulated control parameters M. Lucke; 7. Period doubling bifurcations: what good are they? Kurt Wiesenfeld; 8. Noise-induced transitions Werner Horsthemke and Rene Lefever; 9. Mechanisms for noise-induced transitions in chemical systems Raymond Kapral and Edward Celarier; 10. State selection dynamics in symmetry-breaking transitions Dilip K. Kondepudi; 11. Noise in a ring-laser gyroscope K. Vogel, H. Risken and W. Schleich; 12. Control of noise and applications to optical systems L. A. Lugiato, G. Broggi, M. Merri and M. A. Pernigo; 13. Transition probabilities and spectral density of fluctuations of noise driven bistable systems M. I. Dykman, M. A. Krivoglaz and S. M. Soskin; Index. Volume 3: List of contributors; Preface; Introduction to volume three; 1. The effects of coloured quadratic noise on a turbulent transition in liquid He II J. T. Tough; 2. Electrohydrodynamic instability of nematic liquid crystals: growth process and influence of noise S. Kai; 3. Suppression of electrohydrodynamic instabilities by external noise Helmut R. Brand; 4. Coloured noise in dye laser fluctuations R. Roy, A. W. Yu and S. Zhu; 5. Noisy dynamics in optically bistable systems E. Arimondo, D. Hennequin and P. Glorieux; 6. Use of an electronic model as a guideline in experiments on transient optical bistability W. Lange; 7. Computer experiments in nonlinear stochastic physics Riccardo Mannella; 8. Analogue simulations of stochastic processes by means of minimum component electronic devices Leone Fronzoni; 9. Analogue techniques for the study of problems in stochastic nonlinear dynamics P. V. E. McClintock and Frank Moss; Index.
NASA Astrophysics Data System (ADS)
Shofa, M. J.; Moeis, A. O.; Restiana, N.
2018-04-01
MRP is a production planning system suited to a deterministic environment. However, most elements of real production systems, such as customer demand, are stochastic, so MRP is often inappropriate. Demand-Driven MRP (DDMRP) is a new approach to production planning that copes with demand uncertainty. The objective of this paper is to compare MRP and DDMRP for a purchased part under long lead time and uncertain demand in terms of average inventory levels. The evaluation is conducted through a discrete event simulation with long lead time and uncertain demand scenarios. The next step is evaluating the performance of DDMRP by comparing its inventory level with that of MRP. As a result, DDMRP is a more effective production planning approach than MRP in terms of average inventory levels.
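The flavour of such a discrete event comparison can be sketched as below. This is a heavily simplified caricature with invented numbers, not the paper's simulation model: "mrp" orders a fixed forecast quantity every day, while "buffer" (DDMRP-like) reorders only when the net-flow position drops below a threshold:

```python
import random

# Toy comparison of two replenishment policies for a purchased part under
# stochastic demand and a long lead time.  All quantities are illustrative
# assumptions, not the paper's parameters.
LEAD, DAYS = 10, 365

def simulate(policy):
    on_hand, pipeline = 200, []            # pipeline: (arrival_day, qty)
    cumulative = 0
    for day in range(DAYS):
        on_hand += sum(q for (a, q) in pipeline if a == day)
        pipeline = [(a, q) for (a, q) in pipeline if a > day]
        demand = random.randint(0, 20)     # uncertain daily demand
        on_hand = max(0, on_hand - demand) # unmet demand is lost
        net_flow = on_hand + sum(q for (_, q) in pipeline)
        if policy == "mrp":
            pipeline.append((day + LEAD, 10))   # fixed forecast-driven order
        elif net_flow < 150:
            pipeline.append((day + LEAD, 150))  # buffer replenishment
        cumulative += on_hand
    return cumulative / DAYS               # average inventory level

random.seed(11)
mrp_avg = simulate("mrp")
random.seed(11)                            # common random numbers for fairness
ddmrp_avg = simulate("buffer")
```

Reusing the same random seed for both policies (common random numbers) makes the comparison of average inventory levels less noisy.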
NASA Technical Reports Server (NTRS)
Fridlind, Ann; Seifert, Axel; Ackerman, Andrew; Jensen, Eric
2004-01-01
Numerical models that resolve cloud particles into discrete mass size distributions on an Eulerian grid provide a uniquely powerful means of studying the closely coupled interaction of aerosols, cloud microphysics, and transport that determines cloud properties and evolution. However, such models require many experimentally derived parameterizations in order to properly represent the complex interactions of droplets within turbulent flow. Many of these parameterizations remain poorly quantified, and the numerical methods for solving the equations for the temporal evolution of the mass size distribution can also vary considerably in terms of efficiency and accuracy. In this work, we compare results from two size-resolved microphysics models that employ various widely used parameterizations and numerical solution methods for several aspects of stochastic collection.
Mimicking Nonequilibrium Steady States with Time-Periodic Driving
NASA Astrophysics Data System (ADS)
Raz, O.; Subaşı, Y.; Jarzynski, C.
2016-04-01
Under static conditions, a system satisfying detailed balance generically relaxes to an equilibrium state in which there are no currents. To generate persistent currents, either detailed balance must be broken or the system must be driven in a time-dependent manner. A stationary system that violates detailed balance evolves to a nonequilibrium steady state (NESS) characterized by fixed currents. Conversely, a system that satisfies instantaneous detailed balance but is driven by the time-periodic variation of external parameters—also known as a stochastic pump (SP)—reaches a periodic state with nonvanishing currents. In both cases, these currents are maintained at the cost of entropy production. Are these two paradigmatic scenarios effectively equivalent? For discrete-state systems, we establish a mapping between nonequilibrium stationary states and stochastic pumps. Given a NESS characterized by a particular set of stationary probabilities, currents, and entropy production rates, we show how to construct a SP with exactly the same (time-averaged) values. The mapping works in the opposite direction as well. These results establish a proof of principle: They show that stochastic pumps are able to mimic the behavior of nonequilibrium steady states, and vice versa, within the theoretical framework of discrete-state stochastic thermodynamics. Nonequilibrium steady states and stochastic pumps are often used to model, respectively, biomolecular motors driven by chemical reactions and artificial molecular machines steered by the variation of external, macroscopic parameters. Our results loosely suggest that anything a biomolecular machine can do, an artificial molecular machine can do equally well. We illustrate this principle by showing that kinetic proofreading, a NESS mechanism that explains the low error rates in biochemical reactions, can be effectively mimicked by a constrained periodic driving.
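The NESS side of this mapping can be made concrete with a three-state Markov ring whose rates break detailed balance; integrating the master equation to stationarity and reading off an edge current gives the (probabilities, currents) data the construction starts from. The rate values are illustrative:

```python
# Stationary current of a three-state Markov ring that breaks detailed
# balance.  k[i][j] is the transition rate from state i to state j;
# clockwise rates (2.0) exceed counter-clockwise ones (1.0), so a
# persistent current survives in the stationary state.
k = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
p, dt = [1.0, 0.0, 0.0], 0.01
for _ in range(20000):                     # Euler-integrate the master equation
    dp = [sum(p[j] * k[j][i] for j in range(3)) - p[i] * sum(k[i])
          for i in range(3)]
    p = [p[i] + dt * dp[i] for i in range(3)]

J = p[0] * k[0][1] - p[1] * k[1][0]        # net stationary current on edge 0 -> 1
```

By the cyclic symmetry of these rates the stationary distribution is uniform and J tends to 1/3; a detailed-balanced ring (equal rates both ways) would give J = 0, and the paper's mapping constructs a periodically driven system reproducing the same time-averaged J.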
Algorithm refinement for stochastic partial differential equations: II. Correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2005-08-10
We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long-ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
Learning in stochastic neural networks for constraint satisfaction problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Adorf, Hans-Martin
1989-01-01
Researchers describe a newly-developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to find satisfactory solutions when they exist. The network can run efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that network connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. They have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network performance.
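The GDS network itself couples guard networks to a discrete Hopfield network; as a much simpler stand-in for that class of stochastic constraint-repair methods, the sketch below solves N-queens by min-conflicts local search with random restarts. It illustrates the same idea of local stochastic moves that reduce constraint violations, but it is not the GDS algorithm:

```python
import random

def solve_n_queens(n, restarts=50, seed=0):
    """Stochastic local search for N-queens (one queen per column).

    A deliberately simplified stand-in for the GDS network: like the
    network, it makes local stochastic moves that reduce the number of
    violated constraints, restarting when it stalls."""
    rng = random.Random(seed)

    def conflicts(rows, col, row):
        # Count queens attacking position (col, row) along rows and diagonals.
        return sum(1 for c in range(n)
                   if c != col and (rows[c] == row or
                                    abs(rows[c] - row) == abs(c - col)))

    for _ in range(restarts):
        rows = [rng.randrange(n) for _ in range(n)]
        for _ in range(200 * n):
            bad = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
            if not bad:
                return rows
            col = rng.choice(bad)
            # Move the chosen queen to a minimally conflicted row
            # (ties broken at random, allowing sideways moves).
            scores = [conflicts(rows, col, r) for r in range(n)]
            best = min(scores)
            rows[col] = rng.choice([r for r in range(n) if scores[r] == best])
    return None

solution = solve_n_queens(24)
```

The abstract reports the full GDS network scaling to N-queens with N as large as 1024; this toy version is only meant to show the repair-style search loop.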
Temporal Coherence: A Model for Non-Stationarity in Natural and Simulated Wind Records
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rinker, Jennifer M.; Gavin, Henri P.; Clifton, Andrew
We present a novel methodology for characterizing and simulating non-stationary stochastic wind records. In this new method, non-stationarity is characterized and modelled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components. Temporal coherence can also be used to quantify non-stationary characteristics in wind data. Three case studies are presented that analyze the non-stationarity of turbulent wind data obtained at the National Wind Technology Center near Boulder, Colorado, USA. The first study compares the temporal and spectral characteristics of a stationary wind record and a non-stationary wind record in order to highlight their differences in temporal coherence. The second study examines the distribution of one of the proposed temporal coherence parameters and uses it to quantify the prevalence of non-stationarity in the dataset. The third study examines how temporal coherence varies with a range of atmospheric parameters to determine what conditions produce more non-stationarity.
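The core mechanism, drawing phase differences between adjacent Fourier components from a probability distribution and inverting the FFT, can be sketched as follows. The von Mises distribution and the turbulence-like magnitude spectrum are my own illustrative assumptions, not necessarily the distributions used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
freqs = np.fft.rfftfreq(n, d=1.0)[1:]      # positive frequencies only
amps = freqs ** (-5.0 / 6.0)               # illustrative turbulence-like magnitudes

def synthesize(kappa):
    """Draw adjacent-component phase differences from a von Mises distribution
    with concentration kappa, accumulate them into phases, and invert the FFT.
    kappa = 0 gives independent (uniform) phases, i.e. a stationary-like record;
    larger kappa clusters the phase differences, which is the signature of
    temporal coherence / non-stationarity described above."""
    dphi = rng.vonmises(0.0, kappa, size=len(freqs))
    phases = np.cumsum(dphi)
    spectrum = np.concatenate([[0.0], amps * np.exp(1j * phases)])
    return np.fft.irfft(spectrum, n=n)

stationary_like = synthesize(0.0)   # uniform phase differences
coherent = synthesize(20.0)         # tightly clustered phase differences
```

Both records share the same magnitude spectrum; only the phase-difference statistics differ, which is exactly the property the proposed temporal coherence parameters quantify.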
A Control Simulation Method of High-Speed Trains on Railway Network with Irregular Influence
NASA Astrophysics Data System (ADS)
Yang, Li-Xing; Li, Xiang; Li, Ke-Ping
2011-09-01
Based on the discrete-time method, an effective movement control model is designed for a group of high-speed trains on a rail network. The purpose of the model is to investigate the specific traffic characteristics of high-speed trains under the interruption of stochastic irregular events. In the model, the high-speed rail traffic system is supposed to be equipped with the moving-block signalling system to guarantee maximum traversing capacity of the railway. To keep trains' movements safe, some operational strategies are proposed to control the movements of trains in the model, including traction operation, braking operation, and entering-station operation. The numerical simulations show that the designed model can describe the movements of high-speed trains on the rail network well. The research results can provide useful information not only for investigating the propagation features of relevant delays under irregular disturbances but also for rerouting and rescheduling trains on the rail network.
Dombrowski, Kirk; Khan, Bilal; Habecker, Patrick; Hagan, Holly; Friedman, Samuel R; Saad, Mohamed
2017-04-01
This article explores how social network dynamics may have reduced the spread of HIV-1 infection among people who inject drugs during the early years of the epidemic. Stochastic, discrete event, agent-based simulations are used to test whether a "firewall effect" can arise out of self-organizing processes at the actor level, and whether such an effect can account for stable HIV prevalence rates below population saturation. Repeated simulation experiments show that, in the presence of recurring, acute, and highly infectious outbreaks, micro-network structures combine with the HIV virus's natural history to reduce the spread of the disease. These results indicate that network factors likely played a significant role in the prevention of HIV infection within injection risk networks during periods of peak prevalence. They also suggest that social forces that disturb network connections may diminish the natural firewall effect and result in higher rates of HIV.
Dombrowski, Kirk; Khan, Bilal; Habecker, Patrick; Hagan, Holly; Friedman, Samuel R.; Saad, Mohamed
2016-01-01
This article explores how social network dynamics may have reduced the spread of HIV-1 infection among people who inject drugs during the early years of the epidemic. Stochastic, discrete event, agent-based simulations are used to test whether a “firewall effect” can arise out of self-organizing processes at the actor level, and whether such an effect can account for stable HIV prevalence rates below population saturation. Repeated simulation experiments show that, in the presence of recurring, acute, and highly infectious outbreaks, micro-network structures combine with the HIV virus’s natural history to reduce the spread of the disease. These results indicate that network factors likely played a significant role in the prevention of HIV infection within injection risk networks during periods of peak prevalence. They also suggest that social forces that disturb network connections may diminish the natural firewall effect and result in higher rates of HIV.
Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models.
Drugowitsch, Jan
2016-02-11
We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead exploits known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation.
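The slow, biased baseline the abstract refers to, discrete-time Euler simulation of the diffusion to a boundary, is easy to sketch. All parameter values below are arbitrary illustrations; the paper's own method replaces this loop with exact sampling from first-passage time densities:

```python
import random

def euler_first_passage(drift, sigma, bound, dt, seed):
    """Naive discrete-time simulation of a Wiener diffusion to symmetric
    bounds at +/-bound. Crossings that occur inside a time step are missed,
    so reaction times are increasingly overestimated as dt grows; this is
    the bias the exact first-passage sampler avoids."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    step_sd = sigma * dt ** 0.5
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return t, (1 if x >= bound else -1)

# Simulate 200 trials of a two-choice decision (drift 0.5, bound 1).
rts = [euler_first_passage(0.5, 1.0, 1.0, 0.001, s)[0] for s in range(200)]
mean_rt = sum(rts) / len(rts)
```

For these parameters the analytic mean first-passage time is (a/mu)·tanh(a·mu/sigma^2) ≈ 0.92, so the simulated mean should land in that neighborhood, at the cost of roughly a thousand random draws per sample.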
Diffusive transport in the presence of stochastically gated absorption
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Karamched, Bhargav R.; Lawley, Sean D.; Levien, Ethan
2017-08-01
We analyze a population of Brownian particles moving in a spatially uniform environment with stochastically gated absorption. The state of the environment at time t is represented by a discrete stochastic variable k(t) ∈ {0, 1} such that the rate of absorption is γ[1 − k(t)], with γ a positive constant. The variable k(t) evolves according to a two-state Markov chain. We focus on how stochastic gating affects the attenuation of particle absorption with distance from a localized source in a one-dimensional domain. In the static case (no gating), the steady-state attenuation is given by an exponential with length constant √(D/γ), where D is the diffusivity. We show that gating leads to slower, nonexponential attenuation. We also explore statistical correlations between particles due to the fact that they all diffuse in the same switching environment. Such correlations can be determined in terms of moments of the solution to a corresponding stochastic Fokker-Planck equation.
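The key ingredient, a single two-state gate shared by all particles, can be illustrated with a small Monte Carlo sketch. This is a simplification of the paper's model: spatial diffusion is omitted and only the absorption statistics of the shared switching environment are shown; all parameter values are arbitrary:

```python
import random

rng = random.Random(7)
gamma, dt, T = 1.0, 0.01, 2.0
switch_rate = 1.0        # assumed symmetric 0 <-> 1 gate switching rate
n_particles = 2000

def surviving_fraction(gated):
    """Fraction of particles still present at time T.

    gated=False: constant absorption at the time-averaged rate gamma/2.
    gated=True : rate gamma*(1 - k(t)) with a single two-state gate k(t)
    shared by all particles, which is what correlates their fates."""
    k = rng.choice([0, 1])
    survivors = n_particles
    for _ in range(int(T / dt)):
        if gated and rng.random() < switch_rate * dt:
            k = 1 - k                      # gate flips
        rate = gamma * (1 - k) if gated else 0.5 * gamma
        # Thin the surviving population by the absorption probability per step.
        survivors -= sum(1 for _ in range(survivors) if rng.random() < rate * dt)
    return survivors / n_particles

s_static = surviving_fraction(False)   # close to exp(-gamma*T/2)
s_gated = surviving_fraction(True)     # varies with the gate realization
```

Averaged over gate realizations, gated survival exceeds the static value with the same mean rate (Jensen's inequality), which is the intuition behind the slower, nonexponential attenuation reported above.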
Numerical methods for stochastic differential equations
NASA Astrophysics Data System (ADS)
Kloeden, Peter; Platen, Eckhard
1991-06-01
The numerical analysis of stochastic differential equations differs significantly from that of ordinary differential equations due to the peculiarities of stochastic calculus. This book provides an introduction to stochastic calculus and stochastic differential equations, in both theory and applications. The main emphasis is placed on the numerical methods needed to solve such equations. It assumes an undergraduate background in mathematical methods typical of engineers and physicists, though many chapters begin with a descriptive summary which may be accessible to others who only require numerical recipes. To help the reader develop an intuitive understanding of the underlying mathematics and hands-on numerical skills, exercises and over 100 PC exercises (PC: personal computer) are included. The stochastic Taylor expansion provides the key tool for the systematic derivation and investigation of discrete-time numerical methods for stochastic differential equations. The book presents many new results on higher-order methods for strong sample path approximations and for weak functional approximations, including implicit, predictor-corrector, extrapolation and variance-reduction methods. Besides serving as a basic text on such methods, the book offers the reader ready access to a large number of potential research problems in a field that is just beginning to expand rapidly and is widely applicable.
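The simplest of the discrete-time methods the book derives from the stochastic Taylor expansion is the Euler-Maruyama scheme. As a minimal sketch, the following applies it to geometric Brownian motion, where an exact solution driven by the same Brownian increments is available for a strong (pathwise) comparison; parameter values are arbitrary:

```python
import math
import random

def em_gbm(mu, sigma, x0, T, n_steps, rng):
    """Euler-Maruyama for dX = mu*X dt + sigma*X dW, alongside the exact
    solution driven by the same Brownian increments (strong comparison)."""
    dt = T / n_steps
    x_em, w = x0, 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x_em += mu * x_em * dt + sigma * x_em * dw   # one Euler-Maruyama step
        w += dw                                      # accumulate the Brownian path
    x_exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * w)
    return x_em, x_exact

rng = random.Random(42)
errs = [abs(a - b)
        for a, b in (em_gbm(0.05, 0.2, 1.0, 1.0, 1000, rng) for _ in range(200))]
mean_err = sum(errs) / len(errs)
```

Euler-Maruyama has strong order 0.5, so the mean pathwise error shrinks like the square root of the step size; the higher-order and implicit methods treated in the book improve on exactly this rate.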
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. 
Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
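The core algorithm behind GillesPy's StochKit2 backends is the Gillespie direct-method SSA. The sketch below is a minimal plain-Python version of that algorithm for a birth-death process, not the GillesPy API itself (model names and rates are illustrative):

```python
import random

def ssa_birth_death(k_prod, k_deg, x0, t_end, seed):
    """Minimal Gillespie direct-method SSA for the reactions
    0 -> X (propensity k_prod) and X -> 0 (propensity k_deg * x).
    Packages like GillesPy/StochKit2 implement this kind of exact
    stochastic simulation far more efficiently and generally."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_prod, k_deg * x      # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential time to the next reaction
        if t >= t_end:
            break
        x += 1 if rng.random() * a0 < a1 else -1   # pick which reaction fires
    return x

# Stationary distribution is Poisson with mean k_prod / k_deg = 10.
samples = [ssa_birth_death(10.0, 1.0, 0, 20.0, s) for s in range(300)]
mean_x = sum(samples) / len(samples)
```

In GillesPy one would instead declare species, parameters, and reactions on a model object and delegate the simulation loop to the compiled StochKit2 solvers.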
GillesPy: A Python Package for Stochastic Model Building and Simulation
Abel, John H.; Drawert, Brian; Hellander, Andreas; Petzold, Linda R.
2017-01-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2017-04-01
In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. 
http://dx.doi.org/10.1175/JCLI-D-15-0746.1
Stochastic effects in a seasonally forced epidemic model
NASA Astrophysics Data System (ADS)
Rozhnova, G.; Nunes, A.
2010-10-01
The interplay of seasonality, the system’s nonlinearities, and intrinsic stochasticity is studied for a seasonally forced susceptible-exposed-infective-recovered stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects, such as switching between coexisting attractors induced by stochasticity, often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases, are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient to understand the observed incidence patterns.
Stochastic modelling of microstructure formation in solidification processes
NASA Astrophysics Data System (ADS)
Nastac, Laurentiu; Stefanescu, Doru M.
1997-07-01
To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'
Stochastic and deterministic models for agricultural production networks.
Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D
2007-07-01
An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
Wu, Sheng; Li, Hong; Petzold, Linda R.
2015-01-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.
Coupling all-atom molecular dynamics simulations of ions in water with Brownian dynamics.
Erban, Radek
2016-02-01
Molecular dynamics (MD) simulations of ions (K+, Na+, Ca2+ and Cl-) in aqueous solutions are investigated. Water is described using the SPC/E model. A stochastic coarse-grained description for ion behaviour is presented and parametrized using MD simulations. It is given as a system of coupled stochastic and ordinary differential equations, describing the ion position, velocity and acceleration. The stochastic coarse-grained model provides an intermediate description between all-atom MD simulations and Brownian dynamics (BD) models. It is used to develop a multiscale method which uses all-atom MD simulations in parts of the computational domain and (less detailed) BD simulations in the remainder of the domain.
Stochastic hybrid delay population dynamics: well-posed models and extinction.
Yuan, Chenggui; Mao, Xuerong; Lygeros, John
2009-01-01
Nonlinear differential equations have been used for decades for studying fluctuations in the populations of species, interactions of species with the environment, and competition and symbiosis between species. Over the years, the original non-linear models have been embellished with delay terms, stochastic terms and more recently discrete dynamics. In this paper, we investigate stochastic hybrid delay population dynamics (SHDPD), a very general class of population dynamics that comprises all of these phenomena. For this class of systems, we provide sufficient conditions to ensure that SHDPD have global positive, ultimately bounded solutions, a minimum requirement for a realistic, well-posed model. We then study the question of extinction and establish conditions under which an ecosystem modelled by SHDPD is doomed.
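The class studied above combines delay, stochastic, and hybrid terms; as a hedged, much narrower illustration, the sketch below integrates one concrete member, a stochastic delay logistic equation, by Euler-Maruyama with a constant initial history. The equation form, parameter values, and the crude positivity guard are my own choices, not the paper's general SHDPD formulation:

```python
import math
import random

def sddle(r, K, tau, sigma, x0, T, dt, seed):
    """Euler-Maruyama for the stochastic delay logistic equation
    dX = r*X(t)*(1 - X(t - tau)/K) dt + sigma*X(t) dW,
    with constant history X(s) = x0 for s in [-tau, 0]."""
    rng = random.Random(seed)
    lag = int(round(tau / dt))
    hist = [x0] * (lag + 1)            # stores the path, including the history
    for _ in range(int(round(T / dt))):
        x, x_lag = hist[-1], hist[-1 - lag]
        dw = rng.gauss(0.0, math.sqrt(dt))
        x_new = x + r * x * (1.0 - x_lag / K) * dt + sigma * x * dw
        hist.append(max(x_new, 0.0))   # crude positivity guard for the sketch
    return hist

path = sddle(r=1.0, K=100.0, tau=0.5, sigma=0.05, x0=10.0, T=20.0, dt=0.01, seed=3)
final = path[-1]
```

With r*tau below the deterministic stability threshold the path settles near the carrying capacity K and fluctuates around it; the paper's sufficient conditions guarantee exactly this kind of positive, ultimately bounded behaviour for the much more general hybrid class.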
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
Stability of continuous-time quantum filters with measurement imperfections
NASA Astrophysics Data System (ADS)
Amini, H.; Pellegrini, C.; Rouchon, P.
2014-07-01
The fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter is shown to always be a submartingale. The observed system is assumed to be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes, which takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar stability result already established for discrete-time quantum systems where the measurement imperfections are modelled by a left stochastic matrix.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (NOTE: a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors.
Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA-specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
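The idea behind Discrete Stochastic Arithmetic, run the computation several times under randomly perturbed rounding and estimate how many significant digits the runs share, can be illustrated with a toy emulation. This is not CADNA's implementation (CADNA changes the hardware rounding mode; here each operation is simply jittered at single-precision rounding scale):

```python
import math
import random

def shaky(x, rng, eps=2.0 ** -24):
    """Randomly perturb a value at single-precision rounding scale, a toy
    model of CADNA's random rounding mode."""
    return x * (1.0 + rng.uniform(-eps, eps))

def unstable_sum(rng):
    # Catastrophic cancellation: (1e8 + 1) - 1e8 with noisy arithmetic.
    a = shaky(1e8, rng)
    b = shaky(a + 1.0, rng)
    return shaky(b - a, rng)

rng = random.Random(0)
results = [unstable_sum(rng) for _ in range(10)]
mean = sum(results) / len(results)
spread = max(results) - min(results)
# Rough estimate of the number of exact significant digits, in the spirit of
# what CADNA reports (log ratio of magnitude to run-to-run scatter):
digits = math.log10(abs(mean) / spread) if spread else 15.0
```

The exact answer is 1, but the perturbed runs scatter over several units, so the estimated digit count is near zero: the instability is detected, which is exactly what the random rounding mode is for.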
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
NASA Astrophysics Data System (ADS)
Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin
1998-08-01
The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t) w_i(t) + a w̄(t) - b w_i(t) w̄(t) is studied by computer simulations. The variables w_i, i = 1, ..., N, are the individual system components and w̄(t) = (1/N) ∑_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(-1-α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
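The map is straightforward to iterate directly. In this sketch λ is drawn independently for each component from a uniform distribution on [0.9, 1.1], and a = b = 1e-4; these distributional choices, like the system size and horizon, are illustrative assumptions rather than the parameters used in the paper.

```python
import random

def discrete_logistic(N=1000, steps=500, a=1e-4, b=1e-4, seed=1):
    # Iterate w_i(t+1) = lam * w_i(t) + a*wbar(t) - b*w_i(t)*wbar(t),
    # drawing an independent multiplicative factor lam for each component.
    rng = random.Random(seed)
    w = [1.0] * N
    for _ in range(steps):
        wbar = sum(w) / N
        w = [rng.uniform(0.9, 1.1) * wi + a * wbar - b * wi * wbar for wi in w]
    return w

w = sorted(discrete_logistic())
median, largest = w[len(w) // 2], w[-1]
print(largest / median)  # repeated multiplication yields a broad, skewed tail
```

Even after a few hundred steps the largest component dwarfs the median, the qualitative signature of the power-law distribution the abstract describes.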
Ackerman, David M; Wang, Jing; Wendel, Joseph H; Liu, Da-Jiang; Pruski, Marek; Evans, James W
2011-03-21
We analyze the spatiotemporal behavior of species concentrations in a diffusion-mediated conversion reaction which occurs at catalytic sites within linear pores of nanometer diameter. Diffusion within the pores is subject to a strict single-file (no passing) constraint. Both transient and steady-state behavior is precisely characterized by kinetic Monte Carlo simulations of a spatially discrete lattice-gas model for this reaction-diffusion process considering various distributions of catalytic sites. Exact hierarchical master equations can also be developed for this model. Their analysis, after application of mean-field type truncation approximations, produces discrete reaction-diffusion type equations (mf-RDE). For slowly varying concentrations, we further develop coarse-grained continuum hydrodynamic reaction-diffusion equations (h-RDE) incorporating a precise treatment of single-file diffusion in this multispecies system. The h-RDE successfully describe nontrivial aspects of transient behavior, in contrast to the mf-RDE, and also correctly capture unreactive steady-state behavior in the pore interior. However, steady-state reactivity, which is localized near the pore ends when those regions are catalytic, is controlled by fluctuations not incorporated into the hydrodynamic treatment. The mf-RDE partly capture these fluctuation effects, but cannot describe scaling behavior of the reactivity.
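A toy lattice-gas version of such a single-file conversion process can be simulated with a few rules. Everything below (the random-sequential updating used in place of a rejection-free KMC, the rates, the catalyst placement, and a generic A -> B conversion) is an illustrative assumption rather than the authors' model.

```python
import random

def single_file_kmc(L=50, steps=20000, seed=2):
    # 1D pore: sites hold None, 'A' (reactant) or 'B' (product).  Particles
    # enter at the pore ends, hop under an exclusion (no-passing) constraint,
    # and convert A -> B when sitting on a catalytic site.
    rng = random.Random(seed)
    catalytic = set(range(0, L, 5))        # every 5th site is catalytic
    pore = [None] * L
    for _ in range(steps):
        i = rng.randrange(-1, L)           # -1 triggers an insertion attempt
        if i < 0:
            end = rng.choice((0, L - 1))
            if pore[end] is None:
                pore[end] = 'A'            # adsorption of reactant at an end
        elif pore[i] is None:
            continue
        elif pore[i] == 'A' and i in catalytic and rng.random() < 0.1:
            pore[i] = 'B'                  # catalytic conversion
        else:
            j = i + rng.choice((-1, 1))    # attempted hop
            if j < 0 or j >= L:
                pore[i] = None             # desorption at the pore ends
            elif pore[j] is None:          # exclusion: only hop into a vacancy
                pore[i], pore[j] = None, pore[i]
    return pore

pore = single_file_kmc()
print(pore.count('A'), pore.count('B'))
```

Because particles cannot pass one another, products formed deep inside the pore escape slowly, which is the fluctuation-dominated behavior the hydrodynamic treatment misses.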
Xu, Jason; Guttorp, Peter; Kato-Maeda, Midori; Minin, Vladimir N
2015-12-01
Continuous-time birth-death-shift (BDS) processes are frequently used in stochastic modeling, with many applications in ecology and epidemiology. In particular, such processes can model evolutionary dynamics of transposable elements-important genetic markers in molecular epidemiology. Estimation of the effects of individual covariates on the birth, death, and shift rates of the process can be accomplished by analyzing patient data, but inferring these rates in a discretely and unevenly observed setting presents computational challenges. We propose a multi-type branching process approximation to BDS processes and develop a corresponding expectation maximization algorithm, where we use spectral techniques to reduce calculation of expected sufficient statistics to low-dimensional integration. These techniques yield an efficient and robust optimization routine for inferring the rates of the BDS process, and apply broadly to multi-type branching processes whose rates can depend on many covariates. After rigorously testing our methodology in simulation studies, we apply our method to study intrapatient time evolution of IS6110 transposable element, a genetic marker frequently used during estimation of epidemiological clusters of Mycobacterium tuberculosis infections. © 2015, The International Biometric Society.
The stochastic system approach for estimating dynamic treatments effect.
Commenges, Daniel; Gégout-Petit, Anne
2015-10-01
The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of a HAART on CD4 counts, where attribution of the treatment may depend on the observed marker values. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference however needs assumptions that we detail in this paper and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system and the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and reliable results.
Variance decomposition in stochastic simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
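The reformulation can be illustrated with a toy discrete-time birth-death chain in which each reaction channel draws from its own independent randomness stream; freezing one stream and averaging over the other then yields a pick-freeze style estimate of that channel's contribution to the output variance. Bernoulli steps stand in here for the standardized Poisson processes of the paper, and all rates are illustrative.

```python
import random
import statistics

def birth_death(seed_birth, seed_death, x0=20, steps=50, pb=0.3, pd=0.3):
    # Discrete-time birth-death chain in which each reaction channel is
    # driven by its *own* independent randomness stream (the key
    # reformulation; Bernoulli steps stand in for Poisson processes).
    rb, rd = random.Random(seed_birth), random.Random(seed_death)
    x = x0
    for _ in range(steps):
        birth = rb.random() < pb
        death = rd.random() < pd and x > 0
        x += int(birth) - int(death)
    return x

def first_order_variance(channel, n=40):
    # Pick-freeze style estimate: freeze one channel's stream and average
    # over the other; the variance of those conditional means is the
    # frozen channel's first-order contribution to the output variance.
    cond_means = []
    for s in range(n):
        if channel == 'birth':
            vals = [birth_death(s, 1000 + t) for t in range(n)]
        else:
            vals = [birth_death(1000 + t, s) for t in range(n)]
        cond_means.append(statistics.mean(vals))
    return statistics.variance(cond_means)

vb, vd = first_order_variance('birth'), first_order_variance('death')
print(vb, vd)  # comparable contributions for the symmetric rates used here
```

The gap between the sum of the first-order contributions and the total variance would measure the channel interactions the abstract mentions.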
Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes
NASA Technical Reports Server (NTRS)
Williams, Colin P.
1999-01-01
Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.
USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation
2016-09-01
Approved for public release; distribution is unlimited. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation, by Timothy A. Curling. The construct combines optimization and discrete-event simulation, and can potentially provide an effective means of improving order management decisions.
ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION
2016-03-24
Analysis of Inpatient Hospital Staff Mental Workload by Means of Discrete-Event Simulation (AFIT-ENV-MS-16-M-166), by Erich W. Approved for public release; distribution unlimited.
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
Qubit models of weak continuous measurements: markovian conditional and open-system dynamics
NASA Astrophysics Data System (ADS)
Gross, Jonathan A.; Caves, Carlton M.; Milburn, Gerard J.; Combes, Joshua
2018-04-01
In this paper we approach the theory of continuous measurements and the associated unconditional and conditional (stochastic) master equations from the perspective of quantum information and quantum computing. We do so by showing how the continuous-time evolution of these master equations arises from discretizing in time the interaction between a system and a probe field and by formulating quantum-circuit diagrams for the discretized evolution. We then reformulate this interaction by replacing the probe field with a bath of qubits, one for each discretized time segment, reproducing all of the standard quantum-optical master equations. This provides an economical formulation of the theory, highlighting its fundamental underlying assumptions.
A Simulation of Alternatives for Wholesale Inventory Replenishment
2016-03-01
The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete-event simulation model, is designed to find the fill rates achieved for each National Item. Keywords: simulation; event graphs; reorder point; fill-rate; backorder; discrete event simulation; wholesale inventory optimization model.
Hyman, Jeffrey De'Haven; Aldrich, Garrett Allen; Viswanathan, Hari S.; ...
2016-08-01
We characterize how different fracture size-transmissivity relationships influence flow and transport simulations through sparse three-dimensional discrete fracture networks. Although it is generally accepted that there is a positive correlation between a fracture's size and its transmissivity/aperture, the functional form of that relationship remains a matter of debate. Relationships that assume perfect correlation, semicorrelation, and noncorrelation between the two have been proposed. To study the impact that adopting one of these relationships has on transport properties, we generate multiple sparse fracture networks composed of circular fractures whose radii follow a truncated power law distribution. The distribution of transmissivities is selected so that the mean transmissivity of the fracture networks is the same and the distributions of aperture and transmissivity in models that include a stochastic term are also the same. We observe that adopting a correlation between a fracture size and its transmissivity leads to earlier breakthrough times and higher effective permeability when compared to networks where no correlation is used. While fracture network geometry plays the principal role in determining where transport occurs within the network, the relationship between size and transmissivity controls the flow speed. Lastly, these observations indicate DFN modelers should be aware that breakthrough times and effective permeabilities can be strongly influenced by such a relationship in addition to fracture and network statistics.
Distributed MPC based consensus for single-integrator multi-agent systems.
Cheng, Zhaomeng; Fan, Ming-Can; Zhang, Hai-Tao
2015-09-01
This paper addresses model predictive control schemes for consensus in multi-agent systems (MASs) with discrete-time single-integrator dynamics under switching directed interaction graphs. The control horizon is extended to be greater than one, which endows the closed-loop system with extra degrees of freedom. We derive sufficient conditions on the sampling period and the interaction graph to achieve consensus by using the property of infinite products of stochastic matrices. Consensus can be achieved asymptotically if the sampling period is selected such that the interaction graph among agents jointly has a directed spanning tree. Significantly, if the interaction graph always has a spanning tree, one can select an arbitrarily large sampling period to guarantee consensus. Finally, several simulations are conducted to illustrate the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
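The role of products of stochastic matrices is easy to see in a small example: two row-stochastic matrices, neither of which contains a directed spanning tree on its own, still drive the agents to consensus when alternated, because jointly they do. The graphs and values below are illustrative.

```python
def consensus_step(x, A):
    # One discrete-time update x <- A x with a row-stochastic matrix A:
    # each agent replaces its state by a convex combination of neighbours.
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# Two switching graphs that only *jointly* contain a directed spanning tree:
A1 = [[0.5, 0.5, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # agent 0 listens to 1
A2 = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]  # agent 1 listens to 2
x = [0.0, 5.0, 10.0]
for k in range(200):
    x = consensus_step(x, A1 if k % 2 == 0 else A2)
print(max(x) - min(x))  # the disagreement shrinks toward zero
```

Here agent 2 never listens to anyone, so all states converge to its value, which is exactly the spanning-tree root acting as a leader.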
Dynamic Noise and its Role in Understanding Epidemiological Processes
NASA Astrophysics Data System (ADS)
Stollenwerk, Nico; Aguiar, Maíra
2010-09-01
We investigate the role of dynamic noise in understanding epidemiological systems, such as influenza or dengue fever, by deriving stochastic ordinary differential equations from Markov processes for discrete populations. This approach allows for an easy analysis of dynamical noise transitions between co-existing attractors.
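As a concrete instance of such a derivation, the discrete SIS epidemic chain with infection rate beta and recovery rate gamma has the diffusion approximation dI = (beta*I*(N-I)/N - gamma*I) dt + sqrt(beta*I*(N-I)/N + gamma*I) dW, which can be integrated by Euler-Maruyama. The SIS choice and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sis_sde(N=1000, I0=100, beta=0.6, gamma=0.3, T=50.0, dt=0.01, seed=3):
    # Euler-Maruyama for the diffusion approximation of the SIS chain:
    # drift = beta*I*(N-I)/N - gamma*I           (net infection rate)
    # noise = sqrt(beta*I*(N-I)/N + gamma*I)     (demographic fluctuations)
    rng = random.Random(seed)
    I, t = float(I0), 0.0
    while t < T:
        drift = beta * I * (N - I) / N - gamma * I
        diff = math.sqrt(max(beta * I * (N - I) / N + gamma * I, 0.0))
        I += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        I = min(max(I, 0.0), float(N))   # clip the state to [0, N]
        t += dt
    return I

print(sis_sde())  # fluctuates about the endemic level N*(1 - gamma/beta) = 500
```

The sqrt term is the dynamic noise inherited from the discrete Markov jumps; it vanishes only in the infinite-population limit.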
2012-05-01
additive Gaussian noise (AGN) [1], [11]. We focus on threshold communication systems because, in the underwater environment, noncoherent communication techniques are affected both by the noise and by the threshold level.
Stochastic resetting in backtrack recovery by RNA polymerases
NASA Astrophysics Data System (ADS)
Roldán, Édgar; Lisica, Ana; Sánchez-Taltavull, Daniel; Grill, Stephan W.
2016-06-01
Transcription is a key process in gene expression, in which RNA polymerases produce a complementary RNA copy from a DNA template. RNA polymerization is frequently interrupted by backtracking, a process in which polymerases perform a random walk along the DNA template. Recovery of polymerases from the transcriptionally inactive backtracked state is determined by a kinetic competition between one-dimensional diffusion and RNA cleavage. Here we describe backtrack recovery as a continuous-time random walk, where the time for a polymerase to recover from a backtrack of a given depth is described as a first-passage time of a random walker to reach an absorbing state. We represent RNA cleavage as a stochastic resetting process and derive exact expressions for the recovery time distributions and mean recovery times from a given initial backtrack depth for both continuous and discrete-lattice descriptions of the random walk. We show that recovery time statistics do not depend on the discreteness of the DNA lattice when the rate of one-dimensional diffusion is large compared to the rate of cleavage.
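The discrete-lattice description is simple to realize as a simulation: the polymerase hops one base in either direction at rate k, and at rate r cleavage resets it directly to the recovered state. The specific rates below, and treating cleavage as instantaneous recovery, are illustrative assumptions consistent with the resetting picture.

```python
import random

def recovery_time(n0, rng, k=1.0, r=0.1):
    # First-passage time to depth 0 for a +/-1 random walk (rate k each
    # direction) subject to stochastic resetting by cleavage (rate r).
    t, n = 0.0, n0
    while n > 0:
        total = 2.0 * k + r
        t += rng.expovariate(total)
        u = rng.random() * total
        if u < r:
            return t                    # cleavage rescues the polymerase
        n += 1 if u < r + k else -1     # otherwise diffuse one step
    return t                            # walked back to the active state

rng = random.Random(4)
mean_t = sum(recovery_time(5, rng) for _ in range(2000)) / 2000.0
print(mean_t)  # bounded above by the mean cleavage time 1/r = 10
```

Without cleavage the unbiased walk has infinite mean first-passage time; the resetting term is what makes the mean recovery time finite.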
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Discrete stochastic charging of aggregate grains
NASA Astrophysics Data System (ADS)
Matthews, Lorin S.; Shotorban, Babak; Hyde, Truell W.
2018-05-01
Dust particles immersed in a plasma environment become charged through the collection of electrons and ions at random times, causing the dust charge to fluctuate about an equilibrium value. Small grains (with radii less than 1 μm) or grains in a tenuous plasma environment are sensitive to single additions of electrons or ions. Here we present a numerical model that allows examination of discrete stochastic charge fluctuations on the surface of aggregate grains and determines the effect of these fluctuations on the dynamics of grain aggregation. We show that the mean and standard deviation of charge on aggregate grains follow the same trends as those predicted for spheres having an equivalent radius, though aggregates exhibit larger variations from the predicted values. In some plasma environments, these charge fluctuations occur on timescales which are relevant for dynamics of aggregate growth. Coupled dynamics and charging models show that charge fluctuations tend to produce aggregates which are much more linear or filamentary than aggregates formed in an environment where the charge is stationary.
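The essence of discrete stochastic charging, one electron or ion at a time with charge-dependent arrival rates, can be sketched for a single grain. The rate expressions below are schematic (neither OML theory nor the aggregate-surface model of the paper) and the constants are illustrative.

```python
import math
import random

def charge_history(steps=5000, seed=5):
    # Charge number Z changes by one electron (-1) or one ion (+1) at a
    # time; schematic charge-dependent rates make electron collection
    # less likely, and ion collection more likely, as Z grows negative.
    rng = random.Random(seed)
    Z, history = 0, []
    for _ in range(steps):
        r_e = 10.0 * math.exp(min(Z, 0) / 5.0)   # electron arrival rate
        r_i = 1.0 * (1.0 + max(-Z, 0) / 5.0)     # ion arrival rate
        Z += 1 if rng.random() * (r_e + r_i) < r_i else -1
        history.append(Z)
    return history

h = charge_history()
mean_Z = sum(h[-2000:]) / 2000.0
print(mean_Z)  # the charge fluctuates about a negative equilibrium value
```

Because electrons are faster, the equilibrium charge is negative, and the single-unit jumps produce the fluctuations about it that matter for small grains.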
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
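A single stochastic map is just a simulated CTMC trajectory along a branch, from which the two mapping summaries, substitution counts and dwelling times, are read off. The two-state symmetric chain below is an illustrative assumption; for it the jump count over a branch of length t is Poisson with mean t.

```python
import random

def stochastic_map(Q, t_branch, start, rng):
    # One substitution history of a two-state CTMC along a branch: returns
    # the number of substitutions and the time dwelt in each state.
    t, state, subs = 0.0, start, 0
    dwell = [0.0, 0.0]
    while True:
        wait = rng.expovariate(-Q[state][state])
        if t + wait >= t_branch:
            dwell[state] += t_branch - t
            return subs, dwell
        dwell[state] += wait
        t += wait
        state = 1 - state        # with two states, every jump switches
        subs += 1

Q = [[-1.0, 1.0], [1.0, -1.0]]   # symmetric two-state rate matrix
rng = random.Random(6)
counts = [stochastic_map(Q, 2.0, 0, rng)[0] for _ in range(5000)]
print(sum(counts) / len(counts))  # near the analytic mean: rate * length = 2
```

Estimating the mean and variance from such replicates is exactly the simulation cost the paper's dynamic programming algorithm removes.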
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations
NASA Astrophysics Data System (ADS)
Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.
2017-12-01
Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, a conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
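The difference between the risk-neutral and the risk-averse view lies in the risk measure applied to the second-stage cost. A minimal sketch of conditional value-at-risk (CVaR) over equally likely scenarios, with made-up penalty numbers:

```python
def cvar(losses, alpha=0.9):
    # Conditional value-at-risk: the mean loss over the worst (1 - alpha)
    # fraction of equally likely scenarios.
    ordered = sorted(losses)
    tail = ordered[int(len(ordered) * alpha):]
    return sum(tail) / len(tail)

# Ten equally likely second-stage penalty scenarios (made-up numbers):
scenarios = [0, 1, 2, 3, 4, 5, 6, 7, 8, 100]
expected = sum(scenarios) / len(scenarios)
print(expected, cvar(scenarios))
# the risk-neutral view sees only the mean; the risk-averse view also
# penalises the catastrophic scenario that dominates the upper tail
```

In the full model this tail expectation enters the objective alongside the expected benefit, trading economy against stability.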
SimBA: simulation algorithm to fit extant-population distributions.
Parida, Laxmi; Haiminen, Niina
2015-03-14
Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium, etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy and time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669.
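The best-fit idea can be sketched as a plain hill climb: flip individual 0/1 alleles in a random population and keep only flips that move the realized allele frequencies toward the targets. This is a simplification in the spirit of SimBA, not the published algorithm (which also fits linkage disequilibrium and subpopulation structure).

```python
import random

def hill_climb_population(target_freqs, n_indiv=100, iters=5000, seed=7):
    # Start from a random 0/1 population and keep a flip only if it does
    # not worsen the fit of that site's allele frequency to its target.
    rng = random.Random(seed)
    n_sites = len(target_freqs)
    pop = [[rng.randint(0, 1) for _ in range(n_sites)] for _ in range(n_indiv)]

    def error(site):
        freq = sum(ind[site] for ind in pop) / n_indiv
        return abs(freq - target_freqs[site])

    for _ in range(iters):
        i, s = rng.randrange(n_indiv), rng.randrange(n_sites)
        before = error(s)
        pop[i][s] ^= 1
        if error(s) > before:     # revert flips that worsen the fit
            pop[i][s] ^= 1
    return pop

target = [0.1, 0.5, 0.9]
pop = hill_climb_population(target)
freqs = [sum(ind[s] for ind in pop) / len(pop) for s in range(len(target))]
print(freqs)  # close to the requested allele frequencies
```

Because accepted moves never increase the per-site error, the fit converges monotonically, one reason a best-fit search can hit input specifications so stringently.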
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given the autocorrelation function together with estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
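A minimal version of the simulation step is to generate a stationary Gaussian process with a chosen autocorrelation, here an AR(1) process with autocorrelation phi^|k|, and count overshoots as upcrossings of a level. The process and parameter choices are illustrative assumptions, not those of the report.

```python
import math
import random

def ar1_upcrossings(level, phi=0.9, n=200000, seed=8):
    # Stationary Gaussian AR(1) process with unit marginal variance and
    # autocorrelation phi**|k|; an overshoot is counted as an upcrossing.
    rng = random.Random(seed)
    sigma_eps = math.sqrt(1.0 - phi * phi)   # keeps the marginal variance at 1
    x_prev = rng.gauss(0.0, 1.0)
    count = 0
    for _ in range(n):
        x = phi * x_prev + sigma_eps * rng.gauss(0.0, 1.0)
        if x_prev <= level < x:
            count += 1
        x_prev = x
    return count

c0, c2 = ar1_upcrossings(0.0), ar1_upcrossings(2.0)
print(c0, c2)  # overshoots of the higher level are much rarer
```

Repeating such counts over many realizations gives the empirical frequency distribution of overshoots as a function of the level, mean, and variance.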
Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project
Boore, David
2015-01-01
Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudoabsolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
Parallel multiscale simulations of a brain aneurysm
Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em
2012-01-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.
PMID:23734066
Arruda, Andréia Gonçalves; Friendship, Robert; Carpenter, Jane; Greer, Amy; Poljak, Zvonimir
2016-01-01
The objective of this study was to develop a discrete event agent-based stochastic model to explore the likelihood of the occurrence of porcine reproductive and respiratory syndrome (PRRS) outbreaks in swine herds with different PRRS control measures in place. The control measures evaluated included vaccination with a modified-live attenuated vaccine and live-virus inoculation of gilts, and both were compared to a baseline scenario where no control measures were in place. A typical North American 1,000-sow farrow-to-wean swine herd was used as a model, with production and disease parameters estimated from the literature and expert opinion. The model constructed herein was able to capture not only individual animal heterogeneity in immunity to and shedding of the PRRS virus, but also the dynamic animal flow and contact structure typical of such herds under field conditions. The model outcomes included the maximum number of females infected per simulation, the time at which that maximum occurred, and the incidence of infected weaned piglets during the first year after challenge-virus introduction. Results showed that the baseline scenario produced a larger percentage of simulations resulting in outbreaks compared to the control scenarios, and interestingly some of the outbreaks occurred long after virus introduction. The live-virus inoculation scenario showed promising results, with fewer simulations resulting in outbreaks than the other scenarios, but the negative impacts of maintaining a PRRS-positive population should be considered. Finally, under the assumptions of the current model, neither of the control strategies prevented the infection from spreading to the piglet population, which highlights the importance of maintaining internal biosecurity practices at the farrowing room level.
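The core within-herd transmission step can be caricatured with a discrete-time chain-binomial model: each susceptible sow is infected in a time step with probability 1 - (1-beta)^I, and each infectious animal recovers with a fixed probability. This omits the vaccination and inoculation scenarios, animal flow, and individual heterogeneity of the actual model; all rates are hypothetical.

```python
import random

def chain_binomial(n_sows, beta, recovery, introductions, steps, seed=2):
    """Discrete-time stochastic (chain-binomial) sketch of within-herd
    spread. Returns the peak number of simultaneously infectious animals."""
    rng = random.Random(seed)
    S, I = n_sows - introductions, introductions
    peak = I
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - beta) ** I        # per-susceptible infection prob.
        new_inf = sum(1 for _ in range(S) if rng.random() < p_inf)
        recov = sum(1 for _ in range(I) if rng.random() < recovery)
        S -= new_inf
        I += new_inf - recov
        peak = max(peak, I)
    return peak
```

Running many replicates and recording how often the peak exceeds a threshold gives the outbreak-likelihood summary the abstract reports.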
Activating clinical trials: a process improvement approach.
Martinez, Diego A; Tsalatsanis, Athanasios; Yalcin, Ali; Zayas-Castro, José L; Djulbegovic, Benjamin
2016-02-24
The administrative process associated with clinical trial activation has been criticized as costly, complex, and time-consuming. Prior research has concentrated on identifying administrative barriers and proposing various solutions to reduce activation time, and consequently associated costs. Here, we expand on previous research by incorporating social network analysis and discrete-event simulation to support process improvement decision-making. We searched for all operational data associated with the administrative process of activating industry-sponsored clinical trials at the Office of Clinical Research of the University of South Florida in Tampa, Florida. We limited the search to those trials initiated and activated between July 2011 and June 2012. We described the process using value stream mapping, studied the interactions of the various process participants using social network analysis, and modeled potential process modifications using discrete-event simulation. The administrative process comprised 5 sub-processes, 30 activities, 11 decision points, 5 loops, and 8 participants. The mean activation time was 76.6 days. Rate-limiting sub-processes were those of contract and budget development. Key participants during contract and budget development were the Office of Clinical Research, sponsors, and the principal investigator. Simulation results indicate that slight increases in the number of trials arriving at the Office of Clinical Research would increase activation time by 11 %. Also, increasing the efficiency of contract and budget development would reduce the activation time by 28 %. Finally, better synchronization between contract and budget development would reduce time spent on batching documentation; however, no improvements would be attained in total activation time. The presented process improvement analytic framework not only identifies administrative barriers, but also helps to devise and evaluate potential improvement scenarios.
The strength of our framework lies in its system analysis approach that recognizes the stochastic duration of the activation process and the interdependence between process activities and entities.
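A stripped-down discrete-event sketch shows how a shared resource can dominate activation time: trials pass through sub-processes sequentially, and a single "office" serializes the contract/budget step. The sub-process durations, arrival spacing, and bottleneck choice are invented for illustration, not taken from the study.

```python
import random

def activation_times(n_trials, mean_durations, seed=3):
    """Toy discrete-event sketch: each trial passes through the sub-processes
    in order; the step at index 1 (a hypothetical contract/budget bottleneck)
    requires a single shared office, so queueing delays accumulate.
    Returns the mean activation time over all trials."""
    rng = random.Random(seed)
    office_free = 0.0
    results = []
    for k in range(n_trials):
        t = k * 5.0                          # trials arrive every 5 days
        for i, mu in enumerate(mean_durations):
            d = rng.expovariate(1.0 / mu)    # stochastic step duration
            if i == 1:                       # wait for the shared office
                t = max(t, office_free)
                office_free = t + d
            t += d
        results.append(t - k * 5.0)          # activation time for this trial
    return sum(results) / len(results)
```

Because the bottleneck's mean service time exceeds the inter-arrival time, the queue grows and the mean activation time far exceeds the sum of the step means, mirroring the rate-limiting role of contract and budget development.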
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.
2016-12-01
Extratropical cyclones (ETCs) are the primary driver of storm surge events along the UK and northwest mainland Europe coastlines. In an effort to evaluate the storm surge risk in coastal communities in this region, a stochastic catalog is developed by perturbing the historical storm seeds of European ETCs to account for 10,000 years of possible ETCs. Numerical simulation of the storm surge generated by the full 10,000-year stochastic catalog, however, is computationally expensive and may take several months to complete with available computational resources. A new statistical regression model is developed to select the major surge-generating events from the stochastic ETC catalog. This regression model is based on the maximum storm surge, obtained via numerical simulations using a calibrated version of the Delft3D-FM hydrodynamic model with a relatively coarse mesh, of 1750 historical ETC events that occurred over the past 38 years in Europe. These numerically simulated surge values were regressed against the local sea level pressure and the U and V components of the wind field at the location of 196 tide gauge stations near the UK and northwest mainland Europe coastal areas. The regression model suggests that storm surge values in the area of interest are highly correlated with the U- and V-components of wind speed, as well as the sea level pressure. Based on these correlations, the regression model was then used to select surge-generating storms from the 10,000-year stochastic catalog. Results suggest that roughly 105,000 events out of 480,000 stochastic storms are surge-generating events and need to be considered for numerical simulation using a hydrodynamic model. The selected stochastic storms were then simulated in Delft3D-FM, and the final refinement of the storm population was performed based on return period analysis of the 1750 historical event simulations at each of the 196 tide gauges in preparation for Delft3D-FM fine mesh simulations.
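The screening regression boils down to fitting surge as a function of meteorological predictors. A minimal sketch with a single predictor (wind speed) and synthetic data shows the ordinary-least-squares step; the coefficients, noise level, and single-predictor form are assumptions, and the actual study fits U, V, and pressure against Delft3D-FM output.

```python
import random

def fit_surge_regression(n=500, seed=4):
    """Generate synthetic (wind speed, surge) pairs with
    surge = a + b*wind + noise, then recover (a, b) by least squares."""
    rng = random.Random(seed)
    a_true, b_true = 0.3, 0.05             # hypothetical coefficients
    xs = [rng.uniform(0.0, 40.0) for _ in range(n)]   # wind speed, m/s
    ys = [a_true + b_true * x + rng.gauss(0.0, 0.05) for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b
```

Once fitted, such a regression predicts a maximum surge for each stochastic storm, and only storms exceeding a surge threshold are passed to the expensive hydrodynamic model.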
Simulation of lithium ion battery replacement in a battery pack for application in electric vehicles
NASA Astrophysics Data System (ADS)
Mathew, M.; Kong, Q. H.; McGrory, J.; Fowler, M.
2017-05-01
The design and optimization of the battery pack in an electric vehicle (EV) is essential for continued integration of EVs into the global market. Reconfigurable battery packs are of significant interest lately as they allow for damaged cells to be removed from the circuit, limiting their impact on the entire pack. This paper provides a simulation framework that models a battery pack and examines the effect of replacing damaged cells with new ones. The cells within the battery pack vary stochastically and the performance of the entire pack is evaluated under different conditions. The results show that by changing out cells in the battery pack, the state of health of the pack can be consistently maintained above a certain threshold value selected by the user. In situations where the cells are checked for replacement at discrete intervals, referred to as maintenance event intervals, it is found that the length of the interval is dependent on the mean time to failure of the individual cells. The simulation framework as well as the results from this paper can be utilized to better optimize lithium ion battery pack design in EVs and make long term deployment of EVs more economically feasible.
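The replacement policy can be sketched as a Monte Carlo loop: cells degrade stochastically each step, and at every maintenance event any cell below a state-of-health threshold is swapped for a new one. The degradation rates, pack size, and interval below are hypothetical, not the paper's calibrated values.

```python
import random

def pack_soh_with_replacement(n_cells, n_steps, interval, threshold, seed=5):
    """Each cell's state of health (SoH) fades by a stochastic amount per
    step; every `interval` steps (a maintenance event), cells whose SoH is
    below `threshold` are replaced with new ones (SoH = 1.0).
    Returns the pack SoH (mean over cells) at the final step."""
    rng = random.Random(seed)
    soh = [1.0] * n_cells
    for step in range(1, n_steps + 1):
        for i in range(n_cells):
            soh[i] -= rng.uniform(0.0005, 0.0015)   # stochastic fade per step
        if step % interval == 0:                    # maintenance event
            soh = [1.0 if s < threshold else s for s in soh]
    return sum(soh) / n_cells
```

Comparing runs with and without replacement shows the policy holding pack SoH near the threshold while an unmaintained pack degrades toward zero, which is the qualitative result the abstract describes.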
Computational methods for diffusion-influenced biochemical reactions.
Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G
2007-08-01
We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of reactants. Using numerical simulations of reversible binding of a pair of molecules we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found under the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
NASA Astrophysics Data System (ADS)
Goienetxea Uriarte, A.; Ruiz Zúñiga, E.; Urenda Moris, M.; Ng, A. H. C.
2015-05-01
Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. The process of improving healthcare systems involves the necessity to deal with trade-off optimal solutions that take into consideration multiple variables and objectives. Complementing DES with Multi-Objective Optimization (SMO) creates a superior base for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions that considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process.
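The "range of optimal solutions" handed to decision makers is a Pareto front: configurations not dominated in any objective. A minimal non-dominated filter (minimizing all objectives, e.g. length of stay and waiting time; the example points are invented) looks like this:

```python
def pareto_front(points):
    """Return the non-dominated points for minimization of all objectives:
    a point is dropped if some other (distinct) point is <= in every
    objective, i.e. at least as good everywhere and strictly better
    somewhere. Assumes no duplicate points."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

In an SMO workflow, each point would be the simulated objective vector of one candidate ED configuration, and the front is the trade-off set presented to stakeholders.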
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence, and the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
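Stochastic collocation in its simplest form evaluates the deterministic model at a few quadrature nodes of the input PDF and recombines the outputs into output statistics. The sketch below uses the 3-point probabilists' Gauss-Hermite rule for one Gaussian input (e.g. the angle of incidence); the real study propagates several inputs through URANS solves rather than a cheap function.

```python
import math

def collocation_mean_var(f, mu, sigma):
    """3-point Gauss-Hermite stochastic collocation: propagate a Gaussian
    uncertain input (mean mu, s.d. sigma) through model f and estimate the
    output mean and variance from three deterministic model runs.
    The probabilists' 3-point rule is exact for polynomials to degree 5."""
    nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
    weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
    vals = [f(mu + sigma * z) for z in nodes]
    mean = sum(w * v for w, v in zip(weights, vals))
    second = sum(w * v * v for w, v in zip(weights, vals))
    return mean, second - mean * mean
```

For f(x) = x^2 with a standard normal input this recovers the exact mean 1 and variance 2, illustrating why a handful of solver runs can replace a Monte Carlo sweep when the response is smooth.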
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
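The multiplicative structure of SPPT can be sketched in one dimension: each parametrised tendency is multiplied by (1 + r), with r following an AR(1) process so perturbations are correlated in time. This omits SPPT's spatial correlation pattern, and the amplitude and decorrelation parameters below are illustrative assumptions, not ECMWF's operational settings.

```python
import random

def sppt_perturbed_tendencies(tendencies, sigma=0.3, phi=0.9, seed=6):
    """SPPT-style sketch: multiply each physics tendency by (1 + r_t),
    where r_t is an AR(1) process with stationary s.d. `sigma` and
    one-step autocorrelation `phi`."""
    rng = random.Random(seed)
    r = 0.0
    out = []
    sigma_e = sigma * (1.0 - phi * phi) ** 0.5   # keeps Var(r_t) = sigma^2
    for tend in tendencies:
        r = phi * r + rng.gauss(0.0, sigma_e)
        out.append((1.0 + r) * tend)
    return out
```

Note the multiplicative form means a zero tendency is never perturbed, one of the properties the coarse-graining study described above seeks to test against high-resolution "truth".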
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Modeling Anti-Air Warfare With Discrete Event Simulation and Analyzing Naval Convoy Operations
2016-06-01
Master's thesis by Ali E. Opcin, June 2016; Thesis Advisor: Arnold H. Buss. From the abstract: In this study, a discrete event simulation (DES) was built by modeling ships, and their sensors and weapons, to simulate convoy operations under …
Partial Ordering and Stochastic Resonance in Discrete Memoryless Channels
2012-05-01
From the report documentation: part of a project on "… Methods for Underwater Wireless Sensor Networks", whose goal is to analyze and develop noncoherent communication methods at the physical layer for target … Related publication: "Capacity Behavior for Simple Models of Optical Fiber Communication," 8th International Conf. on Communications (COMM 2010), Bucharest, pp. 1-6, July 2010.
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul
2017-12-01
This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.
Winkelmann, Stefanie; Schütte, Christof
2017-09-21
Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics that do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.
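The discrete reference against which all the hybrid reductions are judged is the exact Markov jump process, simulated by the Gillespie algorithm. A minimal sketch for a negatively self-regulated species (production rate damped by the current copy number via a repressive Hill-type term; all rate parameters hypothetical):

```python
import math
import random

def gillespie_self_regulated(k_max, K, gamma, t_end, seed=7):
    """Gillespie SSA sketch of a negatively self-regulated gene product:
    production with propensity k_max*K/(K + x) and degradation with
    propensity gamma*x. Returns the copy number at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        a_prod = k_max * K / (K + x)
        a_deg = gamma * x
        a_tot = a_prod + a_deg
        dt = -math.log(1.0 - rng.random()) / a_tot   # exponential waiting time
        if t + dt > t_end:
            return x
        t += dt
        if rng.random() * a_tot < a_prod:            # choose which event fires
            x += 1
        else:
            x -= 1
```

Hybrid methods replace part of this exact jump process (e.g. the high-copy species) with Langevin or deterministic dynamics; comparing their stationary fluctuations against the SSA is exactly the kind of approximation-quality test the paper performs.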
Scaling theory for the quasideterministic limit of continuous bifurcations.
Kessler, David A; Shnerb, Nadav M
2012-05-01
Deterministic rate equations are widely used in the study of stochastic, interacting particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model that belongs to the directed percolation equivalence class and the Susceptible-Infected-Recovered (SIR) model belonging to the dynamic percolation class. Implementing the Ginzburg criteria we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site and κ=2/(d_{u}-d), with d_{u} the upper critical dimension. Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190
2015-03-15
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
Integral projection models for finite populations in a stochastic environment.
Vindenes, Yngvild; Engen, Steinar; Saether, Bernt-Erik
2011-05-01
Continuous types of population structure occur when continuous variables such as body size or habitat quality affect the vital parameters of individuals. These structures can give rise to complex population dynamics and interact with environmental conditions. Here we present a model for continuously structured populations with finite size, including both demographic and environmental stochasticity in the dynamics. Using recent methods developed for discrete age-structured models we derive the demographic and environmental variance of the population growth as functions of a continuous state variable. These two parameters, together with the expected population growth rate, are used to define a one-dimensional diffusion approximation of the population dynamics. Thus, a substantial reduction in complexity is achieved as the dynamics of the complex structured model can be described by only three population parameters. We provide methods for numerical calculation of the model parameters and demonstrate the accuracy of the diffusion approximation by computer simulation of specific examples. The general modeling framework makes it possible to analyze and predict future dynamics and extinction risk of populations with various types of structure, and to explore consequences of changes in demography caused by, e.g., climate change or different management decisions. Our results are especially relevant for small populations that are often of conservation concern.
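The paper's key reduction is that three parameters — expected growth rate, environmental variance, and demographic variance — define a one-dimensional diffusion for total population size. An Euler-scheme sketch (all parameter values hypothetical) makes the distinct scaling of the two noise sources explicit: environmental variance enters proportional to N^2, demographic variance proportional to N.

```python
import math
import random

def simulate_population_growth(n0, r, var_e, var_d, t_steps, seed=8):
    """Euler sketch of the one-dimensional diffusion approximation:
    per step the population changes by N*r plus Gaussian noise with
    variance var_e*N^2 (environmental) + var_d*N (demographic).
    Extinction (N <= 0) is absorbing."""
    rng = random.Random(seed)
    n = float(n0)
    for _ in range(t_steps):
        if n <= 0.0:
            return 0.0
        noise = rng.gauss(0.0, math.sqrt(var_e * n * n + var_d * n))
        n = n + r * n + noise
    return n
```

Because demographic noise scales only with N, it dominates at small population sizes, which is why the diffusion approximation is especially relevant for the small populations of conservation concern mentioned in the abstract.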
The ISI distribution of the stochastic Hodgkin-Huxley neuron.
Rowat, Peter F; Greenwood, Priscilla E
2014-01-01
The simulation of ion-channel noise has an important role in computational neuroscience. In recent years several approximate methods of carrying out this simulation have been published, based on stochastic differential equations, and all giving slightly different results. The obvious, and essential, question is: which method is the most accurate and which is most computationally efficient? Here we make a contribution to the answer. We compare interspike interval histograms from simulated data using four different approximate stochastic differential equation (SDE) models of the stochastic Hodgkin-Huxley neuron, as well as the exact Markov chain model simulated by the Gillespie algorithm. One of the recent SDE models is the same as the Kurtz approximation first published in 1978. All the models considered give similar ISI histograms over a wide range of deterministic and stochastic input. Three features of these histograms are an initial peak, followed by one or more bumps, and then an exponential tail. We explore how these features depend on deterministic input and on level of channel noise, and explain the results using the stochastic dynamics of the model. We conclude with a rough ranking of the four SDE models with respect to the similarity of their ISI histograms to the histogram of the exact Markov chain model.
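Turning a list of simulated spike times into the interspike-interval (ISI) histogram compared in such studies is a small amount of bookkeeping; a minimal sketch (the bin width is an arbitrary choice, and spike times are assumed already extracted from the voltage trace):

```python
def interspike_intervals(spike_times):
    """ISIs are the successive differences of sorted spike times."""
    return [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]

def isi_histogram(isis, bin_width):
    """Bin ISIs into a dict mapping bin index -> count."""
    hist = {}
    for isi in isis:
        b = int(isi // bin_width)
        hist[b] = hist.get(b, 0) + 1
    return hist
```

Features such as the initial peak, trailing bumps, and exponential tail can then be read off the binned counts.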
NASA Astrophysics Data System (ADS)
Loschko, Matthias; Wöhling, Thomas; Rudolph, David L.; Cirpka, Olaf A.
2018-01-01
Many groundwater contaminants react with components of the aquifer matrix, causing a depletion of the aquifer's reactivity with time. We discuss conceptual simplifications of reactive transport that allow the implementation of a decreasing reaction potential in reactive-transport simulations in chemically and hydraulically heterogeneous aquifers without relying on a fully explicit description. We replace spatial coordinates with travel times and use the concept of relative reactivity, which represents the reaction-partner supply from the matrix relative to a reference. Microorganisms facilitating the reactions are not explicitly modeled. Solute mixing is neglected. Streamlines, obtained by particle tracking, are discretized in travel-time increments with variable content of reaction partners in the matrix. As an exemplary reactive system, we consider aerobic respiration and denitrification with simplified reaction equations: dissolved oxygen undergoes conditional zero-order decay, and nitrate follows first-order decay, which is inhibited in the presence of dissolved oxygen. Both reactions deplete the bioavailable organic carbon of the matrix, which in turn determines the relative reactivity. These simplifications reduce the computational effort, facilitating stochastic simulations of reactive transport on the aquifer scale. In a one-dimensional test case with a more detailed description of the reactions, we derive a potential relationship between the bioavailable organic-carbon content and the relative reactivity. In a three-dimensional steady-state test case, we use the simplified model to calculate the decreasing denitrification potential of an artificial aquifer over 200 years in an ensemble of 200 members. We demonstrate that the uncertainty in predicting the nitrate breakthrough in a heterogeneous aquifer decreases with increasing scale of observation.
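A minimal explicit-Euler step of the simplified reaction system along a streamline, in travel time, might look as follows. All rate constants, the yield factor coupling consumption to organic-carbon depletion, and the multiplicative inhibition term are hypothetical placeholders, not the authors' parameterisation.

```python
def streamline_step(o2, no3, foc, dt, k_o2=1.0, k_no3=0.2, inhib=0.1, y=0.01, foc_ref=1.0):
    """One explicit-Euler step along travel time (hypothetical rate constants).

    o2  : dissolved oxygen, conditional zero-order decay scaled by relative reactivity
    no3 : nitrate, first-order decay, inhibited while oxygen is present
    foc : bioavailable organic carbon of the matrix; its depletion by both
          reactions drives the relative reactivity rr = foc / foc_ref."""
    rr = foc / foc_ref                                # relative reactivity
    r_o2 = k_o2 * rr if o2 > 0 else 0.0               # zero-order, conditional on O2
    r_no3 = k_no3 * rr * no3 * inhib / (inhib + o2)   # inhibited first-order
    o2 = max(o2 - r_o2 * dt, 0.0)
    no3 = max(no3 - r_no3 * dt, 0.0)
    foc = max(foc - y * (r_o2 + r_no3) * dt, 0.0)     # matrix carbon depletion
    return o2, no3, foc
```

Iterating this step along the travel-time increments of each particle-tracked streamline yields breakthrough curves without a full spatially explicit simulation.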
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including Stochastic Master Equations and Probabilistic Boolean Networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally very expensive. On the other hand, the Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used Stochastic Simulation Algorithm for equivalent accuracy.
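The Zassenhaus formula underlying the approximation expands the exponential of a sum as an ordered product, e^{A+B} = e^A e^B e^{-[A,B]/2} ⋯, with higher-order correction factors built from nested commutators. A small numerical sanity check of the second-order truncation (toy matrices of the author's choosing; a Taylor-series matrix exponential, adequate for matrices of small norm):

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

def zassenhaus_2nd(A, B):
    """Second-order Zassenhaus truncation: e^{A+B} ~ e^A e^B e^{-[A,B]/2}."""
    comm = A @ B - B @ A                      # commutator [A, B]
    return expm_taylor(A) @ expm_taylor(B) @ expm_taylor(-0.5 * comm)

# toy non-commuting matrices
A = np.array([[0.0, 0.1], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.2, 0.0]])
exact = expm_taylor(A + B)
approx = zassenhaus_2nd(A, B)
```

The remaining error is governed by third-order nested commutators, which is what makes the truncation attractive when the regulatory connectivity keeps those commutators sparse.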
Hybrid approaches for multiple-species stochastic reaction–diffusion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spill, Fabian (Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; E-mail: fspill@bu.edu); Guerrero, Pilar
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
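A toy version of such a coupling — compartment-based stochastic hopping on one side of a 1D lattice, an explicit finite-difference update on the other, joined at a single interface — can be sketched as follows. This is an illustration of the general idea only, not the authors' scheme; with lattice spacing h = 1 one particle is identified with one concentration unit, and the interface exchange below conserves total mass exactly.

```python
import random

def hybrid_step(stoch, det, d, dt, h, rng):
    """One explicit step of a toy 1D hybrid diffusion model (illustrative sketch).

    stoch: integer particle counts (stochastic subdomain, left)
    det:   float concentrations (discretised mean-field PDE, right)
    The last compartment and the first PDE cell exchange mass at the interface."""
    rate = d * dt / (h * h)      # per-particle hop probability / CFL number
    new_s, new_d = stoch[:], det[:]
    # stochastic subdomain: each particle hops left or right with probability `rate`
    for i, n in enumerate(stoch):
        for _ in range(n):
            u = rng.random()
            if u < rate and i > 0:                 # hop left (reflecting at i == 0)
                new_s[i] -= 1
                new_s[i - 1] += 1
            elif rate <= u < 2 * rate:             # hop right
                new_s[i] -= 1
                if i < len(stoch) - 1:
                    new_s[i + 1] += 1
                else:
                    new_d[0] += 1.0                # crosses interface into PDE cell
    # deterministic subdomain: explicit diffusion, reflecting outer boundaries
    for j in range(len(det)):
        left = det[j - 1] if j > 0 else det[j]
        right = det[j + 1] if j < len(det) - 1 else det[j]
        new_d[j] += rate * (left - 2.0 * det[j] + right)
    # interface flux PDE -> stochastic: realise the expected flux rate*det[0]
    # as a whole particle with matching probability (assumes rate*det[0] <= 1),
    # removing the same amount from the PDE cell so total mass is conserved
    if new_d[0] >= 1.0 and rng.random() < rate * det[0]:
        new_d[0] -= 1.0
        new_s[-1] += 1
    return new_s, new_d
```

A production scheme would, as in the paper, place the interface where the mean-field description is still accurate and allow it to move as concentrations change.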
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915