Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen
2013-07-01
Latent community discovery that combines the links and contents of a text-associated network has drawn increasing attention with the advance of social media. Most previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures because they introduce link probabilities between communities. However, these variants cannot identify the degree distributions of real networks, because they do not model the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity to model real scale-free networks with power-law degree distributions. To incorporate the contents of text-associated networks, we propose a combined model which couples the PPSB model with a discriminative model that predicts the community memberships of nodes from their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters of the two models. Experiments on synthetic and real networks demonstrate that the proposed models yield better performance than previous models, especially on networks with general structures.
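The PPSB model itself is not published as code in this abstract; the following is a minimal, hypothetical sketch (all names, rates, and distributions are illustrative) of sampling a directed network from a block model in which per-node productivity and popularity weights modulate out-link and in-link rates, in the spirit of degree-corrected block models:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppsb_like(z, omega, prod, pop, rng):
    """Sample a directed multigraph adjacency matrix.

    z     : community label of each node
    omega : K x K block link-rate matrix (off-diagonal entries allow
            general structures such as bipartite-like ones)
    prod  : per-node productivity (propensity to produce links)
    pop   : per-node popularity (propensity to receive links)
    """
    # expected number of i -> j links: prod_i * omega[z_i, z_j] * pop_j
    rate = np.outer(prod, pop) * omega[np.ix_(z, z)]
    A = rng.poisson(rate)
    np.fill_diagonal(A, 0)   # no self-loops
    return A

z = np.array([0] * 20 + [1] * 20)
omega = np.array([[0.0, 0.4],      # zero diagonal: a bipartite-like structure
                  [0.4, 0.0]])
prod = rng.gamma(2.0, 0.5, size=40)   # heterogeneous out-degree propensities
pop = rng.gamma(2.0, 0.5, size=40)    # heterogeneous in-degree propensities
A = sample_ppsb_like(z, omega, prod, pop, rng)
```

With a zero diagonal in `omega`, all sampled links cross the two groups, which a density-seeking community method would miss but a block model can recover.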
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as those for ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed to reduce the cost of MAPOD methods while ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output, and the POD curves are then generated from those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom-hole flaw in an aluminum block. The results show that the stochastic surrogates converge on the statistics at least two orders of magnitude faster than direct Monte Carlo sampling (MCS). Moreover, evaluating the stochastic surrogate models is over three orders of magnitude faster than evaluating the underlying simulation model for this case, the UTSim2 model.
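The UTSim2 simulator is not reproduced here; as a hedged sketch of the OLS flavor of NIPC, the toy below fits a degree-6 probabilists' Hermite expansion to a stand-in model `exp(x)` with one standard-normal input. The surrogate's mean is then just the zeroth coefficient, since E[He_k(X)] = 0 for k ≥ 1:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(1)

def expensive_model(x):
    # stand-in for a physics-based NDT simulation with one Gaussian input
    return np.exp(x)

deg = 6
x = rng.standard_normal(4000)                 # samples of the random input
V = He.hermevander(x, deg)                    # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, expensive_model(x), rcond=None)

# For X ~ N(0,1), E[He_k(X)] = 0 for k >= 1, so the surrogate's mean is
# coef[0]; the exact mean of exp(X) is e^{1/2} ~= 1.6487.
mean_est = coef[0]
surrogate = lambda xnew: He.hermevander(xnew, deg) @ coef  # cheap evaluations
```

Once fitted, `surrogate` replaces the expensive model inside any Monte Carlo loop used to build POD curves, which is the source of the reported speed-up.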
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program as an equivalent two-stage program and solve it with an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on South Carolina settings. The value of the multistage stochastic programming method is also explored by comparing the model solution with those of an expected-value-based deterministic model and a two-stage stochastic model.
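The biofuel supply chain model is not reproduced in this abstract; as a toy illustration of why a stochastic solution can beat an expected-value deterministic one (the comparison the authors make), here is a hypothetical newsvendor-style capacity problem solved by brute-force scenario enumeration, with made-up prices and demands:

```python
import numpy as np

price, cost = 5.0, 3.0
demand = np.array([10, 20, 60])      # hypothetical demand scenarios
prob = np.array([0.3, 0.5, 0.2])     # scenario probabilities

def expected_profit(q):
    sales = np.minimum(q, demand)    # second-stage recourse: sell what demand allows
    return float(prob @ (price * sales) - cost * q)

qs = np.arange(0, 101)
q_sp = int(qs[np.argmax([expected_profit(q) for q in qs])])  # stochastic solution
q_ev = int(prob @ demand)            # deterministic plan: build for expected demand
vss = expected_profit(q_sp) - expected_profit(q_ev)  # value of the stochastic solution
```

Here the stochastic plan orders 20 units while the expected-value plan orders 25 and earns strictly less in expectation, so `vss` is positive; real multistage programs exhibit the same gap at a much larger scale.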
NASA Astrophysics Data System (ADS)
Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea
2018-02-01
The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on circuit functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once, stored in a BB (foundry-dependent) library, and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Nonparametric weighted stochastic block models
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.
2018-01-01
We present a Bayesian formulation of weighted stochastic block models that can be used to infer the large-scale modular structure of weighted networks, including their hierarchical organization. Our method is nonparametric, and thus does not require the prior knowledge of the number of groups or other dimensions of the model, which are instead inferred from data. We give a comprehensive treatment of different kinds of edge weights (i.e., continuous or discrete, signed or unsigned, bounded or unbounded), as well as arbitrary weight transformations, and describe an unsupervised model selection approach to choose the best network description. We illustrate the application of our method to a variety of empirical weighted networks, such as global migrations, voting patterns in congress, and neural connections in the human brain.
Clustering network layers with the strata multilayer stochastic block model.
Stanley, Natalie; Shai, Saray; Taylor, Dane; Mucha, Peter J
2016-01-01
Multilayer networks are a useful data structure for simultaneously capturing multiple types of relationships between a set of nodes. In such networks, each relational definition gives rise to a layer. While each layer provides its own set of information, community structure across layers can be collectively utilized to discover and quantify underlying relational patterns between nodes. To concisely extract information from a multilayer network, we propose to identify and combine sets of layers with meaningful similarities in community structure. In this paper, we describe the "strata multilayer stochastic block model" (sMLSBM), a probabilistic model for multilayer community structure. The central extension of the model is that there exist groups of layers, called "strata", which are defined such that all layers in a given stratum have community structure described by a common stochastic block model (SBM). That is, layers in a stratum exhibit similar node-to-community assignments and SBM probability parameters. Fitting the sMLSBM to a multilayer network provides a joint clustering that yields node-to-community and layer-to-stratum assignments, which cooperatively aid one another during inference. We describe an algorithm for separating layers into their appropriate strata and an inference technique for estimating the SBM parameters for each stratum. We demonstrate our method using synthetic networks and a multilayer network inferred from data collected in the Human Microbiome Project.
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
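HiER-leap and ER-leap accelerate the exact stochastic simulation algorithm (SSA); for orientation, here is a minimal sketch of the baseline Gillespie direct method they improve upon, applied to a hypothetical birth-death system (the reaction network and rate constants are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def ssa(x0, stoich, rates, t_end, rng):
    """Gillespie direct method.

    stoich : (n_reactions, n_species) state-update matrix
    rates  : function mapping state x to the propensity of each channel
    """
    t, x = 0.0, np.array(x0, dtype=int)
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0:
            break                              # no reaction can fire
        t += rng.exponential(1.0 / a0)         # time to the next reaction
        j = rng.choice(len(a), p=a / a0)       # which channel fires
        x = x + stoich[j]
    return x

# Birth-death example: 0 -> S at rate k1, S -> 0 at rate k2 * S
k1, k2 = 10.0, 0.1
stoich = np.array([[1], [-1]])
rates = lambda x: np.array([k1, k2 * x[0]])
x_final = ssa([0], stoich, rates, 200.0, rng)
```

The stationary copy number here is Poisson with mean k1/k2 = 100. ER-leap-style methods reach the same exact distribution while bounding propensities to accept or reject many reactions at once, and HiER-leap additionally groups channels into blocks that can be sampled in parallel.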
NASA Astrophysics Data System (ADS)
Li, Fei; Subramanian, Kartik; Chen, Minghan; Tyson, John J.; Cao, Yang
2016-06-01
The asymmetric cell division cycle in Caulobacter crescentus is controlled by an elaborate molecular mechanism governing the production, activation and spatial localization of a host of interacting proteins. In previous work, we proposed a deterministic mathematical model for the spatiotemporal dynamics of six major regulatory proteins. In this paper, we study a stochastic version of the model, which takes into account molecular fluctuations of these regulatory proteins in space and time during early stages of the cell cycle of wild-type Caulobacter cells. We test the stochastic model against experimental observations of increased variability of cycle time in cells depleted of the divJ gene product. The deterministic model predicts that overexpression of the divK gene blocks cell cycle progression in the stalked stage; however, stochastic simulations suggest that a small fraction of the mutant cells do complete the cell cycle normally.
Combining Deterministic structures and stochastic heterogeneity for transport modeling
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and the subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, which is in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests indicating values differing by orders of magnitude. Sub-scale heterogeneity is introduced within every block and can be modeled as bimodally or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
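The site-specific model is not given in the abstract; a minimal sketch of the underlying construction, with entirely hypothetical numbers, is a field of deterministic blocks whose mean conductivities differ by orders of magnitude, each overlaid with log-normal sub-scale heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-D transect: three deterministic blocks whose mean hydraulic
# conductivities differ by orders of magnitude (as pumping tests might
# indicate), with log-normal sub-scale heterogeneity inside each block.
block_means = np.array([1e-5, 1e-3, 1e-4])   # m/s, one mean per block
cells_per_block = 100
sigma_lnK = 0.5                              # sub-scale log-variability

K = np.concatenate([
    m * rng.lognormal(mean=0.0, sigma=sigma_lnK, size=cells_per_block)
    for m in block_means
])
```

Because the large contrasts live in the deterministic block means rather than in a single high-variance random field, skewed plume shapes can emerge from the block structure with only modest sub-scale variance.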
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, F.P.; Dai, J.; Kerans, C.
1998-11-01
In part 1 of this paper, the authors discussed the rock-fabric/petrophysical classes for dolomitized carbonate-ramp rocks, the effects of rock fabric and pore type on petrophysical properties, petrophysical models for analyzing wireline logs, the critical scales for defining geologic framework, and 3-D geologic modeling. Part 2 focuses on geophysical and engineering characterizations, including seismic modeling, reservoir geostatistics, stochastic modeling, and reservoir simulation. Synthetic seismograms of 30 to 200 Hz were generated to study the level of seismic resolution required to capture the high-frequency geologic features in dolomitized carbonate-ramp reservoirs. Outcrop data were collected to investigate effects of sampling interval and scale-up of block size on geostatistical parameters. Semivariogram analysis of outcrop data showed that the sill of log permeability decreases and the correlation length increases with an increase of horizontal block size. Permeability models were generated using conventional linear interpolation, stochastic realizations without stratigraphic constraints, and stochastic realizations with stratigraphic constraints. Simulations of a fine-scale Lawyer Canyon outcrop model were used to study the factors affecting waterflooding performance. Simulation results show that waterflooding performance depends strongly on the geometry and stacking pattern of the rock-fabric units and on the location of production and injection wells.
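The semivariogram analysis mentioned above is a standard geostatistical tool; a minimal sketch (synthetic data, not the Lawyer Canyon outcrop) computes the empirical semivariogram of a regularly sampled log-permeability transect, here an AR(1) series whose true variogram is 1 − 0.9^h:

```python
import numpy as np

rng = np.random.default_rng(4)

def semivariogram(z, max_lag):
    """Empirical semivariogram of a regularly sampled 1-D transect:
    gamma(h) = 0.5 * mean((z[i+h] - z[i])^2) for h = 1..max_lag."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range((1), max_lag + 1)])

# Synthetic log-permeability transect: a unit-variance AR(1) process, whose
# semivariogram is gamma(h) = 1 - rho**h (exponential-like, sill = 1)
n, rho = 5000, 0.9
e = rng.standard_normal(n)
z = np.empty(n)
z[0] = e[0]
for i in range(1, n):
    z[i] = rho * z[i - 1] + np.sqrt(1 - rho**2) * e[i]

gamma = semivariogram(z, 30)
```

The sill (plateau) estimates the variance and the lag at which the curve flattens estimates the correlation length, the two parameters whose block-size dependence the abstract reports.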
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media; however, their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, called the Non-Stochastic Propagation by Iterative Radiance Evaluation (NSPIRE) method. Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error of less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
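The H-block architecture is specific to the paper, but the particle-filter half of the approach is standard; as a hedged sketch, here is a bootstrap particle filter for a hypothetical scalar random-walk state with Gaussian observation noise (model and parameters are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)

def particle_filter(y, n_particles, rng, q=0.1, r=0.5):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    x = rng.standard_normal(n_particles)       # initial particle cloud
    estimates = []
    for yt in y:
        x = x + np.sqrt(q) * rng.standard_normal(n_particles)   # propagate
        w = np.exp(-0.5 * (yt - x) ** 2 / r)                    # likelihood weights
        w = w / w.sum()
        estimates.append(float(w @ x))                          # posterior mean
        x = rng.choice(x, size=n_particles, p=w)                # resample
    return np.array(estimates)

# simulate a hidden random walk and noisy observations of it
T = 200
x_true = np.cumsum(np.sqrt(0.1) * rng.standard_normal(T))
y = x_true + np.sqrt(0.5) * rng.standard_normal(T)
x_hat = particle_filter(y, 1000, rng)
```

In the dual-control setting, such a filter supplies the belief state that the forward dynamic programming stage then propagates through candidate control policies.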
Earthquake nucleation in a stochastic fault model of globally coupled units with interaction delays
NASA Astrophysics Data System (ADS)
Vasović, Nebojša; Kostić, Srđan; Franović, Igor; Todorović, Kristina
2016-09-01
In the present paper we analyze the dynamics of fault motion by considering the delayed interaction of 100 all-to-all coupled blocks with a rate-dependent friction law in the presence of random seismic noise. Such a model describes real fault motion sufficiently well; its prevailing stochastic nature is implied by surrogate data analysis of available GPS measurements of active fault movement. The interaction of blocks in the analyzed model is studied as a function of time delay, observed both for the dynamics of individual faults and for phenomenological models. The model is examined as a system of all-to-all coupled blocks, following the typical treatment of compound faults as a complex of globally coupled segments. We apply numerical methods to show that there are local bifurcations from an equilibrium state to periodic oscillations, with an occurrence of irregular aperiodic behavior when initial conditions are set away from the equilibrium point. Such behavior indicates the possible existence of a bi-stable dynamical regime, due to the effect of the introduced seismic noise or the existence of a global attractor. The latter assumption is additionally confirmed by analyzing the corresponding mean-field approximated model. In this bi-stable regime, the distribution of event magnitudes follows the Gutenberg-Richter power law with satisfactory statistical accuracy, including a b-value within the observed real-world range.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization takes the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and a simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
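The key point, that optimizing on expected profits from simulations differs from optimizing on expected grades, comes from the nonlinearity of the profit function (a block below cutoff goes to waste). A toy numeric sketch with entirely hypothetical unit economics makes the gap concrete:

```python
import numpy as np

rng = np.random.default_rng(6)

price, mining_cost, treatment_cost = 4.0, 1.0, 2.0   # hypothetical unit economics

def block_profit(grade):
    # a block is treated only if its revenue covers the treatment cost,
    # otherwise it is mined as waste
    revenue = price * grade
    return np.where(revenue > treatment_cost,
                    revenue - treatment_cost - mining_cost,
                    -mining_cost)

# conditional simulations of one block's grade (lognormal, illustrative numbers)
grades = rng.lognormal(mean=-0.7, sigma=0.8, size=10_000)

profit_of_expected_grade = float(block_profit(grades.mean()))   # classical input
expected_profit = float(block_profit(grades).mean())            # stochastic input
```

For this block the profit of the expected grade is negative while the expected profit over simulations is positive, so the two optimizations would classify the block differently; this is exactly why the stochastic pit is conditionally unbiased while the classical one is not.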
Incorporating Covariates into Stochastic Blockmodels
ERIC Educational Resources Information Center
Sweet, Tracy M.
2015-01-01
Social networks in education commonly involve some form of grouping, such as friendship cliques or teacher departments, and blockmodels are a type of statistical social network model that accommodate these grouping or blocks by assuming different within-group tie probabilities than between-group tie probabilities. We describe a class of models,…
Bayesian Estimation and Inference Using Stochastic Electronics
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits does not affect system performance because information is encoded in a bit stream.
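The bit-stream robustness claim is easy to see in simulation. The sketch below (a generic stochastic-computing illustration, not the paper's circuits) encodes probabilities as Bernoulli bit streams, multiplies them with a bitwise AND, and shows that flipping a few random bits barely perturbs the decoded value:

```python
import numpy as np

rng = np.random.default_rng(7)

def to_stream(p, n, rng):
    """Encode a probability p as a Bernoulli bit stream of length n."""
    return rng.random(n) < p

n = 100_000
a = to_stream(0.8, n, rng)
b = to_stream(0.5, n, rng)

# in unipolar stochastic computing, multiplying two numbers is a bitwise AND
prod = float(np.mean(a & b))          # decodes to roughly 0.8 * 0.5 = 0.4

# robustness: flipping 100 random bits moves the decoded value by at most 100/n
flip = rng.choice(n, size=100, replace=False)
a_noisy = a.copy()
a_noisy[flip] ^= True
```

A single flipped bit changes the decoded value by only 1/n, which is the property that makes such circuits tolerant of low noise margins and cosmic-ray upsets.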
Snipas, Mindaugas; Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Paulauskas, Nerijus; Bukauskas, Feliksas F
2015-01-01
The primary goal of this work was to study the advantages of numerical methods used to create continuous-time Markov chain (CTMC) models of the voltage gating of gap junction (GJ) channels composed of connexin proteins. This task was accomplished by describing the gating of GJs with the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storage of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. This, in turn, enabled us to develop efficient numerical methods for the steady-state solution of CTMC models and to reduce the CPU time needed to solve them by a factor of ~20.
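The SAN machinery is not reproducible from the abstract, but the steady-state computation it accelerates is standard: the stationary distribution π of a CTMC solves πQ = 0 with Σπ = 1. A minimal sketch on a hypothetical 3-state gating chain (the generator entries are illustrative):

```python
import numpy as np

# Hypothetical 3-state channel-gating CTMC; rows of the generator Q sum to zero
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])

# The steady state pi satisfies pi @ Q = 0 with sum(pi) = 1. One balance
# equation is redundant, so replace it with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
```

For realistic channel models with many subunit states, the generator is far too large for dense solves like this; exploiting the block (Kronecker) structure that the SAN formalism exposes is what yields the reported ~20x speed-up.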
NASA Astrophysics Data System (ADS)
De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan
2016-11-01
A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.
GROUNDWARS 4.2 Reference Guide
1991-12-01
Keywords: land duel, homogeneous forces, TANKWARS, target acquisition, combat survivability. GROUNDWARS is a stochastic, two-sided, event-sequenced weapon systems effectiveness model which provides the results of a land duel between two homogeneous forces.
Finite-size analysis of the detectability limit of the stochastic block model
NASA Astrophysics Data System (ADS)
Young, Jean-Gabriel; Desrosiers, Patrick; Hébert-Dufresne, Laurent; Laurence, Edward; Dubé, Louis J.
2017-06-01
It has been shown in recent years that the stochastic block model is sometimes undetectable in the sparse limit, i.e., that no algorithm can identify a partition correlated with the partition used to generate an instance, if the instance is sparse enough and infinitely large. In this contribution, we treat the finite case explicitly, using arguments drawn from information theory and statistics. We give a necessary condition for finite-size detectability in the general SBM. We then distinguish the concept of average detectability from the concept of instance-by-instance detectability and give explicit formulas for both definitions. Using these formulas, we prove that there exist large equivalence classes of parameters, where widely different network ensembles are equally detectable with respect to our definitions of detectability. In an extensive case study, we investigate the finite-size detectability of a simplified variant of the SBM, which encompasses a number of important models as special cases. These models include the symmetric SBM, the planted coloring model, and more exotic SBMs not previously studied. We conclude with three appendices, where we study the interplay of noise and detectability, establish a connection between our information-theoretic approach and random matrix theory, and provide proofs of some of the more technical results.
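For orientation, the infinite-size benchmark against which these finite-size results are set is the Kesten-Stigum threshold of the symmetric SBM (a standard asymptotic result in this literature; notation ours):

```latex
% Symmetric SBM with k equal-sized groups, mean within-degree c_in and
% mean between-degree c_out: detection is asymptotically possible above
\[
  \lvert c_{\mathrm{in}} - c_{\mathrm{out}} \rvert > k\sqrt{c},
  \qquad
  c = \frac{c_{\mathrm{in}} + (k-1)\,c_{\mathrm{out}}}{k},
\]
```

The contribution of the paper is to ask what replaces this sharp condition when the network is finite.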
Universal phase transition in community detectability under a stochastic block model.
Chen, Pin-Yu; Hero, Alfred O
2015-03-01
We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p_1 p_2), where p_i (i = 1, 2) is the within-community edge connection probability of community i. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.
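The subcritical regime is easy to reproduce numerically. The sketch below draws a two-community SBM with p = 0.01 well below p* = √(p_1 p_2) = 0.15 and recovers the planted partition from the sign of the leading eigenvector of the modularity matrix (computed by shifted power iteration; all sizes and probabilities are illustrative):

```python
import random

random.seed(7)

n1, n2 = 100, 100                 # community sizes (illustrative)
p1 = p2 = 0.15                    # within-community edge probabilities
p = 0.01                          # intercommunity probability, well below sqrt(p1*p2)
n = n1 + n2
planted = [0] * n1 + [1] * n2

# Draw an SBM adjacency matrix.
A = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        pij = (p1 if planted[i] == planted[j] == 0 else
               p2 if planted[i] == planted[j] == 1 else p)
        if random.random() < pij:
            A[i][j] = A[j][i] = 1

deg = [sum(row) for row in A]
two_m = sum(deg)

# Leading eigenvector of the modularity matrix B = A - d d^T / 2m, via power
# iteration on B + shift*I (the shift keeps the top eigenvalue dominant).
shift = float(max(deg))
v = [random.uniform(-1, 1) for _ in range(n)]
for _ in range(100):
    dv = sum(d * x for d, x in zip(deg, v))
    w = []
    for i in range(n):
        bi = sum(A[i][j] * v[j] for j in range(n)) - deg[i] * dv / two_m
        w.append(bi + shift * v[i])
    norm = max(abs(x) for x in w)
    v = [x / norm for x in w]

labels = [1 if x > 0 else 0 for x in v]
agree = sum(l == t for l, t in zip(labels, planted)) / n
accuracy = max(agree, 1 - agree)   # the sign of the eigenvector is arbitrary
print(accuracy)
```

Raising p toward √(p_1 p_2) and rerunning shows the accuracy collapsing toward 0.5, i.e. the transition the paper characterizes.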
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.
2013-04-01
During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test region in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. The number of landslide areas selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and a length-to-width ratio. Landslide intersections with roads were then assessed, and indices such as the location, number and size of road blockages were recorded. The GRASS GIS model was run 1000 times in a Monte Carlo type simulation.
Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages ranges from 6 to 17, resulting in one road blockage every 24-67 km of road. The average length of road blocked was 33 m. As we progress with model development and more sophisticated network analysis, we believe this semi-stochastic modelling approach will help civil protection agencies estimate the probability and extent of road network damage (number and size of road blockages) under landslide triggering event scenarios of different magnitudes.
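The area-sampling step can be sketched as follows. For simplicity this uses a two-parameter (zero-location) inverse gamma whose shape reproduces the -2.4 power-law tail and whose mode is pinned at 400 m2; the study's calibrated three-parameter fit is not reproduced here:

```python
import math
import random

random.seed(42)

# Inverse-gamma density f(A) ∝ A^-(a+1) exp(-s/A):
# tail exponent -(a+1) = -2.4  =>  a = 1.4; mode = s/(a+1) = 400 m^2  =>  s = 960.
a, s = 1.4, 960.0

def invgamma_pdf(x):
    return s ** a / math.gamma(a) * x ** (-(a + 1)) * math.exp(-s / x)

def sample_area():
    # If G ~ Gamma(a, 1), then s / G is inverse-gamma distributed with shape a.
    return s / random.gammavariate(a, 1.0)

# One triggering-event iteration: 79 landslide areas (1 landslide per km^2
# over the 79 km^2 region, as in the study).
areas = [sample_area() for _ in range(79)]

# The rollover: the density peaks at the mode s / (a + 1) = 400 m^2.
mode = s / (a + 1)
print(mode, invgamma_pdf(mode))

# The tail: the log-log slope of the pdf approaches -(a + 1) = -2.4.
x1, x2 = 1e5, 1e6
slope = (math.log(invgamma_pdf(x2)) - math.log(invgamma_pdf(x1))) \
        / (math.log(x2) - math.log(x1))
print(round(slope, 2))   # -2.4
```

Each sampled area would then be assigned a length-to-width ratio and dropped on the susceptibility map, as described above.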
Quantification of Hepatitis C Virus Cell-to-Cell Spread Using a Stochastic Modeling Approach
Martin, Danyelle N.; Perelson, Alan S.; Dahari, Harel
2015-01-01
It has been proposed that viral cell-to-cell transmission plays a role in establishing and maintaining chronic infections. Thus, understanding the mechanisms and kinetics of cell-to-cell spread is fundamental to elucidating the dynamics of infection and may provide insight into factors that determine chronicity. Because hepatitis C virus (HCV) spreads from cell to cell and has a chronicity rate of up to 80% in exposed individuals, we examined the dynamics of HCV cell-to-cell spread in vitro and quantified the effect of inhibiting individual host factors. Using a multidisciplinary approach, we performed HCV spread assays and assessed the appropriateness of different stochastic models for describing HCV focus expansion. To evaluate the effect of blocking specific host cell factors on HCV cell-to-cell transmission, assays were performed in the presence of blocking antibodies and/or small-molecule inhibitors targeting different cellular HCV entry factors. In all experiments, HCV-positive cells were identified by immunohistochemical staining and the number of HCV-positive cells per focus was assessed to determine focus size. We found that HCV focus expansion can best be explained by mathematical models assuming focus size-dependent growth. Consistent with previous reports suggesting that some factors impact HCV cell-to-cell spread to different extents, modeling results estimate a hierarchy of efficacies for blocking HCV cell-to-cell spread when targeting different host factors (e.g., CLDN1 > NPC1L1 > TfR1). This approach can be adapted to describe focus expansion dynamics under a variety of experimental conditions as a means to quantify cell-to-cell transmission and assess the impact of cellular factors, viral factors, and antivirals. IMPORTANCE The ability of viruses to efficiently spread by direct cell-to-cell transmission is thought to play an important role in the establishment and maintenance of viral persistence.
As such, elucidating the dynamics of cell-to-cell spread and quantifying the effect of blocking the factors involved has important implications for the design of potent antiviral strategies and controlling viral escape. Mathematical modeling has been widely used to understand HCV infection dynamics and treatment response; however, these models typically assume only cell-free virus infection mechanisms. Here, we used stochastic models describing focus expansion as a means to understand and quantify the dynamics of HCV cell-to-cell spread in vitro and determined the degree to which cell-to-cell spread is reduced when individual HCV entry factors are blocked. The results demonstrate the ability of this approach to recapitulate and quantify cell-to-cell transmission, as well as the impact of specific factors and potential antivirals. PMID:25833046
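The distinction between size-independent and focus size-dependent growth can be illustrated with a minimal Gillespie-style pure-birth simulation. This is only the flavor of the comparison, not the authors' fitted models; the rates and horizon are illustrative:

```python
import random

random.seed(1)

def grow_focus(rate_fn, t_end):
    """Gillespie simulation of a pure-birth process: a focus of n infected
    cells gains one cell at rate rate_fn(n); returns the size at t_end."""
    n, t = 1, 0.0
    while True:
        t += random.expovariate(rate_fn(n))
        if t > t_end:
            return n
        n += 1

beta = 1.0
trials = 400
# Size-independent growth: mean focus size grows linearly (~1 + beta*t).
const = [grow_focus(lambda n: beta, 5.0) for _ in range(trials)]
# Size-dependent growth: mean focus size grows exponentially (~exp(beta*t)).
sizedep = [grow_focus(lambda n: beta * n, 5.0) for _ in range(trials)]

mean_const = sum(const) / trials
mean_sizedep = sum(sizedep) / trials
print(mean_const, mean_sizedep)
```

Fitting which of these growth laws best matches observed focus-size distributions, with and without entry-factor blocking, is essentially the model-selection exercise the paper performs.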
NASA Astrophysics Data System (ADS)
Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua
2018-03-01
By utilizing the technology of compressive sensing (CS), channel estimation methods can reduce the number of pilots and improve spectrum efficiency. In this work, channel estimation and pilot design are explored with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed pilot design algorithm with superimposed pilots and the BSAMP algorithm provide better channel estimation than existing methods.
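A toy version of the stochastic-search step can be sketched as follows. It minimizes the plain (not block) coherence of a partial DFT measurement matrix over random pilot patterns, which stands in for the block-coherence criterion of the paper; all dimensions are illustrative:

```python
import cmath
import math
import random

random.seed(3)

N, K, P = 64, 16, 8      # subcarriers, channel taps, pilots (illustrative)

def coherence(pilots):
    """Max normalized inner product between distinct columns of the
    partial DFT matrix formed by the chosen pilot rows."""
    cols = [[cmath.exp(-2j * math.pi * p * k / N) for p in pilots]
            for k in range(K)]
    worst = 0.0
    for a in range(K):
        for b in range(a + 1, K):
            ip = abs(sum(x * y.conjugate()
                         for x, y in zip(cols[a], cols[b]))) / P
            worst = max(worst, ip)
    return worst

# Stochastic search: keep the pilot pattern with the smallest coherence.
best = sorted(random.sample(range(N), P))
best_mu = coherence(best)
for _ in range(300):
    cand = sorted(random.sample(range(N), P))
    mu = coherence(cand)
    if mu < best_mu:
        best, best_mu = cand, mu

print(best, best_mu)
```

Lower coherence makes sparse recovery (here, greedy pursuit such as BSAMP) better conditioned; the paper's scheme applies the same idea to block coherence of the full aggregate matrix.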
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir
2014-08-01
In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô-Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. The rate of convergence of the proposed method is also considered and shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on some examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
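The payoff of the lower-triangular structure is that the resulting systems are solved by forward substitution in O(n^2) operations, with no factorization step. A generic sketch (not tied to the hat-function matrices themselves):

```python
def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L in O(n^2) operations."""
    n = len(b)
    x = []
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))
        x.append((b[i] - s) / L[i][i])
    return x

# Small lower-triangular example.
L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 1.0, 5.0]]
b = [2.0, 4.0, 15.0]
x = forward_substitution(L, b)
print(x)   # [1.0, 1.0, 2.0]
```

Each unknown is computed once from previously solved ones, which is why the triangular reduction of the integral equation is attractive.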
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output feedback m-block backstepping controller with a linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in the probability sense and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed, and the effectiveness of the proposed method is demonstrated using a simulated example.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), together with disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel based methods), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws, such as k-nearest neighbor (k-NN), matched block bootstrap (MABB) and nonparametric disaggregation models; and (iii) hybrid models, which blend parametric and nonparametric models advantageously to model the streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and the critical drought characteristics has posed a persistent challenge to the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated by the accuracy of prediction of the water-use characteristics, which requires a large number of trial simulations and the inspection of many plots and tables. Even then, accurate prediction of the storage and the critical drought characteristics may not be ensured.
In this study, a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric PAR(1) model and the matched block bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps to reduce the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber of the USA. For both rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where simultaneous exploration of both parametric and non-parametric components is done) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
The Two-On-One Stochastic Duel
1983-12-01
ACN 67500, TRASANA-TR-43-83. The Two-On-One Stochastic Duel. Final report, prepared by A.V. Gafarian and C.J. Ancker, Jr., December 1983. Keywords: stochastic duels, stochastic processes, attrition.
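As background to the two-on-one analysis, the classical one-on-one Markov duel already has a closed form: with exponential interfiring times at rates r_A, r_B and single-shot kill probabilities p_A, p_B, each side's kills form a Poisson process of rate p·r, so P(A wins) = p_A r_A / (p_A r_A + p_B r_B). A quick Monte Carlo check of that baseline (parameters illustrative):

```python
import random

random.seed(11)

def duel(r_a, p_a, r_b, p_b):
    """One Markov duel: each side's kills arrive as a Poisson process of
    rate p*r, so the time to each side's first kill is exponential."""
    ta = random.expovariate(p_a * r_a)
    tb = random.expovariate(p_b * r_b)
    return ta < tb   # True if A scores the first kill and wins

r_a, p_a = 2.0, 0.3   # A: 2 shots per unit time, kill probability 0.3
r_b, p_b = 1.0, 0.4   # B: 1 shot per unit time, kill probability 0.4
trials = 20000
wins = sum(duel(r_a, p_a, r_b, p_b) for _ in range(trials)) / trials
theory = (p_a * r_a) / (p_a * r_a + p_b * r_b)
print(wins, theory)   # both near 0.6
```

The two-on-one problem treated in the report generalizes this by making the lone side's firing process split between two opponents, which is what destroys the simple closed form.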
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.
Slip Rates of Main Active Fault Zones Through Turkey Inferred From GPS Observations
NASA Astrophysics Data System (ADS)
Ozener, H.; Aktug, B.; Dogru, A.; Tasci, L.; Acar, M.; Emre, O.; Yilmaz, O.; Turgut, B.; Halicioglu, K.; Sabuncu, A.; Bal, O.; Eraslan, A.
2015-12-01
The Active Fault Map of Turkey was revised and published by the General Directorate of Mineral Research and Exploration in 2012. This map reveals that there are about 500 faults that can generate earthquakes. In order to understand the earthquake potential of these faults, their slip rates need to be determined. Although many regional and local studies were performed in the past, the slip rates of the active faults in Turkey have not been determined. In this study, block modelling, the most common method for producing slip rates, is carried out. The GPS velocities required for block modelling are being compiled from published studies and from raw data, and a combined velocity field is then formed. To form a homogeneous velocity field, different stochastic models will be used and the optimal velocity field will be achieved. In the literature, GPS site velocities computed for different purposes and published are combined globally, and this combined velocity field is used in the analysis of strain accumulation. It is also aimed to develop optimal stochastic models to combine the velocity data. Real-time, survey-mode and published GPS observations are being combined in this study, and we are also performing new GPS observations. Furthermore, micro blocks and main fault zones from the Active Fault Map of Turkey will be determined, and the homogeneous velocity field will be used to infer slip rates of these active faults. Here, we present the results of the first year of the study. This study is being supported by THE SCIENTIFIC AND TECHNOLOGICAL RESEARCH COUNCIL OF TURKEY (TUBITAK)-CAYDAG with grant no. 113Y430.
1990-10-01
Keywords: CAT, Canadian Army Trophy, international competition, gunnery, tank. This study uses a custom stochastic computer model designed to replicate the effects of accuracy, firing speed, and detection on score during the CAT competition.
Protocols for Copying and Proofreading in Template-Assisted Polymerization
NASA Astrophysics Data System (ADS)
Pigolotti, Simone; Sartori, Pablo
2016-03-01
We discuss how information encoded in a template polymer can be stochastically copied into a copy polymer. We consider four different stochastic copy protocols of increasing complexity, inspired by building blocks of the mRNA translation pathway. In the first protocol, monomer incorporation occurs in a single stochastic transition. We then move to a more elaborate protocol in which an intermediate step can be used for error correction. Finally, we discuss the operating regimes of two kinetic proofreading protocols: one in which proofreading acts from the final copying step, and one in which it acts from an intermediate step. We review known results for these models and, in some cases, extend them to analyze all possible combinations of energetic and kinetic discrimination. We show that, in each of these protocols, only a limited number of these combinations leads to an improvement of the overall copying accuracy.
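A standard result for this class of models (notation ours, a hedged summary rather than the paper's exact expressions) gives a feel for the trade-off in the single-transition protocol:

```latex
% Single-transition protocol with purely energetic discrimination:
% wrong monomers are incorporated at a rate suppressed by e^{-\Delta},
% so the copying error is
\[
  \eta \;=\; \frac{e^{-\Delta}}{1 + e^{-\Delta}} \;=\; \frac{1}{1 + e^{\Delta}} .
\]
```

Proofreading from an intermediate step can, in the ideal limit, square this discrimination factor (η ~ e^{-2Δ}) at the price of additional energy dissipation, which is the regime analysis the abstract refers to.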
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim
2016-04-01
The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic sector as resolution increases. More specifically, the well-known negative bias in atmospheric blocking over Europe is resolved. High resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low resolution simulations. It is shown that including stochastic parameterization in the low resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small scale processes on large scale climate variability either explicitly (with high resolution simulations) or stochastically (in low resolution simulations).
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Accommodating the ecological fallacy in disease mapping in the absence of individual exposures.
Wang, Feifei; Wang, Jian; Gelfand, Alan; Li, Fan
2017-12-30
In health exposure modeling, in particular, disease mapping, the ecological fallacy arises because the relationship between aggregated disease incidence on areal units and average exposure on those units differs from the relationship between the event of individual incidence and the associated individual exposure. This article presents a novel modeling approach to address the ecological fallacy in the least informative data setting. We assume the known population at risk with an observed incidence for a collection of areal units and, separately, environmental exposure recorded during the period of incidence at a collection of monitoring stations. We do not assume any partial individual level information or random allocation of individuals to observed exposures. We specify a conceptual incidence surface over the study region as a function of an exposure surface resulting in a stochastic integral of the block average disease incidence. The true block level incidence is an unavailable Monte Carlo integration for this stochastic integral. We propose an alternative manageable Monte Carlo integration for the integral. Modeling in this setting is immediately hierarchical, and we fit our model within a Bayesian framework. To alleviate the resulting computational burden, we offer 2 strategies for efficient model fitting: one is through modularization, the other is through sparse or dimension-reduced Gaussian processes. We illustrate the performance of our model with simulations based on a heat-related mortality dataset in Ohio and then analyze associated real data. Copyright © 2017 John Wiley & Sons, Ltd.
Theory of Stochastic Duels - Miscellaneous Results
1978-03-01
Technical Memorandum 2-77, "Theory of Stochastic Duels - Miscellaneous Results", USA TRASANA. This memorandum presents particular applications of various aspects of the theory of stochastic duels. Topics include the marksman problem with Erlang-n firing time distribution, a tactical equity duel with Erlang-2 firing times, and a different tactical equity duel.
Procedure for assessing the performance of a rockfall fragmentation model
NASA Astrophysics Data System (ADS)
Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert
2017-04-01
Rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. It is frequently observed that the detached rock mass becomes fragmented when it impacts the slope surface. Considering the fragmentation of the rockfall mass is critical for the calculation of block trajectories and their impact energies, to further assess their potential to cause damage and to design adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls, based on a lumped mass approach. In RockGIS, fragmentation initiates with the disaggregation of the detached rock mass through the pre-existing discontinuities just before the impact with the ground. An energy threshold is defined in order to determine whether the impacting blocks break or not. The distribution of the initial mass between a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires the calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during their propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined.
The criteria include: the position of the centre of gravity of the whole block distribution, the histogram of the runout of the blocks, the extent and boundaries of the young debris cover over the slope surface, the lateral dispersion of trajectories, the total number of blocks generated after fragmentation, the volume distribution of the generated fragments, the number of blocks and the volume passing a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the final parameters selected after the calibration process are a compromise that meets all the considered criteria. This methodology has been tested on some recent rockfalls where high fragmentation was observed. The RockGIS tool and the fragmentation laws, using data collected from recent rockfalls, have been developed within the RockRisk project (2014-2016, BIA2013-42582-P). This project was funded by the Spanish Ministerio de Economía y Competitividad.
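The power-law fragmentation step can be sketched with inverse-transform sampling; the exponent, volume cutoffs and mass-conservation rule below are illustrative assumptions, not the calibrated RockGIS laws:

```python
import random

random.seed(5)

def sample_power_law(alpha, v_min, v_max):
    """Inverse-transform sample from p(v) ∝ v^-alpha on [v_min, v_max], alpha != 1."""
    u = random.random()
    a = 1.0 - alpha
    return (v_min ** a + u * (v_max ** a - v_min ** a)) ** (1.0 / a)

def fragment(total_volume, alpha=2.0, v_min=0.01, v_max=10.0):
    """Draw fragment volumes until the detached mass is used up; the final
    fragment takes the remainder so that total volume is conserved."""
    fragments = []
    remaining = total_volume
    while remaining > v_min:
        v = min(sample_power_law(alpha, v_min, v_max), remaining)
        fragments.append(v)
        remaining -= v
    if remaining > 0:
        fragments.append(remaining)
    return fragments

frags = fragment(50.0)   # fragment a hypothetical 50 m^3 detached block
print(len(frags), sum(frags))
```

In the full model each fragment would then receive a trajectory within the breakage cone and be propagated down-slope for comparison against the performance criteria listed above.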
Precursor processes of human self-initiated action.
Khalighinejad, Nima; Schurger, Aaron; Desantis, Andrea; Zmigrod, Leor; Haggard, Patrick
2018-01-15
A gradual buildup of electrical potential over motor areas precedes self-initiated movements. Recently, such "readiness potentials" (RPs) were attributed to stochastic fluctuations in neural activity. We developed a new experimental paradigm that operationalized self-initiated actions as endogenous 'skip' responses while waiting for target stimuli in a perceptual decision task. We compared these to a block of trials where participants could not choose when to skip, but were instead instructed to skip. Frequency and timing of motor action were therefore balanced across blocks, so that conditions differed only in how the timing of skip decisions was generated. We reasoned that across-trial variability of EEG could carry as much information about the source of skip decisions as the mean RP. EEG variability decreased more markedly prior to self-initiated compared to externally-triggered skip actions. This convergence suggests a consistent preparatory process prior to self-initiated action. A leaky stochastic accumulator model could reproduce this convergence given the additional assumption of a systematic decrease in input noise prior to self-initiated actions. Our results may provide a novel neurophysiological perspective on the topical debate regarding whether self-initiated actions arise from a deterministic neurocognitive process, or from neural stochasticity. We suggest that the key precursor of self-initiated action may manifest as a reduction in neural noise. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
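The modelling assumption can be illustrated with a minimal leaky stochastic accumulator (Euler-Maruyama discretization of dx = (I - kx)dt + σ dW; all coefficients are illustrative, not fitted to the EEG data): reducing the input noise shrinks the across-trial variability of the accumulator state, mirroring the reported convergence of EEG variability before self-initiated actions.

```python
import random

random.seed(8)

def simulate(sigma, steps=500, dt=0.005, I=0.8, k=0.6, x0=0.0):
    """One Euler-Maruyama path of the leaky accumulator dx = (I - k x) dt + sigma dW."""
    x = x0
    for _ in range(steps):
        x += (I - k * x) * dt + sigma * random.gauss(0.0, dt ** 0.5)
    return x

def across_trial_sd(sigma, trials=200):
    """Standard deviation of the final accumulator state across trials."""
    final = [simulate(sigma) for _ in range(trials)]
    m = sum(final) / trials
    return (sum((f - m) ** 2 for f in final) / trials) ** 0.5

sd_high = across_trial_sd(sigma=0.5)   # baseline input noise
sd_low = across_trial_sd(sigma=0.1)    # reduced noise, as hypothesized
print(sd_high, sd_low)
```

For this process the stationary across-trial spread is σ/√(2k), so lowering σ directly lowers the across-trial variability, which is the signature the authors look for in the EEG.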
Many-Versus-Many Stochastic Duels
1984-01-14
Many-Versus-Many Stochastic Duels. Final report, 21 September 1981 through 20 September 1984, by C.J. Ancker, Jr. and A.V. Gafarian, U.S. Army Research Office Contract DAAG29-81-, January 14, 1985. Keywords: stochastic duels, many-versus-many, bibliography.
Stimulus Characteristics for Vestibular Stochastic Resonance to Improve Balance Function
NASA Technical Reports Server (NTRS)
Mulavara, Ajitkumar; Fiedler, Matthew; Kofman, Igor; Peters, Brian; Wood, Scott; Serrado, Jorge; Cohen, Helen; Reschke, Millard; Bloomberg, Jacob
2010-01-01
Stochastic resonance (SR) is a mechanism by which noise can enhance the response of neural systems to relevant sensory signals. Studies have shown that imperceptible stochastic vestibular electrical stimulation, when applied to normal young and elderly subjects, significantly improved their ocular stabilization reflexes in response to whole-body tilt as well as balance performance during postural disturbances. The goal of this study was to optimize the amplitude characteristics of the stochastic vestibular signals for balance performance during standing on an unstable surface. Subjects performed a standard balance task of standing on a block of foam with their eyes closed. Bipolar stochastic electrical stimulation was applied to the vestibular system using constant current stimulation through electrodes placed over the mastoid process behind the ears. Amplitude of the signals varied in the range of 0-700 microamperes. Balance performance was measured using a force plate under the foam block, and inertial motion sensors were placed on the torso and head. Balance performance with stimulation was significantly greater (10%-25%) than with no stimulation. The signal amplitude at which performance was maximized was in the range of 100-300 microamperes. Optimization of the amplitude of the stochastic signals for maximizing balance performance will have a significant impact on development of vestibular SR as a unique system to aid recovery of function in astronauts after long-duration space flight or in patients with balance disorders.
Groupies in multitype random graphs.
Shang, Yilun
2016-01-01
A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
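The near-1/2 proportion is easy to probe numerically. The sketch below (stdlib Python only; the graph size and edge probability are illustrative choices, not taken from the paper) generates an Erdős-Rényi graph and counts the groupies:

```python
import random

def erdos_renyi(n, p, rng):
    """Generate adjacency lists for a G(n, p) random graph."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def groupie_fraction(adj):
    """Fraction of vertices whose degree is not less than the
    average degree of their neighbours (isolated vertices, which
    have no neighbour average, are excluded)."""
    count = total = 0
    for i, nbrs in enumerate(adj):
        if not nbrs:
            continue
        total += 1
        mean_nbr_deg = sum(len(adj[j]) for j in nbrs) / len(nbrs)
        if len(nbrs) >= mean_nbr_deg:
            count += 1
    return count / total

rng = random.Random(0)
adj = erdos_renyi(2000, 0.01, rng)
print(groupie_fraction(adj))  # typically close to 1/2
```

Replacing the single edge probability by a block-dependent matrix turns this into the multitype (stochastic block model) setting the paper analyzes.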
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Christensen, Hannah M.; Juricke, Stephan; Subramanian, Aneesh; Watson, Peter A. G.; Weisheimer, Antje; Palmer, Tim N.
2017-03-01
The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth system model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 up to 16 km). The project includes more than 120 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), together with coupled transient runs (1850-2100). A total of 20.4 million core hours have been used, made available from a single year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PB of output data have been produced on SuperMUC IBM Petascale System at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. About 140 TB of post-processed data are stored on the CINECA supercomputing centre archives and are freely accessible to the community thanks to an EUDAT data pilot project. This paper presents the technical and scientific set-up of the experiments, including the details on the forcing used for the simulations performed, defining the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given. An improvement in the simulation of Euro-Atlantic atmospheric blocking following resolution increase is observed. It is also shown that including stochastic parameterisation in the low-resolution runs helps to improve some aspects of the tropical climate - specifically the Madden-Julian Oscillation and the tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).
Nonparametric Bayesian inference of the microcanonical stochastic block model
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.
2017-01-01
A principled approach to characterize the hidden modular structure of networks is to formulate generative models and then infer their parameters from data. When the desired structure is composed of modules or "communities," a suitable choice for this task is the stochastic block model (SBM), where nodes are divided into groups, and the placement of edges is conditioned on the group memberships. Here, we present a nonparametric Bayesian method to infer the modular structure of empirical networks, including the number of modules and their hierarchical organization. We focus on a microcanonical variant of the SBM, where the structure is imposed via hard constraints, i.e., the generated networks are not allowed to violate the patterns imposed by the model. We show how this simple model variation allows simultaneously for two important improvements over more traditional inference approaches: (1) deeper Bayesian hierarchies, with noninformative priors replaced by sequences of priors and hyperpriors, which not only remove limitations that seriously degrade the inference on large networks but also reveal structures at multiple scales; (2) a very efficient inference algorithm that scales well not only for networks with a large number of nodes and edges but also with an unlimited number of modules. We show also how this approach can be used to sample modular hierarchies from the posterior distribution, as well as to perform model selection. We discuss and analyze the differences between sampling from the posterior and simply finding the single parameter estimate that maximizes it. Furthermore, we expose a direct equivalence between our microcanonical approach and alternative derivations based on the canonical SBM.
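For intuition about the generative model being inferred, the simpler canonical SBM (edges drawn independently with block-dependent probabilities, without the paper's microcanonical hard constraints) can be sampled in a few lines; the block sizes and probability matrix below are illustrative:

```python
import random

def sample_sbm(block_sizes, p_matrix, rng):
    """Draw one graph from a canonical stochastic block model:
    edge (i, j) is present with probability p_matrix[b(i)][b(j)],
    where b(i) is node i's group membership."""
    membership = []
    for b, size in enumerate(block_sizes):
        membership.extend([b] * size)
    n = len(membership)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_matrix[membership[i]][membership[j]]:
                edges.append((i, j))
    return membership, edges

rng = random.Random(1)
membership, edges = sample_sbm([50, 50], [[0.2, 0.02], [0.02, 0.2]], rng)
# Assortative structure: far more within-block than between-block edges
within = sum(1 for i, j in edges if membership[i] == membership[j])
print(within, len(edges) - within)
```

The microcanonical variant discussed in the paper instead fixes quantities such as the edge counts between groups exactly, rather than in expectation.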
Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome.
Pavlovic, Dragana M; Vértes, Petra E; Bullmore, Edward T; Schafer, William R; Nichols, Thomas E
2014-01-01
Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4-5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community with 9 blocks or groups which comprised a similar set of modules but also included a clearly defined core, made of 2 small groups. We show that the "core-in-modules" decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems.
Taylor, P. R.; Baker, R. E.; Simpson, M. J.; Yates, C. A.
2016-01-01
Numerous processes across both the physical and biological sciences are driven by diffusion. Partial differential equations are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic models to accurately capture the behaviour of a system, especially when the number of diffusing particles is low. The stochastic models we consider in this paper are ‘compartment-based’: the domain is discretized into compartments, and particles can jump between these compartments. Volume-excluding effects (crowding) can be incorporated by blocking movement with some probability. Recent work has established the connection between fine- and coarse-grained models incorporating volume exclusion, but only for uniform lattices. In this paper, we consider non-uniform, hybrid lattices that incorporate both fine- and coarse-grained regions, and present two different approaches to describe the interface of the regions. We test both techniques in a range of scenarios to establish their accuracy, benchmarking against fine-grained models, and show that the hybrid models developed in this paper can be significantly faster to simulate than the fine-grained models in certain situations and are at least as fast otherwise. PMID:27383421
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
Manual choice reaction times in the rate-domain
Harris, Christopher M.; Waddington, Jonathan; Biscione, Valerio; Manzi, Sean
2014-01-01
Over the last 150 years, human manual reaction times (RTs) have been recorded countless times. Yet, our understanding of them remains remarkably poor. RTs are highly variable with positively skewed frequency distributions, often modeled as an inverse Gaussian distribution reflecting a stochastic rise to threshold (diffusion process). However, latency distributions of saccades are very close to the reciprocal Normal, suggesting that "rate" (reciprocal RT) may be the more fundamental variable. We explored whether this phenomenon extends to choice manual RTs. We recorded two-alternative choice RTs from 24 subjects, each with 4 blocks of 200 trials with two task difficulties (easy vs. difficult discrimination) and two instruction sets (urgent vs. accurate). We found that rate distributions were, indeed, very close to Normal, shifting to lower rates with increasing difficulty and accuracy, and for some blocks they appeared to become left-truncated, but still close to Normal. Using autoregressive techniques, we found temporal sequential dependencies for lags of at least 3. We identified a transient and steady-state component in each block. Because rates were Normal, we were able to estimate autoregressive weights using the Box-Jenkins technique, and convert to a moving average model using z-transforms to show explicit dependence on stimulus input. We also found a spatial sequential dependence for the previous 3 lags depending on whether the laterality of previous trials was repeated or alternated. This was partially dissociated from temporal dependency as it only occurred in the easy tasks. We conclude that 2-alternative choice manual RT distributions are close to reciprocal Normal and not the inverse Gaussian. This is not consistent with stochastic rise to threshold models, and we propose a simple optimality model in which reward is maximized to yield an optimal rate, and hence an optimal time to respond. We discuss how it might be implemented. PMID:24959134
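The "reciprocal Normal" idea can be illustrated by drawing rates from a (truncated) Normal distribution and inverting them; the parameters below are illustrative, not fitted to the paper's data:

```python
import random

def sample_rts(mu_rate, sigma_rate, n, rng, floor=1e-3):
    """Draw reaction times whose reciprocal ('rate') is Normal:
    RT = 1 / rate. Rates at or below `floor` are discarded, since
    non-positive rates are physically meaningless."""
    rts = []
    while len(rts) < n:
        rate = rng.gauss(mu_rate, sigma_rate)
        if rate > floor:
            rts.append(1.0 / rate)
    return rts

rng = random.Random(4)
rts = sample_rts(mu_rate=2.5, sigma_rate=0.5, n=20000, rng=rng)  # rates in 1/s
mean = sum(rts) / len(rts)
median = sorted(rts)[len(rts) // 2]
# A Normal rate distribution yields positively skewed RTs (mean > median),
# the hallmark shape of empirical RT distributions.
print(mean > median)
```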
Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S
2018-06-21
The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
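A minimal sketch of the burst-and-project loop, using a single decay reaction A -> 0 as a stand-in for a rule-based system (the reaction, rates, and step sizes are illustrative choices, not from the paper):

```python
import math
import random

def gillespie_burst(x, k, t_burst, rng):
    """Exact stochastic simulation of the decay reaction A -> 0
    (propensity k*x), run for a short burst of duration t_burst."""
    t = 0.0
    while x > 0:
        dt = -math.log(1.0 - rng.random()) / (k * x)
        if t + dt > t_burst:
            break
        t += dt
        x -= 1
    return x

def equation_free(x0, k, t_end, t_burst, t_project, rng):
    """Alternate short exact bursts with coarse projection steps:
    estimate dx/dt by finite differencing over the burst, then
    jump the coarse variable forward by t_project (projective Euler)."""
    x, t = float(x0), 0.0
    while t < t_end:
        x_new = gillespie_burst(int(round(x)), k, t_burst, rng)
        deriv = (x_new - x) / t_burst            # finite-difference estimate
        x = max(0.0, x_new + deriv * t_project)  # projection step
        t += t_burst + t_project
    return x

rng = random.Random(2)
x_final = equation_free(x0=10000, k=1.0, t_end=2.0,
                        t_burst=0.05, t_project=0.15, rng=rng)
# Deterministic limit for comparison: 10000 * exp(-2) ~ 1353
print(round(x_final))
```

Here three quarters of each cycle is covered by the cheap projection step rather than by firing individual reaction events, which is the source of the speed-up; the projection introduces a discretization error controlled by t_project.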
Observation and analysis of the Coulter effect through carbon nanotube and graphene nanopores.
Agrawal, Kumar Varoon; Drahushuk, Lee W; Strano, Michael S
2016-02-13
Carbon nanotubes (CNTs) and graphene are the rolled and flat analogues of graphitic carbon, respectively, with hexagonal crystalline lattices, and show exceptional molecular transport properties. The empirical study of a single isolated nanopore requires, as evidence, the observation of stochastic, telegraphic noise from a blocking molecule commensurate in size with the pore. This standard is used ubiquitously in patch clamp studies of single, isolated biological ion channels and a wide range of inorganic, synthetic nanopores. In this work, we show that observation and study of stochastic fluctuations for carbon nanopores, both CNTs and graphene-based, enable precision characterization of pore properties that is otherwise unattainable. In the case of voltage clamp measurements of long (0.5-1 mm) CNTs between 0.9 and 2.2 nm in diameter, Coulter blocking of cationic species reveals the complex structuring of the fluid phase for confined water in this diameter range. In the case of graphene, we have pioneered the study and the analysis of stochastic fluctuations in gas transport from a pressurized, graphene-covered micro-well compartment that reveal switching between different values of the membrane permeance attributed to chemical rearrangements of individual graphene pores. This analysis remains the only way to study such single isolated graphene nanopores under these realistic transport conditions of pore rearrangements, in keeping with the thesis of this work. In summary, observation and analysis of Coulter blocking or stochastic fluctuations of permeating flux is an invaluable tool to understand graphene and graphitic nanopores including CNTs. © 2015 The Author(s).
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
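As a concrete baseline for the partitioning problem discussed here, plain spectral bisection on a two-block stochastic block model, splitting by the sign of the second adjacency eigenvector, can be sketched with power iteration and deflation (graph parameters are illustrative):

```python
import random

def sbm_adjacency(n_per_block, p_in, p_out, rng):
    """Dense adjacency matrix of a two-block planted-partition graph."""
    n = 2 * n_per_block
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < n_per_block) == (j < n_per_block)
            if rng.random() < (p_in if same else p_out):
                A[i][j] = A[j][i] = 1.0
    return A

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iter(A, rng, iters=300):
    """Dominant eigenpair (by magnitude) via power iteration."""
    v = [rng.gauss(0, 1) for _ in A]
    for _ in range(iters):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(x * y for x, y in zip(v, matvec(A, v)))
    return lam, v

def second_eigvec(A, rng):
    """Deflate the leading eigenpair, then power-iterate again."""
    lam1, v1 = power_iter(A, rng)
    n = len(A)
    B = [[A[i][j] - lam1 * v1[i] * v1[j] for j in range(n)] for i in range(n)]
    _, v2 = power_iter(B, rng)
    return v2

rng = random.Random(5)
A = sbm_adjacency(50, 0.3, 0.02, rng)
v2 = second_eigvec(A, rng)
labels = [1 if x > 0 else 0 for x in v2]
# Sign of the second eigenvector recovers the planted blocks (up to relabeling)
agree = sum(1 for i, l in enumerate(labels) if l == (1 if i < 50 else 0))
print(max(agree, 100 - agree))
```

In the equitable (block-regular) ensemble studied in the paper, the within- and between-block degrees are fixed exactly rather than in expectation, which is what makes the spectrum analytically tractable.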
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
NASA Technical Reports Server (NTRS)
Mulavara, Ajitkumar; Fiedler, Matthew; Kofman, Igor; Peters, Brian; Wood, Scott; Serrador, Jorge; Cohen, Helen; Reschke, Millard; Bloomberg, Jacob
2010-01-01
Stochastic resonance (SR) is a mechanism by which noise can assist and enhance the response of neural systems to relevant sensory signals. Application of imperceptible SR noise coupled with sensory input through the proprioceptive, visual, or vestibular sensory systems has been shown to improve motor function. Specifically, studies have shown that vestibular electrical stimulation with imperceptible stochastic noise, when applied to normal young and elderly subjects, significantly improved their ocular stabilization reflexes in response to whole-body tilt as well as balance performance during postural disturbances. The goal of this study was to optimize the characteristics of the stochastic vestibular signals for balance performance during standing on an unstable surface. Subjects performed a standardized balance task of standing on a block of 10 cm thick medium-density foam with their eyes closed for a total of 40 seconds. Stochastic electrical stimulation was applied to the vestibular system through electrodes placed over the mastoid process behind the ears during the last 20 seconds of the test period. A custom-built constant current stimulator with subject isolation delivered the stimulus. Stimulation signals were generated with frequencies in the bandwidths of 1-2 Hz and 0.01-30 Hz. The amplitude of the signals was varied in the range of 0 to +/-700 microamperes, with the RMS of the signal increased by 30 microamperes for each 100-microampere increase in the current range. Balance performance was measured using a force plate under the foam block and inertial motion sensors placed on the torso and head segments. Preliminary results indicate that balance performance improved by 10-25% compared to no-stimulation conditions. Subjects improved their performance consistently across the blocks of stimulation. Further, the signal amplitude at which performance was maximized differed between the two frequency ranges. Optimization of the frequency and amplitude characteristics of the stochastic noise signals for maximizing balance performance will have a significant impact on its development as a unique system to aid recovery of function in astronauts after long-duration space flight or for people with balance disorders.
Stochasticity and Spatial Interaction Govern Stem Cell Differentiation Dynamics
NASA Astrophysics Data System (ADS)
Smith, Quinton; Stukalin, Evgeny; Kusuma, Sravanti; Gerecht, Sharon; Sun, Sean X.
2015-07-01
Stem cell differentiation underlies many fundamental processes such as development, tissue growth and regeneration, as well as disease progression. Understanding how stem cell differentiation is controlled in mixed cell populations is an important step in developing quantitative models of cell population dynamics. Here we focus on quantifying the role of cell-cell interactions in determining stem cell fate. Toward this, we monitor stem cell differentiation in adherent cultures on micropatterns and collect statistical cell fate data. Results show high cell fate variability and a bimodal probability distribution of stem cell fraction on small (80-140 μm diameter) micropatterns. On larger (225-500 μm diameter) micropatterns, the variability is also high but the distribution of the stem cell fraction becomes unimodal. Using a stochastic model, we analyze the differentiation dynamics and quantitatively determine the differentiation probability as a function of stem cell fraction. Results indicate that stem cells can interact and sense cellular composition in their immediate neighborhood and adjust their differentiation probability accordingly. Blocking epithelial cadherin (E-cadherin) can diminish this cell-cell contact mediated sensing. For larger micropatterns, cell motility adds a spatial dimension to the picture. Taken together, we find stochasticity and cell-cell interactions are important factors in determining cell fate in mixed cell populations.
Stochastic Ground Water Flow Simulation with a Fracture Zone Continuum Model
Langevin, C.D.
2003-01-01
A method is presented for incorporating the hydraulic effects of vertical fracture zones into two-dimensional cell-based continuum models of ground water flow and particle tracking. High hydraulic conductivity features are used in the model to represent fracture zones. For fracture zones that are not coincident with model rows or columns, an adjustment is required for the hydraulic conductivity value entered into the model cells to compensate for the longer flowpath through the model grid. A similar adjustment is also required for simulated travel times through model cells. A travel time error of less than 8% can occur for particles moving through fractures with certain orientations. The fracture zone continuum model uses stochastically generated fracture zone networks and Monte Carlo analysis to quantify uncertainties with simulated advective travel times. An approach is also presented for converting an equivalent continuum model into a fracture zone continuum model by establishing the contribution of matrix block transmissivity to the bulk transmissivity of the aquifer. The methods are used for a case study in west-central Florida to quantify advective travel times from a potential wetland rehydration site to a municipal supply wellfield. Uncertainties in advective travel times are assumed to result from the presence of vertical fracture zones, commonly observed on aerial photographs as photolineaments.
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
Saddle-node bifurcation to jammed state for quasi-one-dimensional counter-chemotactic flow.
Fujii, Masashi; Awazu, Akinori; Nishimori, Hiraku
2010-07-01
The transition of a counter-chemotactic particle flow from a free-flow state to a jammed state in a quasi-one-dimensional path is investigated. One of the characteristic features of such a flow is that the constituent particles spontaneously form a cluster that blocks the path, called a path-blocking cluster (PBC), and causes a jammed state when the particle density is greater than a threshold value. Near the threshold value, the PBC occasionally collapses on itself to recover the free flow. In other words, the time evolution of the size of the PBC governs the flux of a counter-chemotactic flow. In this Rapid Communication, on the basis of numerical results of a stochastic cellular automata (SCA) model, we introduce a Langevin equation model for the size evolution of the PBC that reproduces the qualitative characteristics of the SCA model. The results suggest that the emergence of the jammed state in a quasi-one-dimensional counterflow is caused by a saddle-node bifurcation.
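The abstract does not give the Langevin equation explicitly; as a generic illustration of stochastic dynamics near a saddle-node bifurcation, the normal form dx = (mu - x^2) dt + sigma dW (a hypothetical stand-in for the paper's cluster-size equation) can be integrated with the Euler-Maruyama scheme:

```python
import math
import random

def euler_maruyama(mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama integration of the saddle-node normal form
    dx = (mu - x^2) dt + sigma dW. For mu > 0 there is a stable fixed
    point at sqrt(mu) and an unstable one at -sqrt(mu); noise can kick
    the state across the unstable point, analogous to a path-blocking
    cluster collapsing or taking hold."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
        x = x + (mu - x * x) * dt + sigma * dw
        path.append(x)
    return path

rng = random.Random(3)
path = euler_maruyama(mu=1.0, sigma=0.1, x0=0.0, dt=0.01, n_steps=5000, rng=rng)
# With mu > 0 and weak noise, the trajectory settles near the stable
# fixed point x = sqrt(mu) = 1.
print(sum(path[-1000:]) / 1000)
```

Sweeping mu through zero makes the two fixed points merge and vanish, which is the saddle-node transition the paper identifies between the free-flow and jammed states.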
BIPAD: A web server for modeling bipartite sequence elements
Bi, Chengpeng; Rogan, Peter K
2006-01-01
Background Many dimeric protein complexes bind cooperatively to families of bipartite nucleic acid sequence elements, which consist of pairs of conserved half-site sequences separated by intervening distances that vary among individual sites. Results We introduce the Bipad Server [1], a web interface to predict sequence elements embedded within unaligned sequences. Either a bipartite model, consisting of a pair of one-block position weight matrices (PWMs) with a gap distribution, or a single PWM for contiguous single block motifs may be produced. The Bipad program performs multiple local alignment by entropy minimization and cyclic refinement using a stochastic greedy search strategy. The best models are refined by maximizing incremental information contents among a set of potential models with varying half site and gap lengths. Conclusion The web service generates information positional weight matrices, identifies binding site motifs, graphically represents the set of discovered elements as a sequence logo, and depicts the gap distribution as a histogram. Server performance was evaluated by generating a collection of bipartite models for distinct DNA binding proteins. PMID:16503993
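The information content underlying such sequence logos is standard: for DNA with a uniform background, each PWM column carries 2 - H bits, where H is the column's Shannon entropy. A minimal sketch with toy counts (not BIPAD's implementation):

```python
import math

def pwm_information(counts):
    """Per-position information content (bits) of a position weight
    matrix, given nucleotide counts per position: 2 - H(position),
    assuming a uniform background over A, C, G, T."""
    info = []
    for col in counts:
        total = sum(col.values())
        h = -sum((c / total) * math.log2(c / total)
                 for c in col.values() if c > 0)
        info.append(2.0 - h)
    return info

# A perfectly conserved position carries 2 bits; a uniform one carries 0.
counts = [
    {"A": 20, "C": 0, "G": 0, "T": 0},  # conserved
    {"A": 5, "C": 5, "G": 5, "T": 5},   # uninformative
]
print(pwm_information(counts))  # [2.0, 0.0]
```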
A stochastic approach to uncertainty in the equations of MHD kinematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Edward G., E-mail: egphillips@math.umd.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2015-03-01
The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.
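The Monte Carlo step can be sketched generically. Here a scalar decay law stands in for the induced field and a log-normal resistivity is an assumed input distribution; none of this reproduces the paper's finite element machinery:

```python
import numpy as np

def monte_carlo_moments(model, sampler, n_samples=5000, seed=0):
    """Estimate the mean and variance of a model output when an input
    parameter is treated as a random variable."""
    rng = np.random.default_rng(seed)
    outputs = np.array([model(sampler(rng)) for _ in range(n_samples)])
    return outputs.mean(), outputs.var(ddof=1)

t_obs = 2.0

def induced_field(eta):
    # Toy stand-in for the simulated field amplitude: exponential
    # decay governed by an uncertain resistivity eta.
    return float(np.exp(-eta * t_obs))

def sample_resistivity(rng):
    # Assumed log-normal distribution keeps eta strictly positive.
    return rng.lognormal(mean=0.0, sigma=0.25)

mean_b, var_b = monte_carlo_moments(induced_field, sample_resistivity)
```

The same pattern applies when `model` is a full PDE solve; the cost of repeated solves is what motivates the efficient preconditioner the abstract mentions.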
NASA Astrophysics Data System (ADS)
Havaej, Mohsen; Coggan, John; Stead, Doug; Elmo, Davide
2016-04-01
Rock slope geometry and discontinuity properties are among the most important factors in realistic rock slope analysis, yet they are often oversimplified in numerical simulations. This is primarily due to the difficulties in obtaining accurate structural and geometrical data as well as the stochastic representation of discontinuities. Recent improvements in both digital data acquisition and incorporation of discrete fracture network data into numerical modelling software have provided better tools to capture rock mass characteristics, slope geometries and digital terrain models allowing more effective modelling of rock slopes. The advantages of improved data acquisition technology, including safer and faster data collection, greater areal coverage, and accurate data geo-referencing, far exceed the limitations due to orientation bias and occlusion. A key benefit of a detailed point cloud dataset is the ability to measure and evaluate discontinuity characteristics such as orientation, spacing/intensity and persistence. These data can be used to develop a discrete fracture network which can be imported into the numerical simulations to study the influence of the stochastic nature of the discontinuities on the failure mechanism. We demonstrate the application of digital terrestrial photogrammetry in discontinuity characterization and distinct element simulations within a slate quarry. An accurately geo-referenced photogrammetry model is used to derive the slope geometry and to characterize geological structures. We first show how a discontinuity dataset obtained from a photogrammetry model can be used to characterize discontinuities and to develop discrete fracture networks. A deterministic three-dimensional distinct element model is then used to investigate the effect of some key input parameters (friction angle, spacing and persistence) on the stability of the quarry slope model.
Finally, adopting a stochastic approach, discrete fracture networks are used as input for 3D distinct element simulations to better understand the stochastic nature of the geological structure and its effect on the quarry slope failure mechanism. The numerical modelling results highlight the influence of discontinuity characteristics and kinematics on the slope failure mechanism and the variability in the size and shape of the failed blocks.
Quasi-continuous stochastic simulation framework for flood modelling
NASA Astrophysics Data System (ADS)
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practices is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the ignorance of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modeling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, while simultaneously preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach, comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing for expressing the design variables in statistical terms and thus properly evaluating the flood risk.
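Step (3) above is the standard SCS-CN runoff formula. A minimal sketch, with an AMC-style curve-number update from five-day antecedent rainfall (the CN conversion formulas are the commonly tabulated ones; the 13 mm / 53 mm thresholds are illustrative seasonal values, not taken from the paper):

```python
def scs_cn_runoff(p_mm, cn):
    """Daily runoff (mm) from the SCS-CN formula with the standard
    initial abstraction Ia = 0.2 S, where S = 25400/CN - 254 (mm)."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

def adjust_cn(cn_ii, rain_5day_mm):
    """Crude AMC-style update of the curve number from accumulated
    five-day antecedent rainfall (thresholds here are illustrative)."""
    if rain_5day_mm < 13.0:      # dry antecedent conditions (AMC I)
        return 4.2 * cn_ii / (10.0 - 0.058 * cn_ii)
    if rain_5day_mm > 53.0:      # wet antecedent conditions (AMC III)
        return 23.0 * cn_ii / (10.0 + 0.13 * cn_ii)
    return cn_ii                 # average conditions (AMC II)
```

For example, with CN = 80 (so S = 63.5 mm, Ia = 12.7 mm), a 50 mm daily rainfall yields roughly 13.8 mm of runoff, while a 10 mm rainfall produces none.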
Replication Origins and Timing of Temporal Replication in Budding Yeast: How to Solve the Conundrum?
Barberis, Matteo; Spiesser, Thomas W.; Klipp, Edda
2010-01-01
Similarly to metazoans, the budding yeast Saccharomyces cerevisiae replicates its genome with a defined timing. In this organism, well-defined, site-specific origins are efficient and fire in almost every round of DNA replication. However, this strategy is conserved neither in the fission yeast Schizosaccharomyces pombe, nor in Xenopus or Drosophila embryos, nor in higher eukaryotes, in which DNA replication initiates asynchronously throughout S phase at random sites. Temporal and spatial controls, such as Cdk activity, origin localization, epigenetic status or gene expression, can contribute to the timing of replication. However, a debate is ongoing over how individual origins are selected to fire in budding yeast. Two opposing theories have been proposed: the “replicon paradigm” or “temporal program” vs. “stochastic firing”. Recent data support the temporal regulation of origin activation, clustering origins into temporal blocks of early and late replication. In contrast, strong evidence suggests that stochastic processes acting on origins can generate the observed kinetics of replication without requiring a temporal order. In mammalian cells, a spatiotemporal model that accounts for a partially deterministic and partially stochastic order of DNA replication has been proposed. Is this strategy the solution to reconciling the conundrum of having both organized replication timing and stochastic origin firing in budding yeast as well? In this review we discuss this possibility in light of our recent study of origin activation, which suggests that there may be a stochastic component in the temporal activation of replication origins, especially under perturbed conditions. PMID:21037857
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models that represent the noise in microarray expression profiles, which has a profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of the stochastic models and the parameters of an error model describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity, and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information.
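A minimal sketch of the kind of model described: an Euler-Maruyama simulation of dx = (k - d*x) dt + sigma*sqrt(x) dW, where the state-dependent noise plays the role of a stochastic degradation-like process. Parameters and the noise form are illustrative, not the paper's fitted model:

```python
import numpy as np

def simulate_expression(k, d, sigma, x0, t_end=50.0, dt=0.01,
                        n_paths=500, seed=1):
    """Euler-Maruyama for dx = (k - d x) dt + sigma sqrt(x) dW:
    constant production, linear degradation, and a state-dependent
    (intensity-dependent) noise term, clipped at zero."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        noise = sigma * np.sqrt(np.maximum(x, 0.0) * dt) \
            * rng.standard_normal(n_paths)
        x = np.maximum(x + (k - d * x) * dt + noise, 0.0)
    return x

# Stationary ensemble around the deterministic fixed point k/d = 10.
final = simulate_expression(k=10.0, d=1.0, sigma=0.5, x0=10.0)
```

Plotting the ensemble variance against the mean intensity for different noise forms is one way to see the monomial scaling the abstract describes.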
Relaxation and coarsening of weakly-interacting breathers in a simplified DNLS chain
NASA Astrophysics Data System (ADS)
Iubini, Stefano; Politi, Antonio; Politi, Paolo
2017-07-01
The discrete nonlinear Schrödinger (DNLS) equation displays a parameter region characterized by the presence of localized excitations (breathers). While their formation is well understood and it is expected that the asymptotic configuration comprises a single breather on top of a background, it is not clear why the dynamics of a multi-breather configuration is essentially frozen. In order to investigate this question, we introduce simple stochastic models, characterized by suitable conservation laws. We focus on the role of the coupling strength between localized excitations and background. In the DNLS model, higher breathers interact more weakly, as a result of their faster rotation. In our stochastic models, the strength of the coupling is controlled directly by an amplitude-dependent parameter. In the case of a power-law decrease, the associated coarsening process undergoes a slowing down if the decay rate is larger than a critical value. In the case of an exponential decrease, a freezing effect is observed that is reminiscent of the scenario observed in the DNLS. This last regime arises spontaneously when direct energy diffusion between breathers and background is blocked below a certain threshold.
Radio Occultation Investigation of the Rings of Saturn and Uranus
NASA Technical Reports Server (NTRS)
Marouf, Essam A.
1997-01-01
The proposed work addresses two main objectives: (1) to pursue the development of the random diffraction screen model for analytical/computational characterization of the extinction and near-forward scattering by ring models that include particle crowding, uniform clustering, and clustering along preferred orientations (anisotropy). The characterization is crucial for proper interpretation of past (Voyager) and future (Cassini) ring occultation observations in terms of physical ring properties, and is needed to address outstanding puzzles in the interpretation of the Voyager radio occultation data sets; (2) to continue the development of spectral analysis techniques to identify and characterize the power scattered by all features of Saturn's rings that can be resolved in the Voyager radio occultation observations, and to use the results to constrain the maximum particle size and its abundance. Characterization of the variability of surface mass density among the main ring features and within individual features is important for constraining the ring mass and is relevant to investigations of ring dynamics and origin. We completed the development of the stochastic geometry (random screen) model for the interaction of electromagnetic waves with planetary ring models, and used the model to relate the oblique optical depth and the angular spectrum of the near-forward scattered signal to statistical averages of the stochastic geometry of the randomly blocked area. We developed analytical results based on the assumption of Poisson statistics for particle positions, and investigated the dependence of the oblique optical depth and angular spectrum on the fractional area blocked, vertical ring profile, and incidence angle when the volume fraction is small. We demonstrated agreement with the classical radiative transfer predictions for oblique incidence.
We also developed simulation procedures to generate statistical realizations of random screens corresponding to uniformly packed ring models, and used the results to characterize the dependence of the extinction and near-forward scattering on ring thickness, packing fraction, and the ring opening angle.
Teaching Reinforcement of Stochastic Behavior Using Monte Carlo Simulation.
ERIC Educational Resources Information Center
Fox, William P.; And Others
1996-01-01
Explains a proposed block of instruction that would give students in industrial engineering, operations research, systems engineering, and applied mathematics the basic understanding required to begin more advanced courses in simulation theory or applications. (DDR)
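The flavor of such an instructional block can be conveyed by the classic first exercise in Monte Carlo simulation, estimating pi from random points in the unit square (purely illustrative; not taken from the cited article):

```python
import random

def estimate_pi(n_samples, seed=42):
    """Classic Monte Carlo teaching exercise: the fraction of random
    points in the unit square that fall inside the quarter circle
    of radius 1 estimates pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

pi_hat = estimate_pi(100_000)
```

The standard error of the estimate shrinks as 1/sqrt(n), which is the central empirical lesson of introductory simulation courses.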
Single-Stranded Condensation Stochastically Blocks G-Quadruplex Assembly in Human Telomeric RNA.
Gutiérrez, Irene; Garavís, Miguel; de Lorenzo, Sara; Villasante, Alfredo; González, Carlos; Arias-Gonzalez, J Ricardo
2018-05-17
TERRA is an RNA molecule transcribed from human subtelomeric regions toward chromosome ends, potentially involved in the regulation of heterochromatin stability, semiconservative replication, and telomerase inhibition, among other processes. TERRA contains tandem repeats of the sequence GGGUUA with a strong tendency to fold into a four-stranded arrangement known as a parallel G-quadruplex. Here, we demonstrate by single-molecule force spectroscopy that this potential is limited by the inherent capacity of RNA to self-associate randomly and further condense into entropically more favorable structures. Using optical tweezers, we stretched, one molecule at a time, RNA constructs with more than four and fewer than eight hexanucleotide repeats (thus unable to form several G-quadruplexes in tandem), flanked by non-G-rich overhangs of random sequence. We found that condensed RNA stochastically blocks G-quadruplex folding pathways with a probability near 20%, a behavior that is not found in analogous DNA molecules.
Population activity statistics dissect subthreshold and spiking variability in V1.
Bányai, Mihály; Koman, Zsombor; Orbán, Gergő
2017-07-01
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. 
For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity.
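The two competing models can be written down as simple generative processes. This sketch (with illustrative parameters) shows that both produce super-Poisson single-cell variability (Fano factor > 1), which is why single-cell statistics alone cannot separate them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

# Doubly stochastic Poisson (DSP): the rate itself fluctuates across
# trials, then spikes are drawn from a Poisson with that rate.
rates = rng.gamma(shape=4.0, scale=2.5, size=n_trials)   # mean rate 10
dsp_counts = rng.poisson(rates)

# Rectified Gaussian (RG): trial-to-trial variability lives at the
# membrane-potential level; the response is the rectified potential.
potential = rng.normal(loc=8.0, scale=5.0, size=n_trials)
rg_counts = np.maximum(potential, 0.0)

fano_dsp = dsp_counts.var() / dsp_counts.mean()
fano_rg = rg_counts.var() / rg_counts.mean()
```

As the abstract argues, the models diverge in their joint (pairwise) statistics once correlations are introduced at the rate versus the potential level; that extension is not shown here.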
Controlling the phase locking of stochastic magnetic bits for ultra-low power computation
NASA Astrophysics Data System (ADS)
Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie
2016-07-01
When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm3, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10-13 J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.
Mechanisms of Regulation of Olfactory Transduction and Adaptation in the Olfactory Cilium
Antunes, Gabriela; Sebastião, Ana Maria; Simoes de Souza, Fabio Marques
2014-01-01
Olfactory adaptation is a fundamental process for the functioning of the olfactory system, but the underlying mechanisms regulating its occurrence in intact olfactory sensory neurons (OSNs) are not fully understood. In this work, we have combined stochastic computational modeling and a systematic pharmacological study of different signaling pathways to investigate their impact during short-term adaptation (STA). We used odorant stimulation and electroolfactogram (EOG) recordings of the olfactory epithelium treated with pharmacological blockers to study the molecular mechanisms regulating the occurrence of adaptation in OSNs. EOG responses to paired pulses of odorants showed that inhibition of phosphodiesterases (PDEs) and phosphatases enhanced the levels of STA in the olfactory epithelium, and this effect was mimicked by blocking vesicle exocytosis and reduced by blocking cyclic adenosine monophosphate (cAMP)-dependent protein kinase (PKA) and vesicle endocytosis. These results suggest that G-protein-coupled receptor (GPCR) cycling is involved in the occurrence of STA. To gain insight into the dynamical aspects of this process, we developed a stochastic computational model. The model consists of the olfactory transduction currents mediated by the cyclic nucleotide gated (CNG) channels and calcium ion (Ca2+)-activated chloride (CAC) channels, and the dynamics of their respective ligands, cAMP and Ca2+, and it simulates the EOG results obtained under different experimental conditions through changes in the amplitude and duration of the cAMP and Ca2+ responses, two second messengers implicated in STA occurrence. The model reproduced the experimental data for each pharmacological treatment and provided a mechanistic explanation for the action of GPCR cycling in the levels of second messengers modulating the levels of STA. 
Altogether, these experimental and theoretical results indicate the existence of a mechanism of regulation of STA by signaling pathways that control GPCR cycling and tune the levels of second messengers in OSNs, and not only by CNG channel desensitization as previously thought. PMID:25144232
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
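The core trick, representing the exponential of a sum of matrices as a product of matrix exponentials, can be illustrated with the first-order Zassenhaus correction on toy 2x2 matrices (the paper's tensor/Kronecker machinery is not reproduced here; `expm_taylor` is a hypothetical helper adequate for small-norm matrices):

```python
import numpy as np

def expm_taylor(m, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for
    the small, small-norm matrices used in this sketch)."""
    result = np.eye(m.shape[0])
    term = np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        result = result + term
    return result

def zassenhaus_first_order(a, b):
    """First-order Zassenhaus splitting:
    exp(A + B) ~ exp(A) exp(B) exp(-[A, B] / 2)."""
    comm = a @ b - b @ a
    return expm_taylor(a) @ expm_taylor(b) @ expm_taylor(-0.5 * comm)

t = 0.1
a = t * np.array([[0.0, 1.0], [0.0, 0.0]])   # non-commuting pair
b = t * np.array([[0.0, 0.0], [1.0, 0.0]])

exact = expm_taylor(a + b)
naive = expm_taylor(a) @ expm_taylor(b)       # Lie-Trotter, O(t^2) error
zass = zassenhaus_first_order(a, b)           # O(t^3) error

err_naive = np.linalg.norm(exact - naive)
err_zass = np.linalg.norm(exact - zass)
```

The commutator correction is what buys the extra order of accuracy; in the paper's setting, sparse regulatory connectivity keeps such commutator terms cheap to evaluate.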
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model satisfies certain conditions, then the disease prevails: the infection persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, the infection disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. For the stochastic version, we carry out a detailed analysis of the asymptotic behavior and show that, when the stochastic system satisfies certain conditions and ℛ0 is greater than 1, it is stochastically asymptotically stable. Finally, the dynamics of the deterministic and stochastic models are illustrated through computer simulations.
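The threshold role of ℛ0 is easy to see numerically. A minimal sketch with a single-group deterministic SIR model and vaccination at the start (the paper's multi-group MSIR structure and its stochastic extension are not reproduced; all parameters are illustrative):

```python
def simulate_sir(r0, gamma=0.1, vacc=0.0, i0=1e-3, t_end=400.0, dt=0.05):
    """Forward-Euler integration of the deterministic SIR fractions,
    with a fraction `vacc` of the population vaccinated at t = 0.
    The transmission rate beta is chosen so that beta/gamma = r0.
    Returns the peak infected fraction."""
    beta = r0 * gamma
    s, i = 1.0 - vacc - i0, i0
    peak = i
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i * dt
        rec = gamma * i * dt
        s -= new_inf
        i += new_inf - rec
        peak = max(peak, i)
    return peak

peak_super = simulate_sir(r0=3.0)             # above threshold: epidemic
peak_sub = simulate_sir(r0=0.8)               # below threshold: decay
peak_vacc = simulate_sir(r0=3.0, vacc=0.7)    # vaccination pushes the
                                              # effective R0 below 1
```

Above threshold the infected fraction grows to a substantial peak before burning out; below threshold (or with enough vaccination) it decays monotonically from its initial value.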
Stochastic Parametrisations and Regime Behaviour of Atmospheric Models
NASA Astrophysics Data System (ADS)
Arnold, Hannah; Moroz, Irene; Palmer, Tim
2013-04-01
The presence of regimes is a characteristic of non-linear, chaotic systems (Lorenz, 2006). In the atmosphere, regimes emerge as familiar circulation patterns such as the El-Nino Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and Scandinavian Blocking events. In recent years there has been much interest in the problem of identifying and studying atmospheric regimes (Solomon et al, 2007). In particular, how do these regimes respond to an external forcing such as anthropogenic greenhouse gas emissions? The importance of regimes in observed trends over the past 50-100 years indicates that in order to predict anthropogenic climate change, our climate models must be able to represent accurately natural circulation regimes, their statistics and variability. It is well established that representing model uncertainty as well as initial condition uncertainty is important for reliable weather forecasts (Palmer, 2001). In particular, stochastic parametrisation schemes have been shown to improve the skill of weather forecast models (e.g. Berner et al., 2009; Frenkel et al., 2012; Palmer et al., 2009). It is possible that including stochastic physics as a representation of model uncertainty could also be beneficial in climate modelling, enabling the simulator to explore larger regions of the climate attractor including other flow regimes. An alternative representation of model uncertainty is a perturbed parameter scheme, whereby physical parameters in subgrid parametrisation schemes are perturbed about their optimal value. Perturbing parameters gives a greater control over the ensemble than multi-model or multiparametrisation ensembles, and has been used as a representation of model uncertainty in climate prediction (Stainforth et al., 2005; Rougier et al., 2009). We investigate the effect of including representations of model uncertainty on the regime behaviour of a simulator. 
A simple chaotic model of the atmosphere, the Lorenz '96 system, is used to study the predictability of regime changes (Lorenz 1996, 2006). Three types of models are considered: a deterministic parametrisation scheme, stochastic parametrisation schemes with additive or multiplicative noise, and a perturbed parameter ensemble. Each forecasting scheme was tested on its ability to reproduce the attractor of the full system, defined in a reduced space based on EOF decomposition. None of the forecast models accurately capture the less common regime, though a significant improvement is observed over the deterministic parametrisation when a temporally correlated stochastic parametrisation is used. The attractor for the perturbed parameter ensemble improves on that forecast by the deterministic or white additive schemes, showing a distinct peak in the attractor corresponding to the less common regime. However, the 40 constituent members of the perturbed parameter ensemble each differ greatly from the true attractor, with many only showing one dominant regime with very rare transitions. These results indicate that perturbed parameter ensembles must be carefully analysed as individual members may have very different characteristics to the ensemble mean and to the true system being modelled. On the other hand, the stochastic parametrisation schemes tested performed well, improving the simulated climate, and motivating the development of a stochastic earth-system simulator for use in climate prediction. J. Berner, G. J. Shutts, M. Leutbecher, and T. N. Palmer. A spectral stochastic kinetic energy backscatter scheme and its impact on flow dependent predictability in the ECMWF ensemble prediction system. J. Atmos. Sci., 66(3):603-626, 2009. Y. Frenkel, A. J. Majda, and B. Khouider. Using the stochastic multicloud model to improve tropical convective parametrisation: A paradigm example. J. Atmos. Sci., 69(3):1080-1105, 2012. E. N. Lorenz. Predictability: a problem partly solved. 
In Proceedings, Seminar on Predictability, 4-8 September 1995, volume 1, pages 1-18, Shinfield Park, Reading, 1996. ECMWF. E. N. Lorenz. Regimes in simple systems. J. Atmos. Sci., 63(8):2056-2073, 2006. T. N Palmer. A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parametrisation in weather and climate prediction models. Q. J. Roy. Meteor. Soc., 127(572):279-304, 2001. T. N. Palmer, R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G. J. Shutts, M. Steinheimer, and A. Weisheimer. Stochastic parametrization and model uncertainty. Technical Report 598, European Centre for Medium-Range Weather Forecasts, 2009. J. Rougier, D. M. H. Sexton, J. M. Murphy, and D. Stainforth. Analyzing the climate sensitivity of the HadSM3 climate model using ensembles from different but related experiments. J. Climate, 22:3540-3557, 2009. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, Tignor M., and H. L. Miller. Climate models and their evaluation. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge, United Kingdom and New York, NY, USA, 2007. Cambridge University Press. D. A Stainforth, T. Aina, C. Christensen, M. Collins, N. Faull, D. J. Frame, J. A. Kettleborough, S. Knight, A. Martin, J. M. Murphy, C. Piani, D. Sexton, L. A. Smith, R. A Spicer, A. J. Thorpe, and M. R Allen. Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 433(7024):403-406, 2005.
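The Lorenz '96 system used as the testbed above can be integrated in a few lines. This sketch covers only the single-scale version with an RK4 step and illustrative settings; the two-scale variant and the parametrisation schemes compared in the study are not reproduced:

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Single-scale Lorenz '96 tendencies with cyclic coupling:
    dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate_lorenz96(n_vars=40, forcing=8.0, dt=0.01, n_steps=5000,
                       seed=0):
    """RK4 integration from a slightly perturbed rest state."""
    rng = np.random.default_rng(seed)
    x = forcing + 0.01 * rng.standard_normal(n_vars)
    for _ in range(n_steps):
        k1 = lorenz96_tendency(x, forcing)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_tendency(x + dt * k3, forcing)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# At F = 8 the system is chaotic; the state stays bounded but irregular.
state = integrate_lorenz96()
```

Stochastic or perturbed-parameter forecast schemes of the kind tested in the study replace the coupling to unresolved fast variables with noise or parameter perturbations in a tendency function like the one above.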
The Stochastic Early Reaction, Inhibition, and late Action (SERIA) model for antisaccades
2017-01-01
The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades. PMID:28767650
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
The life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by randomness in each individual life history, such as randomness in food intake, genetic character, and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. A recent theoretical study using a path-integral formulation in structured linear demographic models has shown that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on the population growth rate, the fittest organism exercises optimal control of its life history under the stochasticity of its habitat. The study of this control is known as the optimal life schedule problem. To analyze optimal control under internal stochasticity, we need to make use of "Stochastic Control Theory" in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study extends optimal life schedule problems to unify the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as the path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a particular case. Our study shows that this unification theory can address risk hedging of life history in general age-states linear demographic models. PMID:24945258
Foddai, Alessandro; Stockmarr, Anders; Boklund, Anette
2016-06-21
The temporal sensitivity of the surveillance system (TemSSe) for Bovine Viral Diarrhea (BVD) in Danish dairy herds was evaluated. Currently, the Danish antibody blocking ELISA is used to test quarterly bulk tank milk (BTM). To optimize the surveillance system as an early warning system, we considered the possibility of using the SVANOVIR ELISA, as this test has been shown to detect BVD-positive herds earlier than the blocking ELISA in BTM tests. Information from data (2010) and outputs from two published stochastic models were fed into a stochastic scenario tree to estimate the TemSSe. For that purpose we considered: the risk of BVD introduction into the dairy population, the ELISA used, and the high risk period (HRP) from BVD introduction to testing (at 90 or 365 days). The effect of introducing one persistently infected (PI) calf or one transiently infected (TI) milking cow into 1 (or 8) dairy herd(s) was investigated. Additionally, we estimated the confidence in low (PLow) herd prevalence (<8/4109 infected herds) and the confidence in complete freedom (PFree) from BVD (<1/4109). The TemSSe, the PLow, and the PFree were higher when tests were performed 365 days after BVD introduction than after 90 days. Estimates were usually higher for the SVANOVIR than for the blocking ELISA, and when a PI rather than a TI was introduced into the herd(s). For instance, with the current system, the median TemSSe was 64.5 %, 90 days after a PI calf was introduced into eight dairy herds. The related median PLow was 72.5 %. When a PI calf was introduced into one herd, the median TemSSe was 12.1 %, while the related PFree was 51.6 %. With the SVANOVIR ELISA these estimates were 99.0 %, 98.9 %, 43.7 %, and 62.4 %, respectively. Replacing the blocking ELISA with the SVANOVIR could increase the TemSSe, the PLow, and the PFree considerably. These results could be used to optimize the Danish BVD surveillance system. Furthermore, the approach proposed in this study, for including the effect of the HRP within the scenario tree methodology, could be applied to optimize early warning surveillance systems for different animal diseases.
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Mohamed I.; Metwalli, Farouk I.; Mesilhi, El Sayed I.
2018-02-01
3D static reservoir modeling of the Bahariya reservoirs using seismic and well data can be a relevant part of an overall strategy for oilfield development in the South Umbarka area (Western Desert, Egypt). The seismic data are used to build the 3D grid, including fault sticks for the fault modeling, and horizon interpretations and surfaces for horizon modeling. The 3D grid is the digital representation of the structural geology of the Bahariya Formation. Once a reasonably accurate representation was obtained, we filled the 3D grid with facies and petrophysical properties and simulated it, to gain a more precise understanding of the behavior of the reservoir properties. Sequential Indicator Simulation (SIS) and Sequential Gaussian Simulation (SGS) are the stochastic algorithms used in property modeling to spatially distribute discrete reservoir properties (facies) and continuous reservoir properties (shale volume, porosity, and water saturation), respectively, within the created 3D grid. The structural model of the Bahariya Formation exhibits the trapping mechanism, which is a fault-assisted anticlinal closure trending NW-SE. This major fault breaks the reservoirs into two major fault blocks (North Block and South Block). Petrophysical models classify the Lower Bahariya as a moderate to good reservoir, better than the Upper Bahariya in terms of facies, with good porosity and permeability, low water saturation, and moderate net to gross. The Original Oil In Place (OOIP) values of the modeled Bahariya reservoirs indicate hydrocarbon accumulation in economic quantity, considering the high structural dips at the central part of the South Umbarka area. The power of the 3D static modeling technique has provided considerable insight into the future prediction of Bahariya reservoir performance and production behavior.
Stochastic Lanchester Air-to-Air Campaign Model: Model Description and Users Guides
2009-01-01
Report PA702T1, Robert V. Hemm Jr. and David A. Lee, LMI, © 2009. Executive Summary: This report documents the latest version of the Stochastic Lanchester Air-to-Air Campaign Model (SLAACM), developed by LMI for
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
An early warning indicator for atmospheric blocking events using transfer operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tantet, Alexis, E-mail: a.j.j.tantet@uu.nl; Burgt, Fiona R. van der; Dijkstra, Henk A.
The existence of persistent midlatitude atmospheric flow regimes with time-scales larger than 5–10 days and indications of preferred transitions between them motivates to develop early warning indicators for such regime transitions. In this paper, we use a hemispheric barotropic model together with estimates of transfer operators on a reduced phase space to develop an early warning indicator of the zonal to blocked flow transition in this model. It is shown that the spectrum of the transfer operators can be used to study the slow dynamics of the flow as well as the non-Markovian character of the reduction. The slowest motions are thereby found to have time scales of three to six weeks and to be associated with meta-stable regimes (and their transitions) which can be detected as almost-invariant sets of the transfer operator. From the energy budget of the model, we are able to explain the meta-stability of the regimes and the existence of preferred transition paths. Even though the model is highly simplified, the skill of the early warning indicator is promising, suggesting that the transfer operator approach can be used in parallel to an operational deterministic model for stochastic prediction or to assess forecast uncertainty.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
Stochastic modelling of microstructure formation in solidification processes
NASA Astrophysics Data System (ADS)
Nastac, Laurentiu; Stefanescu, Doru M.
1997-07-01
To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'
El-Diasty, Mohammed; Pagiatakis, Spiros
2009-01-01
In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
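The AR-based Gauss-Markov idea above can be illustrated with a minimal first-order Gauss-Markov (AR(1)) noise simulation. This is a sketch with assumed, illustrative values for the correlation time `tau` and stationary noise level `sigma`; it is not the fitted ADIS16364 model from the paper.

```python
import numpy as np

def gauss_markov(n, dt, tau, sigma, seed=0):
    """Simulate a first-order Gauss-Markov (AR(1)) noise process.

    x[k+1] = exp(-dt/tau) * x[k] + w[k], with the driving noise w[k]
    scaled so the process has stationary standard deviation sigma.
    """
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    # Driving-noise std that keeps the stationary variance at sigma**2
    q = sigma * np.sqrt(1.0 - phi**2)
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = phi * x[k] + q * rng.standard_normal()
    return x

# Illustrative run: a shorter tau decorrelates the noise faster
x = gauss_markov(n=20000, dt=0.01, tau=0.5, sigma=0.1)
print(round(float(np.std(x)), 3))  # should be close to the stationary sigma of 0.1
```

Temperature dependence, as in the paper, would enter through `tau` (and possibly `sigma`) varying with the chamber temperature.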
Stochastic effects in a seasonally forced epidemic model
NASA Astrophysics Data System (ADS)
Rozhnova, G.; Nunes, A.
2010-10-01
The interplay of seasonality, the system's nonlinearities, and intrinsic stochasticity is studied for a seasonally forced susceptible-exposed-infective-recovered stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects such as switching between coexisting attractors induced by stochasticity often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient to understand the observed incidence patterns.
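A seasonally forced stochastic SEIR model of the kind described above can be simulated with a simple tau-leaping scheme, in which event counts per time step are drawn from Poisson distributions with rates given by the deterministic terms. The parameters below (transmission rate and its seasonal amplitude, latent and infectious periods, birth rate) are assumed illustrative values, not the measles parameters used in the paper.

```python
import numpy as np

def forced_seir_tau_leap(days=3650, dt=0.1, N=1_000_000, beta0=1.2,
                         beta1=0.15, sig=1/8, gamma=1/5, mu=1/(70*365),
                         seed=1):
    """Tau-leaping simulation of a seasonally forced stochastic SEIR model.

    beta(t) varies sinusoidally over the year; event counts per step are
    Poisson with the corresponding deterministic rates.
    """
    rng = np.random.default_rng(seed)
    S, E, I = int(0.2 * N), 100, 100
    steps = int(days / dt)
    series = np.empty(steps)
    for k in range(steps):
        t = k * dt
        beta = beta0 * (1.0 + beta1 * np.cos(2 * np.pi * t / 365.0))
        infect = rng.poisson(beta * S * I / N * dt)
        onset = rng.poisson(sig * E * dt)
        recover = rng.poisson(gamma * I * dt)
        births = rng.poisson(mu * N * dt)
        S = max(S + births - infect - rng.poisson(mu * S * dt), 0)
        E = max(E + infect - onset - rng.poisson(mu * E * dt), 0)
        I = max(I + onset - recover - rng.poisson(mu * I * dt), 0)
        series[k] = I
    return series

infected = forced_seir_tau_leap()
print(infected.min() >= 0)
```

The stochastic fluctuations of `infected` around the deterministic attractor are what the paper analyzes via the analytically computed power spectrum.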
NASA Astrophysics Data System (ADS)
Yugsi Molina, F. X.; Oppikofer, T.; Fischer, L.; Hermanns, R. L.; Taurisano, A.
2012-04-01
Traditional techniques to assess rockfall hazard are partially based on probabilistic analysis. Stochastic methods have been used for run-out analysis of rock blocks to estimate the trajectories that a detached block will follow during its fall until it stops due to kinetic energy loss. However, rockfall source areas are usually defined either by multivariate analysis or by field observations; in either case, a physically based approach is not used for source area detection. We present an example of rockfall hazard assessment that integrates a probabilistic rockfall run-out analysis with a stochastic assessment of the rockfall source areas using kinematic stability analysis in a GIS environment. The method has been tested for a steep, more than 200 m high rock wall located in the municipality of Norddal (Møre og Romsdal county, Norway), where a large number of people are exposed to snow avalanches, rockfalls, or debris flows. The area was selected following the recently published hazard mapping plan of Norway. The cliff is formed by medium- to coarse-grained quartz-dioritic to granitic gneisses of Proterozoic age. Scree deposits, the product of recent rockfall activity, are found at the bottom of the rock wall. Large blocks can be found several tens of meters away from the cliff in Sylte, the main locality in the Norddal municipality. Structural characterization of the rock wall was done using terrestrial laser scanning (TLS) point clouds in the software Coltop3D (www.terranum.ch), and the results were validated with field data. Orientation data sets from the structural characterization were analyzed separately to assess best-fit probability density functions (PDF) for both the dip angle and dip direction angle of each discontinuity set. A GIS-based stochastic kinematic analysis was then carried out using the discontinuity set orientations and the friction angle as random variables.
An airborne laser scanning digital elevation model (ALS-DEM) with 1 m resolution was used for the analysis. Three failure mechanisms were analyzed: planar and wedge sliding, as well as toppling. Based on this kinematic analysis, areas where failure is feasible were used as source areas for run-out analysis using Rockyfor3D v. 4.1 (www.ecorisq.org). The software calculates trajectories of single falling blocks in three dimensions using physically based algorithms developed under a stochastic approach. The ALS-DEM was down-scaled to 5 m resolution to optimize processing time. Results were compared with run-out simulations using Rockyfor3D with the whole rock wall as source area, and with maps of deposits generated from field observations and aerial photo interpretation. The results of our implementation show better correlation with field observations and help to produce more accurate rockfall hazard assessment maps through a better definition of the source areas. The approach reduces the processing time of the analysis as well. The findings presented in this contribution are part of an effort to produce guidelines for natural hazard mapping in Norway. The guidelines will be used in upcoming years for hazard mapping in areas where larger population groups are exposed to mass movements from steep slopes.
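A GIS-free, single-cell sketch of the stochastic kinematic test described above: orientations and the friction angle are drawn from probability distributions, and the fraction of samples satisfying a planar-sliding criterion is reported. The normal PDFs, the ±20° daylighting window, and all numeric values are assumptions for illustration; the paper fits best-fit PDFs per discontinuity set and evaluates the tests per DEM cell.

```python
import numpy as np

def planar_sliding_probability(slope_dip, slope_dir, dip_mu, dip_sigma,
                               dir_mu, dir_sigma, phi_mu, phi_sigma,
                               n=100_000, seed=8):
    """Monte Carlo kinematic test for planar sliding.

    A sampled discontinuity is feasible if it daylights on the slope face
    (dip < slope dip, dip direction within +-20 degrees of the slope's)
    and dips more steeply than the sampled friction angle. Orientations
    and friction angle are drawn from assumed normal PDFs (degrees).
    """
    rng = np.random.default_rng(seed)
    dip = rng.normal(dip_mu, dip_sigma, n)
    ddir = rng.normal(dir_mu, dir_sigma, n)
    phi = rng.normal(phi_mu, phi_sigma, n)
    # Angular difference wrapped to [-180, 180)
    ddiff = ((ddir - slope_dir + 180.0) % 360.0) - 180.0
    feasible = (dip < slope_dip) & (np.abs(ddiff) < 20.0) & (dip > phi)
    return float(feasible.mean())

# Illustrative cell: 70/180 slope face, one joint set, phi ~ N(32, 3)
p = planar_sliding_probability(70, 180, 45, 8, 175, 15, 32, 3)
print(0.0 <= p <= 1.0)
```

Cells with `p` above some threshold would then be flagged as rockfall source areas for the run-out simulation.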
Techno-economic analysis of supercritical carbon dioxide power blocks
NASA Astrophysics Data System (ADS)
Meybodi, Mehdi Aghaei; Beath, Andrew; Gwynn-Jones, Stephen; Veeraragavan, Anand; Gurgenci, Hal; Hooman, Kamel
2017-06-01
Developing highly efficient power blocks holds the key to enhancing the cost competitiveness of Concentrating Solar Thermal (CST) technologies. Supercritical CO2 (sCO2) Brayton cycles have proved promising in providing equivalent or higher cycle efficiency than supercritical or superheated steam cycles at temperatures and scales relevant to Australian CST applications. In this study, a techno-economic methodology is developed using a stochastic approach to determine the ranges of cost and performance of different components of central receiver power plants utilizing sCO2 power blocks that are necessary to meet the Australian Solar Thermal Initiative (ASTRI) final LCOE target of 12 c/kWh.
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
NASA Astrophysics Data System (ADS)
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes, and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models show a change in the behavior of the recently developed stochastic model of Hazra et al.
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, providing a matching macroscopic description, serves as microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
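Kirman's original herding model (before the variable-time-scale extension proposed above) can be sketched as a two-state system in which a randomly picked agent switches idiosyncratically with probability `eps` or is recruited by agents of the other type at a rate proportional to `h`. All parameter values below are illustrative assumptions.

```python
import numpy as np

def kirman(N=100, eps=0.01, h=0.005, steps=50_000, seed=2):
    """Discrete-time simulation of Kirman's two-state herding model.

    At each step one agent is picked at random; it switches state with
    probability eps (idiosyncratic switching) plus h times the number of
    agents in the other state (recruitment via pairwise encounters).
    """
    rng = np.random.default_rng(seed)
    n = N // 2  # agents currently in state A
    path = np.empty(steps)
    for k in range(steps):
        if rng.random() < n / N:        # picked agent is in state A
            if rng.random() < eps + h * (N - n):
                n -= 1
        else:                           # picked agent is in state B
            if rng.random() < eps + h * n:
                n += 1
        path[k] = n / N
    return path

x = kirman()
print(0.0 <= x.min() and x.max() <= 1.0)
```

For small `eps` relative to `h*N`, the fraction `x` spends long stretches near 0 or 1 (herding), which is the bistable behavior the stochastic long-range-memory models reproduce macroscopically.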
Liu, Meng; Wang, Ke
2010-12-07
This is a continuation of our paper [Liu, M., Wang, K., 2010. Persistence and extinction of a stochastic single-species model under regime switching in a polluted environment, J. Theor. Biol. 264, 934-944]. Taking both white noise and colored noise into account, a stochastic single-species model under regime switching in a polluted environment is studied. Sufficient conditions for extinction, stochastic nonpersistence in the mean, stochastic weak persistence and stochastic permanence are established. The threshold between stochastic weak persistence and extinction is obtained. The results show that different types of noise have different effects on the survival results. Copyright © 2010 Elsevier Ltd. All rights reserved.
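A minimal sketch of the kind of model studied above: a logistic SDE (white noise) whose growth rate switches between two regimes according to a continuous-time Markov chain (colored noise), integrated with Euler-Maruyama. Parameters are assumed for illustration, not taken from the paper.

```python
import numpy as np

def switching_logistic(T=100.0, dt=0.001, r=(0.8, -0.2), a=0.5,
                       sigma=0.2, q=(0.5, 0.5), x0=0.5, seed=3):
    """Euler-Maruyama path of a logistic SDE with Markovian regime
    switching:  dX = X (r(xi_t) - a X) dt + sigma X dW,
    where xi_t jumps between regimes 0 and 1 with rates q."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, state = x0, 0
    path = np.empty(n)
    for k in range(n):
        if rng.random() < q[state] * dt:    # colored noise: regime switch
            state = 1 - state
        dw = rng.standard_normal() * np.sqrt(dt)
        x += x * (r[state] - a * x) * dt + sigma * x * dw  # white noise
        x = max(x, 0.0)
        path[k] = x
    return path

p = switching_logistic()
print(p.min() >= 0.0)
```

Whether such a path persists or goes extinct depends on the regime-averaged growth rate minus the Itô correction, which is the kind of threshold condition the paper derives.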
Hybrid approaches for multiple-species stochastic reaction–diffusion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spill, Fabian, E-mail: fspill@bu.edu; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Guerrero, Pilar
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
NASA Astrophysics Data System (ADS)
Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina
2017-09-01
A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function. The Euler-Maruyama method is employed to solve the model numerically. The efficiency of the stochastic model is measured by comparing the simulated result with the experimental data.
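The Euler-Maruyama method mentioned above discretizes an SDE dX = f(X,t) dt + g(X,t) dW as X_{k+1} = X_k + f dt + g ΔW with ΔW ~ N(0, dt). A generic sketch follows, applied to an assumed logistic drift with multiplicative noise; the paper's fitted CS growth model is not reproduced here.

```python
import numpy as np

def euler_maruyama(f, g, x0, T, n, seed=4):
    """Euler-Maruyama scheme for dX = f(X,t) dt + g(X,t) dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t, x = 0.0, x0
    xs = [x0]
    for _ in range(n):
        dw = rng.standard_normal() * np.sqrt(dt)  # Brownian increment
        x = x + f(x, t) * dt + g(x, t) * dw
        t += dt
        xs.append(x)
    return np.array(xs)

# Illustrative drift/diffusion (assumptions, not the paper's fitted model):
# logistic growth toward carrying capacity 10 with multiplicative noise.
growth = euler_maruyama(f=lambda x, t: 0.3 * x * (1 - x / 10.0),
                        g=lambda x, t: 0.1 * x,
                        x0=0.5, T=30.0, n=3000)
print(len(growth))  # 3001 points including the initial condition
```

Maximum likelihood estimation of the drift/diffusion parameters would then compare many such simulated transitions against the observed growth data.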
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed
2017-03-01
In this paper, we develop a mathematical model for a tuberculosis model with constant recruitment and varying total population size by incorporating stochastic perturbations. By constructing suitable stochastic Lyapunov functions, we establish sufficient conditions for the existence of an ergodic stationary distribution as well as extinction of the disease to the stochastic system.
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
Gompertzian stochastic model with delay effect to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-02-03
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt non-linear least squares optimization method. We apply the Milstein scheme to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result with the clinical data on cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
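The Milstein scheme adds the correction term ½ g g' (ΔW² − Δt) to the Euler-Maruyama update, raising the strong convergence order from 0.5 to 1.0. Below is a sketch for a Gompertzian SDE without the delay term, with assumed parameters rather than the paper's estimates.

```python
import numpy as np

def milstein_gompertz(x0=0.2, K=1.0, r=0.3, sigma=0.1, T=50.0,
                      n=5000, seed=5):
    """Milstein scheme for the Gompertzian SDE
        dX = r X ln(K/X) dt + sigma X dW.
    For g(x) = sigma*x the correction term is 0.5*sigma^2*x*(dW^2 - dt).
    (The delay effect of the paper is omitted for brevity.)
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x = x0
    xs = np.empty(n + 1)
    xs[0] = x0
    for k in range(n):
        dw = rng.standard_normal() * np.sqrt(dt)
        drift = r * x * np.log(K / x)
        diff = sigma * x
        # Milstein correction: 0.5 * g * (dg/dx) * (dW^2 - dt), dg/dx = sigma
        x = x + drift * dt + diff * dw + 0.5 * diff * sigma * (dw**2 - dt)
        x = max(x, 1e-12)  # guard against numerical underflow in log()
        xs[k + 1] = x
    return xs

path = milstein_gompertz()
print(path[0] == 0.2 and np.isfinite(path).all())
```

A delayed version would evaluate the drift at the lagged state `x(t - tau)`, requiring a buffer of past values.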
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy that considers the major uncertainties of a forecasting system is a crucial issue in ensemble forecasting. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecasting biases and thereby corrects the forecast through this nonlinear heating mechanism.
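The perturbation strategy described above (zero-mean noise added to each member at every step, with model nonlinearity turning zero-mean input into a nonzero ensemble-mean effect) can be sketched generically. The one-variable `step` function below is a toy stand-in for the intermediate coupled model, and all names and values are assumptions for illustration.

```python
import numpy as np

def perturbed_forecast(step, x0, months=12, n_members=20, sigma=0.1, seed=9):
    """Ensemble forecast in which every member's state is perturbed at each
    step with zero-mean Gaussian model-error noise; `step` is any
    deterministic model step function acting on a state vector."""
    rng = np.random.default_rng(seed)
    members = np.repeat(np.atleast_1d(x0)[None, :], n_members, axis=0).astype(float)
    for _ in range(months):
        members = np.array([step(m) for m in members])
        members += rng.normal(0.0, sigma, members.shape)  # zero-mean perturbations
    return members.mean(axis=0)

# Toy nonlinear "model": the saturating quadratic term rectifies zero-mean
# input noise into a nonzero ensemble-mean shift, mimicking the nonlinear
# heating mechanism described in the abstract.
mean_state = perturbed_forecast(lambda x: 0.9 * x + 0.5 * x**2 / (1 + x**2),
                                x0=[0.1])
print(mean_state.shape)  # (1,)
```

With a purely linear `step`, the zero-mean perturbations would leave the ensemble mean unchanged in expectation; the nonlinearity is what makes the perturbations skill-relevant.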
Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process.
Jahn, Patrick; Berg, Rune W; Hounsgaard, Jørn; Ditlevsen, Susanne
2011-11-01
Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms of spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein-Uhlenbeck process in particular is popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models such as the square-root model or models with a non-linear drift are also sometimes applied. Data that can be described by such models have to be stationary, and thus the simple models can only be applied over short time windows. However, experimental data show varying time constants, state-dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein-Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity to which the neuron is exposed can reasonably be estimated as a thresholded version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
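The two competing diffusions discussed above can be compared in simulation. This is a sketch with assumed membrane parameters (mV and ms units, and an assumed lower bound `VI` for the square-root model); the paper's jump terms, time-inhomogeneous input, and estimated parameters are not included.

```python
import numpy as np

def simulate(kind, v0=-60.0, mu=-50.0, tau=20.0, sigma=1.5,
             T=1000.0, dt=0.1, seed=6):
    """Euler paths for two membrane-potential diffusions (mV, ms):
    'ou'   : dV = -(V - mu)/tau dt + sigma dW                 (constant noise)
    'sqrt' : dV = -(V - mu)/tau dt + sigma*sqrt(V - VI) dW    (state-dependent)
    with VI an assumed reversal-like lower bound."""
    rng = np.random.default_rng(seed)
    VI = -80.0
    n = int(T / dt)
    v = v0
    out = np.empty(n)
    for k in range(n):
        dw = rng.standard_normal() * np.sqrt(dt)
        if kind == "ou":
            noise = sigma
        else:
            # Noise amplitude grows with depolarization above VI
            noise = sigma * np.sqrt(max(v - VI, 0.0) / 20.0)
        v += -(v - mu) / tau * dt + noise * dw
        v = max(v, VI)  # reflect at the lower bound
        out[k] = v
    return out

ou, sq = simulate("ou"), simulate("sqrt")
print(np.isfinite(ou).all() and np.isfinite(sq).all())
```

The square-root variant makes the fluctuation size depend on the membrane potential itself, which is the feature the paper finds necessary to fit the turtle motoneuron data.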
Effects of stochastic sodium channels on extracellular excitation of myelinated nerve fibers.
Mino, Hiroyuki; Grill, Warren M
2002-06-01
The effects of the stochastic gating properties of sodium channels on the extracellular excitation properties of mammalian nerve fibers were determined by computer simulation. To reduce computation time, a hybrid multicompartment cable model was developed, comprising five central nodes of Ranvier containing stochastic sodium channels and 16 flanking nodes with deterministic membrane dynamics. The excitation properties of the hybrid cable model were comparable with those of a full stochastic cable model in which all 21 nodes of Ranvier contained stochastic sodium channels, indicating the validity of the hybrid cable model. The hybrid cable model was used to investigate whether or not the excitation properties of extracellularly activated fibers were influenced by the stochastic gating of sodium channels, including spike latencies, strength-duration (SD), current-distance (I-X), and recruitment properties. The stochastic properties of the sodium channels in the hybrid cable model had the greatest impact on the temporal dynamics of nerve fibers, i.e., a large variability in latencies, while they did not influence the SD, I-X, or recruitment properties as compared with those of the conventional deterministic cable model. These findings suggest that inclusion of stochastic nodes is not important for model-based design of stimulus waveforms for activation of motor nerve fibers. However, in cases where temporal fine structure is important, for example in sensory neural prostheses in the auditory and visual systems, the stochastic properties of the sodium channels may play a key role in the design of stimulus waveforms.
Modeling stochasticity and robustness in gene regulatory networks.
Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis
2009-06-15
Understanding gene regulation in biological processes and modeling the robustness of the underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of the stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
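The SIN model is stated explicitly in the abstract (flip each gene's state with a predefined probability); the SIF model attaches the noise to the update functions instead. A sketch of one synchronous update under both, using a simplified reading of SIF in which a function "fails to fire" with probability p, so only genes whose function would actually change them are at risk (the precise SIF definition is in the paper):

```python
import random

def boolean_step(state, funcs, noise, p, rng):
    """One synchronous update of a Boolean GRN under two noise models.
    SIN: each gene's updated value is flipped with probability p.
    SIF (simplified reading): with probability p the update function
    fails, and the gene keeps its previous value instead."""
    new = list(state)
    for i, f in enumerate(funcs):
        v = f(state)
        if noise == "SIN" and rng.random() < p:
            v = 1 - v                       # noise on the node
        if noise == "SIF" and v != state[i] and rng.random() < p:
            v = state[i]                    # noise on the function
        new[i] = v
    return new
```

Note that under SIF a gene already at its function's output value is never perturbed, which is one way noise can be represented less heavily than under SIN.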
Analysis of a novel stochastic SIRS epidemic model with two different saturated incidence rates
NASA Astrophysics Data System (ADS)
Chang, Zhengbo; Meng, Xinzhu; Lu, Xiao
2017-04-01
This paper presents a stochastic SIRS epidemic model with two different nonlinear incidence rates and a double-epidemic asymmetry hypothesis, and we develop a mathematical method to obtain the threshold of the stochastic epidemic model. We first investigate the boundedness and extinction of the stochastic system. Furthermore, we use Itô's formula, the comparison theorem and some new inequality techniques for stochastic differential systems to discuss the persistence in mean of the two diseases in three cases. The results indicate that stochastic fluctuations can suppress the disease outbreak. Finally, numerical simulations with different noise disturbance coefficients are carried out to illustrate the theoretical results.
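To make the model class concrete, here is an Euler-Maruyama sketch of a single-disease SIRS system with saturated incidence beta*S*I/(1 + alpha*I) and white noise on the transmission term. The paper's model couples two diseases with two incidence forms; this single-disease reduction and all parameter values are illustrative assumptions:

```python
import math
import random

def sirs_em(S, I, R, beta, alpha, lam, mu, gamma, delta, sigma, dt, steps, rng):
    """Euler-Maruyama for a stochastic SIRS model with saturated incidence.
    lam: recruitment, mu: natural death, gamma: recovery, delta: loss of
    immunity, sigma: noise intensity on the transmission term."""
    for _ in range(steps):
        inc = beta * S * I / (1.0 + alpha * I)
        dW = math.sqrt(dt) * rng.gauss(0, 1)
        noise = sigma * S * I / (1.0 + alpha * I) * dW
        S += (lam - inc - mu * S + delta * R) * dt - noise
        I += (inc - (mu + gamma) * I) * dt + noise
        R += (gamma * I - (mu + delta) * R) * dt
        S, I, R = max(S, 0.0), max(I, 0.0), max(R, 0.0)  # keep states feasible
    return S, I, R
```

Sweeping sigma in such a simulation is the numerical experiment the abstract describes: large enough noise can drive I toward extinction even when the deterministic model would sustain the disease.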
Sparse cliques trump scale-free networks in coordination and competition
Gianetto, David A.; Heydari, Babak
2016-01-01
Cooperative behavior, a natural, pervasive and yet puzzling phenomenon, can be significantly enhanced by networks. Many studies have shown how global network characteristics affect cooperation; however, it is difficult to understand how this occurs from global factors alone, and low-level network building blocks, or motifs, are necessary. In this work, we systematically alter the structure of scale-free and clique networks and show, through a stochastic evolutionary game theory model, that cooperation on cliques increases linearly with community motif count. We further show that, for reactive stochastic strategies, network modularity improves cooperation in the anti-coordination Snowdrift game and the Prisoner’s Dilemma game, but not in the Stag Hunt coordination game. We also confirm the negative effect of the scale-free graph on cooperation when effective payoffs are used. On the flip side, clique graphs are highly cooperative across social environments. Adding cycles to the acyclic scale-free graph increases cooperation when multiple games are considered; however, cycles have the opposite effect on how forgiving agents are when playing the Prisoner’s Dilemma game. PMID:26899456
Gene selection heuristic algorithm for nutrigenomics studies.
Valour, D; Hue, I; Grimard, B; Valour, B
2013-07-15
Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning with fast computation in R. Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. A homogeneous Markov process applied to this stochastic matrix then converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process serves biologists, who often need small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.
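The convergence step the abstract relies on, a homogeneous Markov process driven by a row-stochastic matrix converging to limiting probabilities, can be sketched with plain power iteration (a generic illustration, not the LEM implementation):

```python
def stationary(P, tol=1e-12, max_iter=10000):
    """Power iteration for the stationary distribution of a row-stochastic
    matrix P (given as a list of rows): iterate p <- p @ P until the
    probabilities stop changing."""
    n = len(P)
    p = [1.0 / n] * n
    for _ in range(max_iter):
        q = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(p, q)) < tol:
            return q
        p = q
    return p
```

For an ergodic chain the iterates converge to the unique distribution satisfying pi = pi * P, which is the sense in which the interconnection probabilities "converge" in the algorithm.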
Probabilistic switching circuits in DNA
Wilhelm, Daniel; Bruck, Jehoshua
2018-01-01
A natural feature of molecular systems is their inherent stochastic behavior. A fundamental challenge related to the programming of molecular information processing systems is to develop a circuit architecture that controls the stochastic states of individual molecular events. Here we present a systematic implementation of probabilistic switching circuits, using DNA strand displacement reactions. Exploiting the intrinsic stochasticity of molecular interactions, we developed a simple, unbiased DNA switch: An input signal strand binds to the switch and releases an output signal strand with probability one-half. Using this unbiased switch as a molecular building block, we designed DNA circuits that convert an input signal to an output signal with any desired probability. Further, this probability can be switched between 2^n different values by simply varying the presence or absence of n distinct DNA molecules. We demonstrated several DNA circuits that have multiple layers and feedback, including a circuit that converts an input strand to an output strand with eight different probabilities, controlled by the combination of three DNA molecules. These circuits combine the advantages of digital and analog computation: They allow a small number of distinct input molecules to control a diverse signal range of output molecules, while keeping the inputs robust to noise and the outputs at precise values. Moreover, arbitrarily complex circuit behaviors can be implemented with just a single type of molecular building block. PMID:29339484
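The composition principle, building any probability k/2^n out of nothing but unbiased probability-1/2 switches, has a simple software analogy (this is an abstraction of the idea, not the DNA strand-displacement implementation):

```python
import random

def probabilistic_circuit(k, n, rng):
    """Emit 1 with probability k / 2**n using only fair (prob-1/2)
    switches: n fair flips form a uniform n-bit number, and comparing
    it with the threshold k selects the output. Changing k picks one of
    the 2**n selectable probabilities."""
    x = 0
    for _ in range(n):
        x = (x << 1) | (1 if rng.random() < 0.5 else 0)
    return 1 if x < k else 0
```

With n = 3 this gives the eight selectable probabilities 0/8 through 7/8, matching the three-molecule, eight-probability circuit the abstract demonstrates.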
Stochastic Human Exposure and Dose Simulation Model for Pesticides
SHEDS-Pesticides (Stochastic Human Exposure and Dose Simulation Model for Pesticides) is a physically-based stochastic model developed to quantify exposure and dose of humans to multimedia, multipathway pollutants. Probabilistic inputs are combined in physical/mechanistic algorit...
Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction
2016-02-25
We have completed a short program of theoretical research on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the areas of quantum probability and quantum stochastic differential equations.
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
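The model class discussed here is the Langevin equation dX = a*X dt + b*X^gamma dW, with gamma = 1 recovering geometric Brownian motion and fractional 0 < gamma < 1 giving the power-law multiplicative noise the authors favor. An Euler-Maruyama sketch (parameter values are illustrative, not fits from the paper):

```python
import math
import random

def growth_path(x0, a, b, gamma, dt, steps, rng):
    """Euler-Maruyama for dX = a*X dt + b*X**gamma dW.
    The max(x, 0) guards keep the discretized path nonnegative so the
    fractional power is well defined."""
    x = x0
    for _ in range(steps):
        x += a * x * dt + b * max(x, 0.0) ** gamma * math.sqrt(dt) * rng.gauss(0, 1)
        x = max(x, 0.0)
    return x
```

Rescaling an ensemble of such paths by the ensemble mean and examining whether the rescaled distribution becomes stationary is the diagnostic the abstract uses to discriminate gamma = 1 from fractional gamma.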
Dynamics of a stochastic multi-strain SIS epidemic model driven by Lévy noise
NASA Astrophysics Data System (ADS)
Chen, Can; Kang, Yanmei
2017-01-01
A stochastic multi-strain SIS epidemic model is formulated by introducing Lévy noise into the disease transmission rate of each strain. First, we prove that the stochastic model admits a unique global positive solution, and, by the comparison theorem, we show that the solution remains within a positively invariant set almost surely. Next we investigate stochastic stability of the disease-free equilibrium, including stability in probability and pth moment asymptotic stability. Then sufficient conditions for persistence in the mean of the disease are established. Finally, based on an Euler scheme for Lévy-driven stochastic differential equations, numerical simulations for a stochastic two-strain model are carried out to verify the theoretical results. Moreover, numerical comparison results of the stochastic two-strain model and the deterministic version are also given. Lévy noise can cause the two strains to become extinct almost surely, even though there is a dominant strain that persists in the deterministic model. It can be concluded that the introduction of Lévy noise reduces the disease extinction threshold, which indicates that Lévy noise may suppress the disease outbreak.
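The numerical scheme mentioned at the end, an Euler scheme for Lévy-driven SDEs, can be sketched for a single strain by adding compound-Poisson jumps to a Brownian SIS discretization. The paper couples several strains and places the Lévy noise in each transmission rate; this one-strain form and its parameters are illustrative assumptions:

```python
import math
import random

def sis_levy_em(I0, N, beta, mu, gamma, sigma, jrate, jsize, dt, steps, rng):
    """Euler step for dI = (beta*(N-I)*I/N - (mu+gamma)*I) dt
    + sigma*I dW + jsize*I dN_t, with dN_t a Poisson(jrate*dt) jump
    indicator (at most one jump per step; P(>1 jump) is O(dt^2))."""
    I = I0
    for _ in range(steps):
        drift = beta * (N - I) * I / N - (mu + gamma) * I
        dW = math.sqrt(dt) * rng.gauss(0, 1)
        jump = jsize * I if rng.random() < jrate * dt else 0.0
        I += drift * dt + sigma * I * dW + jump
        I = min(max(I, 0.0), N)  # keep the state in the invariant set [0, N]
    return I
```

Making jsize more negative or jrate larger in such a simulation mimics the abstract's observation that Lévy jumps can push a strain to extinction that persists deterministically.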
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu
2014-01-15
A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
Stochastic dynamics of melt ponds and sea ice-albedo climate feedback
NASA Astrophysics Data System (ADS)
Sudakov, Ivan
Evolution of melt ponds on the Arctic sea surface is a complicated stochastic process. We suggest a low-order model with ice-albedo feedback that describes the stochastic dynamics of the geometrical characteristics of melt ponds. The model is a stochastic dynamical system model of energy balance in the climate system. We describe the equilibria in this model and conclude that the transition in the fractal dimension of melt ponds affects the shape of the sea ice albedo curve.
Effects of Stochastic Traffic Flow Model on Expected System Performance
2012-12-01
NSWC-PCD has made considerable improvements to its pedestrian flow modeling. In addition to linear paths, the 2011 version now includes stochastic paths. Modeling pedestrian traffic flow as a stochastic process begins with the linear path model: for a detection area of R x C voxels, there are C^2 total linear paths.
Estimating the Size of a Large Network and its Communities from a Random Sample
Chen, Lin; Karbasi, Amin; Crawford, Forrest W.
2017-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924
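The PULSE algorithm itself is not specified in the abstract, but the observation model it describes (an induced subgraph plus each sampled vertex's total degree) already supports a naive moment estimator worth contrasting against: for a uniform sample of n vertices, E[d_within] ~ d_total*(n-1)/(N-1). This baseline, not PULSE, is what the sketch below implements:

```python
def naive_size_estimate(total_deg, within_deg):
    """Naive moment estimator of the population size N from the total
    degree and the within-sample degree of each sampled vertex:
    N ~ 1 + (n-1) * sum(d_total) / sum(d_within)."""
    n = len(total_deg)
    s = sum(within_deg)
    return float("inf") if s == 0 else 1 + (n - 1) * sum(total_deg) / s
```

On a one-block SBM (an Erdős–Rényi graph) this recovers N reasonably well; exploiting block memberships across K communities, as PULSE does, is what sharpens the community-level estimates.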
Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering
NASA Astrophysics Data System (ADS)
Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki
2016-11-01
We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from both the network structure and the vertex attribute data. Our approach is based on the Bayesian approach that models the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved by using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
NASA Astrophysics Data System (ADS)
Erkyihun, Solomon Tassew; Rajagopalan, Balaji; Zagona, Edith; Lall, Upmanu; Nowak, Kenneth
2016-05-01
A model to generate stochastic streamflow projections conditioned on quasi-oscillatory climate indices such as Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) is presented. Recognizing that each climate index has underlying band-limited components that contribute most of the energy of the signals, we first pursue a wavelet decomposition of the signals to identify and reconstruct these features from annually resolved historical data and proxy based paleoreconstructions of each climate index covering the period from 1650 to 2012. A K-Nearest Neighbor block bootstrap approach is then developed to simulate the total signal of each of these climate index series while preserving its time-frequency structure and marginal distributions. Finally, given the simulated climate signal time series, a K-Nearest Neighbor bootstrap is used to simulate annual streamflow series conditional on the joint state space defined by the simulated climate index for each year. We demonstrate this method by applying it to simulation of streamflow at Lees Ferry gauge on the Colorado River using indices of two large scale climate forcings: Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO), which are known to modulate the Colorado River Basin (CRB) hydrology at multidecadal time scales. Skill in stochastic simulation of multidecadal projections of flow using this approach is demonstrated.
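The final step of the procedure, K-nearest-neighbor resampling of annual flow conditional on the simulated climate-index state, can be sketched as follows, using the standard 1/rank weight kernel for KNN bootstraps (a sketch of the conditional resampling step only; not the authors' full wavelet-plus-bootstrap pipeline):

```python
import random

def knn_conditional_sample(x_new, xs, ys, K, rng):
    """Given a simulated climate-index value x_new, pick among the K
    historical years whose index values xs are closest, weighted by
    w_r = 1/r for rank r, and return that year's flow from ys."""
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x_new))[:K]
    w = [1.0 / (r + 1) for r in range(len(order))]
    u, acc = rng.random() * sum(w), 0.0
    for idx, wi in zip(order, w):
        acc += wi
        if u <= acc:
            return ys[idx]
    return ys[order[-1]]
```

Because the output is always a resampled historical value, marginal distributions of the observed flows are preserved by construction, which is the appeal of the KNN bootstrap in this setting.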
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
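The "heuristic stochastic models" at issue are Gillespie simulations whose propensities are non-elementary functions inherited from a deterministic QSSA reduction. A minimal example of that construction, a birth-death process with a Hill-function birth propensity (negative autoregulation; parameter values are illustrative, and this is not one of the paper's specific case studies):

```python
import random

def gillespie_hill(x0, v, Kd, h, d, t_end, rng):
    """Gillespie SSA for a birth-death process with a non-elementary
    birth propensity a(x) = v*Kd**h/(Kd**h + x**h) and death propensity
    d*x. Returns the copy number at time t_end."""
    x, t = x0, 0.0
    while True:
        a_birth = v * Kd ** h / (Kd ** h + x ** h)
        a0 = a_birth + d * x
        t += rng.expovariate(a0)       # time to next reaction
        if t > t_end:
            return x
        if rng.random() * a0 < a_birth:
            x += 1
        else:
            x -= 1
```

The paper's test then amounts to checking whether the deterministic reduction dx/dt = a(x) - d*x stays accurate over the range of x the stochastic fluctuations actually visit, not just at the deterministic fixed point.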
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available from literature. The SPN model catches the behavior of the wild type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effect in cellular systems has been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study stochastic effect. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows its unique advantages in the modeling and simulation of biochemical systems. The efficiency of hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performances of three ODE solvers.
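The coupling the abstract describes, an ODE integrator repeatedly interrupted by stochastic reaction events, can be illustrated with a toy fixed-step splitting scheme (the paper's implementation uses proper event-time integration with production ODE solvers; everything here, including the rates, is an illustrative assumption):

```python
import random

def hybrid_ode_ssa(y0, x0, t_end, dt, rng):
    """Toy hybrid scheme: a continuous species y follows the ODE
    dy/dt = k1*x - k2*y (deterministic subsystem), while a discrete
    switch x in {0, 1} flips through stochastic events; each small ODE
    step may be interrupted by at most one stochastic firing."""
    k1, k2, a_on, a_off = 2.0, 1.0, 0.5, 0.5
    y, x, t = y0, x0, 0.0
    while t < t_end:
        y += (k1 * x - k2 * y) * dt       # explicit Euler for the ODE part
        a = a_on if x == 0 else a_off     # switching propensity
        if rng.random() < a * dt:         # fire at most one event per step
            x = 1 - x
        t += dt
    return y, x
```

The efficiency concern the paper addresses is visible even here: every stochastic firing invalidates the smooth trajectory the ODE solver is building, which is why the interface between the two subsystems dominates the cost.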
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ), while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of relative frequencies νn, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden-variable description; responses of macro-apparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of the p-adic stochastics of the microworld and the ordinary stochastics of macro-apparatuses. In this model probabilities for physical observables are ordinary probabilities. At the same time, Bell's inequality is violated.
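The p-adic metric in which the frequency limits are taken is induced by the p-adic norm, which measures divisibility by the prime p rather than magnitude. A small implementation for rational arguments (a standard definition, included only to make the metric concrete):

```python
from fractions import Fraction

def p_adic_norm(x, p):
    """|x|_p = p**(-v_p(x)) for rational x, where v_p(x) is the net power
    of the prime p dividing x; |0|_p = 0 by convention."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)
```

Under this norm a sequence of rational relative frequencies νn can converge p-adically (e.g. |2**n|_2 = 2**(-n) → 0) even when it has no limit in [0, 1], which is what lets p-adic frequency probabilities behave so differently from Kolmogorov ones.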
Study on individual stochastic model of GNSS observations for precise kinematic applications
NASA Astrophysics Data System (ADS)
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, comprising functional and stochastic models, is a prerequisite for obtaining the optimal estimation of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of the observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental results indicate that using an individual stochastic model of the observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with the same variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
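A common form of the elevation-dependent variance model the abstract alludes to is sigma^2 = a^2 + b^2/sin(E)^2. The sketch below uses generic a, b values, not the paper's calibrated estimates, and builds the diagonal weight matrix, i.e. exactly the simplification (independent observations, no cross-correlation) that an individually calibrated stochastic model improves on:

```python
import math

def phase_variance(elev_deg, a=3.0, b=3.0):
    """Elevation-dependent variance of a GNSS carrier-phase observation
    (mm**2): sigma^2 = a^2 + b^2 / sin(E)^2, so low-elevation signals are
    noisier and receive less weight."""
    s = math.sin(math.radians(elev_deg))
    return a * a + b * b / (s * s)

def diagonal_weights(elevs_deg):
    """Diagonal weight matrix (inverse variances) for a set of satellites
    at the given elevations -- no cross-correlation terms."""
    n = len(elevs_deg)
    return [[1.0 / phase_variance(elevs_deg[i]) if i == j else 0.0
             for j in range(n)] for i in range(n)]
```

Populating the off-diagonal entries from estimated cross-correlations, per receiver, is the individual stochastic model whose benefit the experiments demonstrate.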
NASA Astrophysics Data System (ADS)
Ferwerda, Cameron; Lipan, Ovidiu
2016-11-01
Akin to electric circuits, we construct biocircuits that are manipulated by cutting and assembling channels through which stochastic information flows. This diagrammatic manipulation allows us to create a method which constructs networks by joining building blocks selected so that (a) they cover only basic processes; (b) it is scalable to large networks; (c) the mean and variance-covariance from the Pauli master equation form a closed system; and (d) given the initial probability distribution, no special boundary conditions are necessary to solve the master equation. The method aims to help with both designing new synthetic signaling pathways and quantifying naturally existing regulatory networks.
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Some Stochastic-Duel Models of Combat.
1983-03-01
[Partially illegible scanned DTIC record] AD-R127 879: Some Stochastic-Duel Models of Combat. Master's thesis by Jum Soo Choe, Naval Postgraduate School, Monterey, California, March 1983. Thesis Advisor: J. G...
Stochastic Geometric Models with Non-stationary Spatial Correlations in Lagrangian Fluid Flows
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Holm, Darryl D.
2018-01-01
Inspired by spatiotemporal observations from satellites of the trajectories of objects drifting near the surface of the ocean in the National Oceanic and Atmospheric Administration's "Global Drifter Program", this paper develops data-driven stochastic models of geophysical fluid dynamics (GFD) with non-stationary spatial correlations representing the dynamical behaviour of oceanic currents. Three models are considered. Model 1 from Holm (Proc R Soc A 471:20140963, 2015) is reviewed, in which the spatial correlations are time independent. Two new models, called Model 2 and Model 3, introduce two different symmetry breaking mechanisms by which the spatial correlations may be advected by the flow. These models are derived using reduction by symmetry of stochastic variational principles, leading to stochastic Hamiltonian systems, whose momentum maps, conservation laws and Lie-Poisson bracket structures are used in developing the new stochastic Hamiltonian models of GFD.
Dung Tuan Nguyen
2012-01-01
Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...
A spatial stochastic programming model for timber and core area management under risk of fires
Yu Wei; Michael Bevers; Dung Nguyen; Erin Belval
2014-01-01
Previous stochastic models in harvest scheduling seldom address explicit spatial management concerns under the influence of natural disturbances. We employ multistage stochastic programming models to explore the challenges and advantages of building spatial optimization models that account for the influences of random stand-replacing fires. Our exploratory test models...
Charting the Replica Symmetric Phase
NASA Astrophysics Data System (ADS)
Coja-Oghlan, Amin; Efthymiou, Charilaos; Jaafari, Nor; Kang, Mihyun; Kapetanopoulos, Tobias
2018-02-01
Diluted mean-field models are spin systems whose geometry of interactions is induced by a sparse random graph or hypergraph. Such models play an eminent role in the statistical mechanics of disordered systems as well as in combinatorics and computer science. In a path-breaking paper based on the non-rigorous `cavity method', physicists predicted not only the existence of a replica symmetry breaking phase transition in such models but also sketched a detailed picture of the evolution of the Gibbs measure within the replica symmetric phase and its impact on important problems in combinatorics, computer science and physics (Krzakala et al. in Proc Natl Acad Sci 104:10318-10323, 2007). In this paper we rigorise this picture completely for a broad class of models, encompassing the Potts antiferromagnet on the random graph, the k-XORSAT model and the diluted k-spin model for even k. We also prove a conjecture about the detection problem in the stochastic block model that has received considerable attention (Decelle et al. in Phys Rev E 84:066106, 2011).
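The stochastic block model mentioned at the end of the abstract can be sampled in a few lines. The sketch below uses the standard two-group symmetric formulation together with the detectability (Kesten-Stigum) condition from Decelle et al.; the parameter values are illustrative:

```python
import random

def sample_sbm(n=200, c_in=8.0, c_out=2.0, seed=0):
    """Sample a two-block stochastic block model with average
    within-group degree c_in and between-group degree c_out
    (sparse regime: edge probabilities c/n)."""
    rng = random.Random(seed)
    group = [i % 2 for i in range(n)]
    p_in, p_out = c_in / n, c_out / n
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if group[i] == group[j] else p_out
            if rng.random() < p:
                edges.append((i, j))
    return group, edges

def detectable(c_in, c_out):
    """Kesten-Stigum condition for the symmetric two-group SBM
    (Decelle et al. 2011): detection of the planted partition is
    possible iff (c_in - c_out)^2 > 2 (c_in + c_out)."""
    return (c_in - c_out) ** 2 > 2 * (c_in + c_out)

group, edges = sample_sbm()
print(len(edges), detectable(8.0, 2.0))
```

With these parameters the graph has roughly 500 edges and sits well inside the detectable phase; shrinking the gap between c_in and c_out pushes it below the threshold.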
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
Variational principles for stochastic fluid dynamics
Holm, Darryl D.
2015-01-01
This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083
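The quadratic covariation drift term can be made concrete on a scalar toy model. The sketch below integrates the Stratonovich SDE dX = σX ∘ dW with a Heun scheme and its Itô representation dX = ½σ²X dt + σX dW with Euler-Maruyama, driven by the same noise; the extra ½σ²X dt is exactly the Stratonovich-to-Itô conversion term the abstract discusses (scalar illustration only, not the paper's geometric fluid setting):

```python
import math, random

def simulate(sigma=0.5, t_end=1.0, n_steps=200, n_paths=2000, seed=42):
    """Compare the Stratonovich SDE dX = sigma X o dW (Heun scheme)
    with its Ito form dX = 0.5 sigma^2 X dt + sigma X dW
    (Euler-Maruyama), using the same Brownian increments."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    mean_strat = mean_ito = 0.0
    for _ in range(n_paths):
        xs = xi = 1.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            # Heun (Stratonovich): average the noise coefficient
            # between the current point and a predictor
            pred = xs + sigma * xs * dw
            xs += 0.5 * sigma * (xs + pred) * dw
            # Euler-Maruyama on the Ito form with the correction drift
            xi += 0.5 * sigma ** 2 * xi * dt + sigma * xi * dw
        mean_strat += xs / n_paths
        mean_ito += xi / n_paths
    return mean_strat, mean_ito

ms, mi = simulate()
print(ms, mi, math.exp(0.5 * 0.5 ** 2))  # both near exp(sigma^2 t / 2)
```

Dropping the correction drift from the Itô integration would instead converge to the mean of the Itô SDE dX = σX dW, which stays at 1.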
Liu, Meng; Wang, Ke
2010-06-07
A new single-species model disturbed by both white noise and colored noise in a polluted environment is developed and analyzed. Sufficient criteria for extinction, stochastic nonpersistence in the mean, stochastic weak persistence in the mean, stochastic strong persistence in the mean and stochastic permanence of the species are established. The threshold between stochastic weak persistence in the mean and extinction is obtained. The results show that both white and colored environmental noises have significant effects on the survival results. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determine conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analysis of novel stochastic switched SILI epidemic models with continuous and impulsive control
NASA Astrophysics Data System (ADS)
Gao, Shujing; Zhong, Deming; Zhang, Yan
2018-04-01
In this paper, we establish two new stochastic switched epidemic models with continuous and impulsive control. The stochastic perturbations are considered for the natural death rate in each equation of the models. Firstly, a stochastic switched SILI model with continuous control schemes is investigated. By using the Lyapunov-Razumikhin method, the sufficient conditions for extinction in mean are established. Our result shows that the disease could theoretically die out if the threshold value R is less than one, regardless of whether the disease-free solutions of the corresponding subsystems are stable or unstable. Then, a stochastic switched SILI model with continuous control schemes and pulse vaccination is studied. The threshold value R is derived. The global attractivity of the model is also obtained. Finally, numerical simulations are carried out to support our results.
Stochastic and deterministic models for agricultural production networks.
Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D
2007-07-01
An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
One-Week Module on Stochastic Groundwater Modeling
ERIC Educational Resources Information Center
Mays, David C.
2010-01-01
This article describes a one-week introduction to stochastic groundwater modeling, intended for the end of a first course on groundwater hydrology, or the beginning of a second course on stochastic hydrogeology or groundwater modeling. The motivation for this work is to strengthen groundwater education, which has been identified among the factors…
A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.
Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S
2017-09-01
We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
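The branching-process approximation of the outbreak probability reduces, in the single-type case, to a fixed-point computation on the offspring probability generating function. The sketch below assumes a Poisson offspring distribution; the multitype Galton-Watson calculation in the paper generalizes this to distinct host and tick types:

```python
import math

def extinction_probability(r0, tol=1e-12):
    """Probability that an outbreak started by one infective dies out,
    for a Galton-Watson branching process with Poisson(r0) offspring:
    the smallest root of q = exp(r0 * (q - 1)).  For r0 <= 1 the
    iteration converges to q = 1 (certain extinction)."""
    q = 0.0
    while True:
        q_new = math.exp(r0 * (q - 1.0))   # pgf of Poisson(r0) at q
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

print(extinction_probability(2.0))   # ~0.203, so a major outbreak ~80%
print(extinction_probability(0.8))   # ~1.0: subcritical, always dies out
```

Starting the iteration at q = 0 guarantees convergence to the smallest fixed point, which is the probabilistically correct root.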
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed `stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
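The SWM recipe, a deterministic watershed model rerun under stochastic meteorology and multiplicative model error, can be sketched as follows. The linear-reservoir "watershed model" and lognormal error magnitudes below are stand-ins, not models from the commentary:

```python
import random

def watershed_model(precip, k=0.6):
    """Stand-in deterministic watershed model: streamflow as a simple
    linear-reservoir response to precipitation (purely illustrative)."""
    flow, store = [], 0.0
    for p in precip:
        store += p
        q = k * store
        store -= q
        flow.append(q)
    return flow

def swm_ensemble(precip, n_traces=100, noise=0.2, seed=0):
    """Stochastic watershed model: rerun the deterministic model under
    perturbed meteorology, then apply multiplicative model error, to
    build an ensemble of plausible streamflow traces."""
    rng = random.Random(seed)
    traces = []
    for _ in range(n_traces):
        p_stoch = [p * rng.lognormvariate(0.0, noise) for p in precip]
        q = [x * rng.lognormvariate(0.0, noise) for x in watershed_model(p_stoch)]
        traces.append(q)
    return traces

ens = swm_ensemble([5, 0, 12, 3, 0, 8])
print(len(ens), len(ens[0]))  # 100 traces of 6 time steps
```

Quantiles across the ensemble at each time step then feed directly into risk-based decision rules.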
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
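As a minimal illustration of the stochastic collocation idea, the expectation of a quantity of interest with a Gaussian input can be computed by evaluating the (deterministic) model only at Gauss-Hermite nodes. This one-dimensional toy is not the SROM method or any of the report's examples:

```python
import numpy as np

def stochastic_collocation(f, n_nodes=10):
    """Stochastic collocation for a scalar QoI f(X) with X ~ N(0, 1):
    evaluate the model at probabilists' Gauss-Hermite nodes and combine
    with the quadrature weights to approximate E[f(X)]."""
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    return np.dot(w, f(x)) / np.sqrt(2 * np.pi)  # weights sum to sqrt(2*pi)

# QoI with a known answer: E[exp(X)] = exp(1/2)
est = stochastic_collocation(np.exp)
print(est, np.exp(0.5))
```

Ten nodes already reproduce the exact value to many digits here; the cost of all three methods in higher dimensions is the question the report investigates.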
Cox process representation and inference for stochastic reaction-diffusion processes
NASA Astrophysics Data System (ADS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2016-05-01
Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
Stochastic growth logistic model with aftereffect for batch fermentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah
2014-06-19
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and the Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme to solve the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data of the microbial growth and solvent production in the batch system. Low values of Root Mean-Square Error (RMSE) for the stochastic models with aftereffect indicate good fits.
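A sketch of the Milstein scheme for a stochastic logistic equation is given below, without the aftereffect (delay) term and with made-up parameters rather than the fitted fermentation values:

```python
import math, random

def milstein_logistic(r=1.2, K=10.0, sigma=0.2, x0=0.5,
                      t_end=10.0, n_steps=1000, seed=7):
    """Milstein scheme for the stochastic logistic SDE
    dX = r X (1 - X/K) dt + sigma X dW.  With diffusion g(x) = sigma x,
    the Milstein correction term is 0.5 sigma^2 x (dW^2 - dt).
    Parameters are illustrative, not fitted C. acetobutylicum values."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += (r * x * (1 - x / K) * dt + sigma * x * dw
              + 0.5 * sigma ** 2 * x * (dw * dw - dt))
        path.append(x)
    return path

path = milstein_logistic()
print(path[-1])  # fluctuates around the carrying capacity K = 10
```

The correction term raises the strong convergence order from 1/2 (Euler-Maruyama) to 1; a delay version would evaluate the drift at a lagged state.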
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% of those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
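The batch-farming pattern the system exploits, independent realizations distributed over a worker pool, can be sketched with the standard library. The "realization" below is a stand-in Monte Carlo computation, not a MODFLOW run, and a thread pool is used only for portability; CPU-bound work in practice goes to processes or cluster nodes, as in the paper:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_realization(seed):
    """Stand-in for one groundwater-model run on a stochastic
    conductivity field: here just an illustrative Monte Carlo average."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000

def run_batch(n_realizations=500, n_workers=10):
    """Distribute independent realizations over a worker pool -- the
    embarrassingly parallel structure that makes stochastic groundwater
    modeling a good fit for distributed execution."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(run_realization, range(n_realizations)))

results = run_batch()
print(len(results), sum(results) / len(results))  # 500 realizations
```

Because realizations share nothing, speedup is limited mainly by per-run I/O and scheduling overhead, which is why the reported 50-thread run approaches the ideal reduction.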
Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity
NASA Astrophysics Data System (ADS)
Thual, S.; Majda, A.; Chen, N.
2017-12-01
The El Niño-Southern Oscillation (ENSO) has significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad hoc assumptions, and that accounts in a simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events, while the increased convective activity and cloud cover during the following spring contribute to the shutdown of those events by blocking incoming shortwave solar radiation. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, both the eastern Pacific moderate and super El Niño, the central Pacific El Niño as well as La Niña show a realistic chronology with a tendency to peak in boreal winter, as well as decreased predictability in spring consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.
Deterministic and stochastic CTMC models from Zika disease transmission
NASA Astrophysics Data System (ADS)
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women infected with Zika virus are at risk of having a fetus or infant with a congenital defect such as microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from the deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
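A minimal CTMC epidemic of this kind can be simulated event by event. The sketch below is a single-population SIR chain, not the paper's vector-host Zika model, and the rates are illustrative; the fraction of minor outbreaks approximates the branching-process extinction probability 1/R0:

```python
import random

def sir_ctmc(beta=0.4, gamma=0.2, s0=200, i0=1, seed=None):
    """Continuous-time Markov chain SIR: infection events at rate
    beta*S*I/N, recovery events at rate gamma*I.  Event times are not
    tracked because only the final epidemic size is returned."""
    rng = random.Random(seed)
    n = s0 + i0
    s, i, recovered = s0, i0, 0
    while i > 0:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        if rng.random() * (rate_inf + rate_rec) < rate_inf:
            s, i = s - 1, i + 1
        else:
            i, recovered = i - 1, recovered + 1
    return recovered

sizes = [sir_ctmc(seed=k) for k in range(200)]
minor = sum(1 for z in sizes if z < 20) / len(sizes)
print(minor)  # ~ gamma/beta = 1/R0 = 0.5 of runs die out early
```

The bimodal split between minor flare-ups and major outbreaks is exactly the stochastic feature the deterministic model cannot capture.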
Hybrid approaches for multiple-species stochastic reaction-diffusion models
NASA Astrophysics Data System (ADS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
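The multiplicative structure of SPPT can be sketched in one dimension: the parametrised tendency T is replaced by (1 + e) T, where e is a mean-zero pattern with prescribed standard deviation and decorrelation time. The scalar AR(1) process and parameter values below are illustrative; the operational scheme uses a spectral pattern correlated in space and time:

```python
import math, random

def sppt_perturb(tendency, steps=100, sigma=0.3, tau=10.0, dt=1.0, seed=3):
    """SPPT-style multiplicative perturbation of a scalar tendency:
    e follows an AR(1) process with standard deviation sigma and
    decorrelation time tau, and the output is (1 + e) * tendency."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)               # AR(1) autocorrelation
    e, out = 0.0, []
    for _ in range(steps):
        e = phi * e + sigma * math.sqrt(1 - phi * phi) * rng.gauss(0.0, 1.0)
        out.append((1.0 + e) * tendency)
    return out

perturbed = sppt_perturb(tendency=2.5)
print(min(perturbed), max(perturbed))  # scatter around the unperturbed 2.5
```

The coarse-graining analysis in the abstract amounts to asking whether the true tendency "error" really scales with the tendency itself and decorrelates on these space and time scales.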
Sato, Tatsuhiko; Furusawa, Yoshiya
2012-10-01
Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly because it ignores the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
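The Markov jump processes at the heart of this inference problem are easy to simulate exactly. A minimal Gillespie simulation of the Lotka-Volterra reaction network (Python/NumPy; the rate constants and initial counts are illustrative, not taken from the paper) generates the kind of time-course data that a particle MCMC scheme would be fitted to.

```python
import numpy as np

def gillespie_lv(x0, y0, c1, c2, c3, t_max, rng):
    """Exact stochastic simulation (Gillespie SSA) of the Lotka-Volterra
    network: X -> 2X (prey birth), X + Y -> 2Y (predation), Y -> 0
    (predator death). This is the kind of Markov jump process whose
    rate constants particle MCMC would infer from time-course data."""
    t, x, y = 0.0, x0, y0
    times, prey, pred = [t], [x], [y]
    while t < t_max and len(times) < 100000:   # guard against blow-up
        rates = np.array([c1 * x, c2 * x * y, c3 * y])
        total = rates.sum()
        if total == 0.0:                        # both species extinct
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(3, p=rates / total)
        if r == 0:
            x += 1
        elif r == 1:
            x, y = x - 1, y + 1
        else:
            y -= 1
        times.append(t); prey.append(x); pred.append(y)
    return np.array(times), np.array(prey), np.array(pred)

rng = np.random.default_rng(1)
t, x, y = gillespie_lv(50, 100, 1.0, 0.005, 0.6, 5.0, rng)
print(len(t), x[-1], y[-1])
```

In practice one observes such paths only at discrete times and with noise, which is what makes the likelihood intractable and motivates the particle MCMC and SDE-approximation machinery of the paper.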
A Control Simulation Method of High-Speed Trains on Railway Network with Irregular Influence
NASA Astrophysics Data System (ADS)
Yang, Li-Xing; Li, Xiang; Li, Ke-Ping
2011-09-01
Based on the discrete time method, an effective movement control model is designed for a group of high-speed trains on a rail network. The purpose of the model is to investigate the specific traffic characteristics of high-speed trains under the interruption of stochastic irregular events. In the model, the high-speed rail traffic system is assumed to be equipped with a moving-block signalling system to guarantee the maximum carrying capacity of the railway. To ensure the safety of train movements, several operational strategies are proposed to control the movements of trains in the model, including traction, braking and entering-station operations. Numerical simulations show that the designed model describes the movements of high-speed trains on the rail network well. The results can provide useful information not only for investigating the propagation features of delays under irregular disturbances but also for rerouting and rescheduling trains on the rail network.
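The interplay of traction and braking operations under moving-block signalling can be illustrated with a minimal discrete-time update (Python; the control rule and all parameters are simplified assumptions, not the paper's exact strategies): a train accelerates only if its braking distance fits within the headway to the train ahead, and otherwise brakes.

```python
def advance_train(x_lead, v_lead, x, v, v_max, a_acc, b_max, dt, margin):
    """One discrete-time update for a following train under moving-block
    signalling (a minimal sketch): accelerate if the full-service braking
    distance fits into the headway to the leading train, otherwise brake."""
    headway = x_lead - x - margin          # usable gap to the leader (m)
    brake_dist = v * v / (2.0 * b_max)     # stopping distance at speed v
    if brake_dist < headway:
        v = min(v + a_acc * dt, v_max)     # traction operation
    else:
        v = max(v - b_max * dt, 0.0)       # braking operation
    return x + v * dt, v

# two trains on one line: the leader cruises, the follower adapts
x1, v1 = 2000.0, 60.0          # leader position (m) and speed (m/s)
x2, v2 = 0.0, 80.0             # follower starts closer and faster
for _ in range(600):           # 10 minutes at dt = 1 s
    x1 = x1 + v1 * 1.0
    x2, v2 = advance_train(x1, v1, x2, v2, 97.0, 0.5, 1.0, 1.0, 200.0)
print(x1 - x2, v2)
```

Stochastic irregular events of the kind studied in the paper would enter such a scheme as random speed restrictions or dwell-time extensions imposed on individual trains.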
Tests of oceanic stochastic parameterisation in a seasonal forecast system.
NASA Astrophysics Data System (ADS)
Cooper, Fenwick; Andrejczuk, Miroslaw; Juricke, Stephan; Zanna, Laure; Palmer, Tim
2015-04-01
Our aim is to compare, over seasonal time scales, the relative impact of ocean initial condition uncertainty and model uncertainty upon ocean forecast skill and reliability. We compare four oceanic stochastic parameterisation schemes applied in a 1x1 degree ocean model (NEMO) with a fully coupled T159 atmosphere (ECMWF IFS). The relative impacts upon the ocean of the resulting eddy-induced activity, wind forcing and typical initial condition perturbations are quantified. Following the historical success of stochastic parameterisation in the atmosphere, two of the parameterisations tested were multiplicative in nature: a stochastic variation of the Gent-McWilliams scheme and a stochastic diffusion scheme. We also consider a surface flux parameterisation (similar to that introduced by Williams, 2012) and a stochastic perturbation of the equation of state (similar to that introduced by Brankart, 2013). The amplitude of the stochastic term in the Williams (2012) scheme was set to the physically reasonable amplitude considered in that paper. The amplitude of the stochastic term in each of the other schemes was increased to the limits of model stability. As expected, variability was increased. Up to 1 month after initialisation, the ensemble spread induced by stochastic parameterisation is greater than that induced by the atmosphere, whilst being smaller than the initial condition perturbations currently used at ECMWF. After 1 month, the wind forcing becomes the dominant source of model ocean variability, even at depth.
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. It is shown that, if measurements of the direct solar radiation are unavailable, there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
Analytical pricing formulas for hybrid variance swaps with regime-switching
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun
2017-11-01
The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rates and regime-switching is considered in this paper. The Heston stochastic volatility model is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to switch according to a continuous-time, observable Markov chain. This hybrid model can be used to describe certain macroeconomic conditions, for example the changing phases of business cycles. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
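Even without the analytical formulas, the fair strike of a discretely-sampled variance swap under such a hybrid model can be approximated by Monte Carlo. The sketch below (Python/NumPy) simulates Euler-Maruyama paths of a Heston variance process coupled with a CIR short rate; regime switching and the leverage correlation between the Brownian drivers are omitted, and all parameters are illustrative assumptions.

```python
import numpy as np

def simulate_hybrid_paths(S0, v0, r0, kappa, theta, xi, a, b, s_r,
                          T, n_steps, n_paths, rng):
    """Euler-Maruyama paths of a Heston variance process coupled with a
    CIR short rate (full truncation keeps v and r non-negative; the
    three Brownian drivers are taken independent for simplicity).
    Returns the annualised realised variance of each path."""
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    r = np.full(n_paths, float(r0))
    logret_sq = np.zeros(n_paths)
    for _ in range(n_steps):
        dW1, dW2, dW3 = rng.normal(0.0, np.sqrt(dt), (3, n_paths))
        vp, rp = np.maximum(v, 0.0), np.maximum(r, 0.0)
        S_new = S * np.exp((rp - 0.5 * vp) * dt + np.sqrt(vp) * dW1)
        logret_sq += np.log(S_new / S) ** 2       # discrete sampling
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp) * dW2
        r += a * (b - rp) * dt + s_r * np.sqrt(rp) * dW3
        S = S_new
    return logret_sq / T

rng = np.random.default_rng(7)
rv = simulate_hybrid_paths(100.0, 0.04, 0.03, 2.0, 0.04, 0.3,
                           1.5, 0.03, 0.1, 1.0, 252, 5000, rng)
print(rv.mean())   # Monte Carlo estimate of the fair variance strike
```

Analytical formulas of the kind derived in the paper replace such sampling with closed-form expectations, which is what makes them attractive for calibration.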
Stochasticity in staged models of epidemics: quantifying the dynamics of whooping cough
Black, Andrew J.; McKane, Alan J.
2010-01-01
Although many stochastic models can accurately capture the qualitative epidemic patterns of many childhood diseases, there is still considerable discussion concerning the basic mechanisms generating these patterns; much of this stems from the use of deterministic models to try to understand stochastic simulations. We argue that a systematic method of analysing models of the spread of childhood diseases is required in order to consistently separate out the effects of demographic stochasticity, external forcing and modelling choices. Such a technique is provided by formulating the models as master equations and using the van Kampen system-size expansion to provide analytical expressions for quantities of interest. We apply this method to the susceptible–exposed–infected–recovered (SEIR) model with distributed exposed and infectious periods and calculate the form that stochastic oscillations take on in terms of the model parameters. With the use of a suitable approximation, we apply the formalism to analyse a model of whooping cough which includes seasonal forcing. This allows us to more accurately interpret the results of simulations and to make a more quantitative assessment of the predictions of the model. We show that the observed dynamics are a result of a macroscopic limit cycle induced by the external forcing and resonant stochastic oscillations about this cycle. PMID:20164086
DRIHM Project: Floods in Serbia in May 2014
NASA Astrophysics Data System (ADS)
Ivkovic, Marija; Dimitrijevic, Vladimir; Dekic, Ljiljana; Mihalovic, Ana; Pejanovic, Goran
2015-04-01
The central parts of the Balkans were affected by the very deep cyclone named "Tamara" from 13 to 16 May. Stations in the western parts of Serbia recorded precipitation four times greater than the average precipitation sums, and two thirds of that amount fell in three days. Devastating floods occurred in the Sava, Kolubara and Jadar river basins, causing damage of 1.7 billion euros and the loss of 24 human lives. Three days before the event, a first warning was issued stating that precipitation amounts would exceed 40 mm of rain in 12 hours, accompanied by the hydrological information that the water levels of the Sava and Kolubara rivers would rise significantly. Within the DRIHM project and its e-infrastructure, it was possible to test a combination of different numerical weather prediction models together with stochastic downscaling algorithms to enable the production of more effective quantitative rainfall predictions for this severe meteorological event. Hydrometeorological models in DRIHM are building blocks that can be easily linked together in the form of a hydrometeorological chain. For this case, the HBV distributed hydrological model was used as the hydrological component in the model chain and RainFARM as the stochastic downscaling tool. Results obtained with these models are shown and compared with Hyprom, one of the hydrological models also used at RHMSS, with the aim of scoping the current capabilities for early warning of extreme events. The information on where and when a High Impact Weather Event (HIWE) can occur is very important for a proper overview of its possible overall influence. Considering different precipitation distributions in both space and time allows us to estimate the future state of the system and to see the range of possible outcomes.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-01-01
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org. PMID:26063822
Machine learning from computer simulations with applications in rail vehicle dynamics
NASA Astrophysics Data System (ADS)
Taheri, Mehdi; Ahmadian, Mehdi
2016-05-01
The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train a stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling-plan strategies such as Latin hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed, in which the most space-filling subset of the acquired data, with ? sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
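A greedy maximin selection is one simple way to pick a space-filling subset from data that already exist. The sketch below (Python/NumPy) assumes Euclidean distance on synthetic two-column input data; the paper's actual selection criterion may differ.

```python
import numpy as np

def greedy_maximin_subset(X, n_select, rng=None):
    """Greedily pick a space-filling subset of already-acquired samples:
    start from a random point, then repeatedly add the candidate whose
    minimum distance to the points chosen so far is largest."""
    rng = rng or np.random.default_rng(0)
    chosen = [int(rng.integers(len(X)))]
    d_min = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(n_select - 1):
        nxt = int(np.argmax(d_min))            # farthest-from-set candidate
        chosen.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 2))   # e.g. (displacement, velocity) samples
idx = greedy_maximin_subset(X, 20, rng)
print(len(set(idx.tolist())))
```

Because already-chosen points have zero distance to the set, the greedy step never picks a duplicate, and each added point fills the largest remaining gap in the input space.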
Stochastic von Bertalanffy models, with applications to fish recruitment.
Lv, Qiming; Pitchford, Jonathan W
2007-02-21
We consider three individual-based models describing growth in stochastic environments. Stochastic differential equations (SDEs) with identical von Bertalanffy deterministic parts are formulated, with a stochastic term which decreases, remains constant, or increases with organism size, respectively. Probability density functions for hitting times are evaluated in the context of fish growth and mortality. Solving the hitting time problem analytically or numerically shows that stochasticity can have a large positive impact on fish recruitment probability. It is also demonstrated that the observed mean growth rate of surviving individuals always exceeds the mean population growth rate, which itself exceeds the growth rate of the equivalent deterministic model. The consequences of these results in more general biological situations are discussed.
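The hitting-time problem described here can be approximated numerically with a simple Monte Carlo scheme. The sketch below (Python/NumPy) uses the size-independent-noise variant of the three models; the thresholds, rates and noise amplitude are illustrative assumptions rather than the paper's fitted values.

```python
import numpy as np

def recruitment_probability(L0, Linf, k, sigma, L_rec, mort_rate,
                            t_max, dt, n_paths, rng):
    """Monte Carlo hitting-time estimate for a stochastic von Bertalanffy
    model dL = k*(Linf - L) dt + sigma dW (constant-noise variant): an
    individual is 'recruited' if its length reaches L_rec before an
    exponentially distributed mortality time."""
    recruited, n_steps = 0, int(t_max / dt)
    for _ in range(n_paths):
        L, t_death = L0, rng.exponential(1.0 / mort_rate)
        for i in range(n_steps):
            if i * dt > t_death:        # dies before reaching L_rec
                break
            L += k * (Linf - L) * dt + sigma * np.sqrt(dt) * rng.normal()
            if L >= L_rec:              # recruitment: threshold hit first
                recruited += 1
                break
    return recruited / n_paths

rng = np.random.default_rng(11)
p_noisy = recruitment_probability(1.0, 10.0, 0.5, 1.0, 8.0, 0.5,
                                  15.0, 0.05, 1000, rng)
p_det = recruitment_probability(1.0, 10.0, 0.5, 0.0, 8.0, 0.5,
                                15.0, 0.05, 1000, rng)
print(p_noisy, p_det)
```

Comparing the noisy and noise-free runs gives a direct numerical handle on the paper's claim that stochasticity can raise the recruitment probability above its deterministic value.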
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
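A chance constraint of this kind is straightforward to check by scenario sampling. The sketch below (Python/NumPy) tests P(total route time <= deadline) >= alpha for one hypothetical sea-transfer-rail route; all distributions, parameters and deadlines are illustrative assumptions, and the paper additionally embeds such constraints in a binary integer programme solved by a hybrid heuristic.

```python
import numpy as np

def satisfies_chance_constraint(leg_time_samples, deadline, alpha):
    """Monte Carlo check of P(total time <= deadline) >= alpha for one
    candidate route whose leg travel/transfer times are given as sampled
    scenarios (rows = scenarios, columns = legs)."""
    total = leg_time_samples.sum(axis=1)
    return bool((total <= deadline).mean() >= alpha)

rng = np.random.default_rng(5)
# hypothetical route: sea leg, port transfer, rail leg (hours)
legs = np.column_stack([
    rng.lognormal(np.log(120), 0.10, 10000),   # stochastic sea travel time
    rng.lognormal(np.log(12), 0.30, 10000),    # stochastic transfer time
    rng.lognormal(np.log(24), 0.15, 10000),    # stochastic rail travel time
])
print(satisfies_chance_constraint(legs, 190.0, 0.95))
```

In a routing model, such a feasibility check would be evaluated for every candidate route, and the heuristic would minimise expected cost over the routes that pass it.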
A stochastic SIS epidemic model with vaccination
NASA Astrophysics Data System (ADS)
Cao, Boqiang; Shan, Meijing; Zhang, Qimin; Wang, Weiming
2017-11-01
In this paper, we investigate the basic features of an SIS-type infectious disease model with varying population size and vaccination in the presence of environmental noise. By applying the Markov semigroup theory, we propose a stochastic reproduction number R0s that serves as a threshold parameter for identifying stochastic extinction and persistence: if R0s < 1, under some mild extra conditions, there exists a disease-free absorbing set for the stochastic epidemic model, which implies that the disease dies out with probability one; while if R0s > 1, under some mild extra conditions, the SDE model has an endemic stationary distribution, which results in the stochastic persistence of the infectious disease. The most interesting finding is that large environmental noise can suppress the outbreak of the disease.
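The noise-suppression effect is easy to reproduce numerically. The sketch below (Python/NumPy) uses the well-known Gray et al. stochastic SIS equation as a generic stand-in (the paper's vaccination model is more elaborate); all parameters are illustrative and chosen so that the noise-corrected reproduction number falls below one even though the deterministic one exceeds it.

```python
import numpy as np

def simulate_sis(I0, N, beta, mu, gamma, sigma, T, dt, rng):
    """Euler-Maruyama path of a stochastic SIS model (Gray et al. form):
    dI = I*(beta*(N - I) - mu - gamma) dt + sigma*I*(N - I) dW."""
    n = int(T / dt)
    I = np.empty(n + 1)
    I[0] = I0
    for k in range(n):
        drift = I[k] * (beta * (N - I[k]) - mu - gamma)
        diff = sigma * I[k] * (N - I[k])
        step = drift * dt + diff * np.sqrt(dt) * rng.normal()
        I[k + 1] = np.clip(I[k] + step, 0.0, N)   # keep 0 <= I <= N
    return I

def stochastic_R0(beta, mu, gamma, sigma, N):
    """Noise-corrected reproduction number for this model: larger sigma
    lowers it, matching the finding that strong environmental noise can
    suppress an outbreak."""
    return beta * N / (mu + gamma) - sigma**2 * N**2 / (2.0 * (mu + gamma))

rng = np.random.default_rng(9)
R0s = stochastic_R0(0.5, 20.0, 25.0, 0.05, 100)   # below 1: extinction regime
I_path = simulate_sis(10.0, 100, 0.5, 20.0, 25.0, 0.05, 5.0, 1e-3, rng)
print(R0s, I_path[-1])
```

With these numbers the deterministic reproduction number is about 1.11, yet the simulated infection count collapses toward zero, illustrating extinction driven purely by the noise term.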
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2018-01-01
A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait with the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.
Stochastic volatility of the futures prices of emission allowances: A Bayesian approach
NASA Astrophysics Data System (ADS)
Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin
2017-01-01
Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal three important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
Optimal Control Inventory Stochastic With Production Deteriorating
NASA Astrophysics Data System (ADS)
Affandi, Pardi
2018-01-01
In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build a stochastic mathematical model of the inventory; in this model we also assume that the items are kept in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. This research discusses how to formulate the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for deriving the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. We thus obtain the optimal production rate in a production-inventory system in which items are subject to deterioration.
Stochastic Game Analysis and Latency Awareness for Self-Adaptation
2014-01-01
In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) for reasoning about proactive, latency-aware self-adaptation. Additional key words and phrases: proactive adaptation, stochastic multiplayer games, latency.
Guo, Shicheng; Diep, Dinh; Plongthongkum, Nongluk; Fung, Ho-Lim; Zhang, Kang; Zhang, Kun
2017-04-01
Adjacent CpG sites in mammalian genomes can be co-methylated owing to the processivity of methyltransferases or demethylases, yet discordant methylation patterns have also been observed, which are related to stochastic or uncoordinated molecular processes. We focused on a systematic search and investigation of regions in the full human genome that show highly coordinated methylation. We defined 147,888 blocks of tightly coupled CpG sites, called methylation haplotype blocks, after analysis of 61 whole-genome bisulfite sequencing data sets and validation with 101 reduced-representation bisulfite sequencing data sets and 637 methylation array data sets. Using a metric called methylation haplotype load, we performed tissue-specific methylation analysis at the block level. Subsets of informative blocks were further identified for deconvolution of heterogeneous samples. Finally, using methylation haplotypes we demonstrated quantitative estimation of tumor load and tissue-of-origin mapping in the circulating cell-free DNA of 59 patients with lung or colorectal cancer.
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
Stochastic analysis of a novel nonautonomous periodic SIRI epidemic system with random disturbances
NASA Astrophysics Data System (ADS)
Zhang, Weiwei; Meng, Xinzhu
2018-02-01
In this paper, a new stochastic nonautonomous SIRI epidemic model is formulated. Given that the incidence rates of diseases may change with the environment, we propose a novel type of transmission function. The main aim of this paper is to obtain the thresholds of the stochastic SIRI epidemic model. To this end, we investigate the dynamics of the stochastic system and establish the conditions for extinction and persistence in mean of the disease by constructing some suitable Lyapunov functions and using stochastic analysis technique. Furthermore, we show that the stochastic system has at least one nontrivial positive periodic solution. Finally, numerical simulations are introduced to illustrate our results.
Stochastic dynamic modeling of regular and slow earthquakes
NASA Astrophysics Data System (ADS)
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by a (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results on the condition. However, even though we discretize the model space with small grids, heterogeneity at smaller scales than the grid size is not considered in the models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve mode III problem, which corresponds to the rupture propagation along the strike direction. We use BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which is usually written in a deterministic manner. As the simplest type of perturbations, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce complicated rupture process of regular earthquakes including unilateral and bilateral ruptures. By perturbing external force, we reproduce slow rupture propagation at a scale of km/day. 
The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.
NASA Astrophysics Data System (ADS)
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
Modeling the lake eutrophication stochastic ecosystem and the research of its stability.
Wang, Bo; Qi, Qianqian
2018-06-01
In reality, the lake system is disturbed by stochastic factors, both external and internal. By adding additive noise and multiplicative noise to the right-hand sides of the model equation, an additive stochastic model and a multiplicative stochastic model are established, respectively, in order to reduce model errors induced by the absence of some physical processes. For both kinds of stochastic ecosystems, the authors studied the bifurcation characteristics with the FPK equation and the Lyapunov exponent method based on the Stratonovich-Khasminskii stochastic averaging principle. Results show that, for the additive stochastic model, when the control parameter (i.e., the nutrient loading rate) falls into the interval [0.388644, 0.66003825], the ecosystem is bistable and the additive noise intensity cannot make the bifurcation point drift. In the region of bistability, the external stochastic disturbance, which is one of the main triggers of lake eutrophication, may make the ecosystem unstable and induce a transition. When the control parameter falls into the intervals (0, 0.388644) and (0.66003825, 1.0), there exists only one stable equilibrium state and the additive noise intensity cannot change it. The multiplicative stochastic model exhibits more complex bifurcation behaviour, and the ecosystem can be broken by the multiplicative noise. The multiplicative noise also reduces the extent of the bistable region; ultimately, the bistable region vanishes for sufficiently large noise. Moreover, both the nutrient loading rate and the multiplicative noise can make the ecosystem undergo a regime shift. For the two kinds of stochastic ecosystems, the authors also discussed the evolution of the ecological variable in detail by using a four-stage Runge-Kutta method of strong order γ=1.5. The numerical method was found to be capable of effectively explaining the regime shift theory and agreed with realistic analysis. These conclusions also confirm the two paths for the system to move from one stable state to another proposed by Beisner et al. [3], which may help in understanding the occurrence mechanism of lake eutrophication from the viewpoint of stochastic modelling and mathematical analysis.
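The noise-induced transitions described here can be reproduced with a few lines of simulation. The sketch below (Python/NumPy) integrates a common textbook form of the stochastic lake-eutrophication equation with additive noise; the paper's exact equations, loading rate and noise intensity may differ, and the values used are illustrative. For this loading the model is bistable, with a clear state near x ≈ 0.5, a turbid state near x ≈ 1.46 and an unstable barrier at x = 1.

```python
import numpy as np

def lake_path(x0, a, b, sigma, T, dt, rng):
    """Euler-Maruyama path of a common form of the stochastic
    lake-eutrophication model:
    dx = (a - b*x + x**8/(1 + x**8)) dt + sigma dW,
    where a is the nutrient loading rate and sigma the additive
    noise intensity (state clipped at zero)."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = a - b * x[k] + x[k]**8 / (1.0 + x[k]**8)
        step = drift * dt + sigma * np.sqrt(dt) * rng.normal()
        x[k + 1] = max(x[k] + step, 0.0)
    return x

rng = np.random.default_rng(2)
quiet = lake_path(0.3, 0.5, 1.0, 0.00, 200.0, 0.01, rng)  # settles in clear state
noisy = lake_path(0.3, 0.5, 1.0, 0.25, 200.0, 0.01, rng)  # noise can force a shift
print(quiet[-1], noisy.max())
```

Comparing the two runs shows the mechanism emphasised in the abstract: the deterministic path stays in the clear basin, while sufficiently strong stochastic forcing can push the state over the barrier into the turbid regime.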
Importance of vesicle release stochasticity in neuro-spike communication.
Ramezani, Hamideh; Akan, Ozgur B
2017-07-01
The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. To this end, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in the vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other variability sources. To capture the stochasticity of the calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined based on the Normal and Logistic distributions.
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process, and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. The real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and uncovering interesting patterns present in time series.
Stochastic Modeling of Laminar-Turbulent Transition
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Choudhari, Meelan
2002-01-01
Stochastic versions of stability equations are developed in order to construct integrated models of transition and turbulence and to understand the effects of uncertain initial conditions on disturbance growth. Stochastic forms of the resonant triad equations, of a high-Reynolds-number asymptotic theory, and of the parabolized stability equations are developed.
Stochastic modeling of consumer preferences for health care institutions.
Malhotra, N K
1983-01-01
This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
…and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum… In particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output… An essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for…
The threshold of a stochastic avian-human influenza epidemic model with psychological effect
NASA Astrophysics Data System (ADS)
Zhang, Fengrong; Zhang, Xinhong
2018-02-01
In this paper, a stochastic avian-human influenza epidemic model with a psychological effect in the human population and a saturation effect within the avian population is investigated. This model describes the transmission of avian influenza among avian and human populations in random environments. For the stochastic avian-only system, persistence in the mean and extinction of the infected avian population are studied. For the avian-human influenza epidemic system, sufficient conditions for the existence of an ergodic stationary distribution are obtained. Furthermore, a threshold of this stochastic model which determines the outcome of the disease is obtained. Finally, numerical simulations are given to support the theoretical results.
Coevolution Maintains Diversity in the Stochastic "Kill the Winner" Model
NASA Astrophysics Data System (ADS)
Xue, Chi; Goldenfeld, Nigel
2017-12-01
The "kill the winner" hypothesis is an attempt to address the problem of diversity in biology. It argues that host-specific predators control the population of each prey, preventing a winner from emerging and thus maintaining the coexistence of all species in the system. We develop a stochastic model for the kill the winner paradigm and show that the stable coexistence state of the deterministic kill the winner model is destroyed by demographic stochasticity, through a cascade of extinction events. We formulate an individual-level stochastic model in which predator-prey coevolution promotes the high diversity of the ecosystem by generating a persistent population flux of species.
Stochastic mixed-mode oscillations in a three-species predator-prey model
NASA Astrophysics Data System (ADS)
Sadhu, Susmita; Kuehn, Christian
2018-03-01
The effect of demographic stochasticity, in the form of Gaussian white noise, in a predator-prey model with one fast and two slow variables is studied. We derive the stochastic differential equations (SDEs) from a discrete model. For suitable parameter values, the deterministic drift part of the model admits a folded node singularity and exhibits a singular Hopf bifurcation. We focus on the parameter regime near the Hopf bifurcation, where small amplitude oscillations exist as stable dynamics in the absence of noise. In this regime, the stochastic model admits noise-driven mixed-mode oscillations (MMOs), which capture the intermediate dynamics between two cycles of population outbreaks. We perform numerical simulations to calculate the distribution of the random number of small oscillations between successive spikes for varying noise intensities and distance to the Hopf bifurcation. We also study the effect of noise on a suitable Poincaré map. Finally, we prove that the stochastic model can be transformed into a normal form near the folded node, which can be linked to recent results on the interplay between deterministic and stochastic small amplitude oscillations. The normal form can also be used to study the parameter influence on the noise level near folded singularities.
Tsunamis: stochastic models of occurrence and generation mechanisms
Geist, Eric L.; Oglesby, David D.
2014-01-01
The devastating consequences of the 2004 Indian Ocean and 2011 Japan tsunamis have led to increased research into many different aspects of the tsunami phenomenon. In this entry, we review research related to the observed complexity and uncertainty associated with tsunami generation, propagation, and occurrence described and analyzed using a variety of stochastic methods. In each case, seismogenic tsunamis are primarily considered. Stochastic models are developed from the physical theories that govern tsunami evolution combined with empirical models fitted to seismic and tsunami observations, as well as tsunami catalogs. These stochastic methods are key to providing probabilistic forecasts and hazard assessments for tsunamis. The stochastic methods described here are similar to those described for earthquakes (Vere-Jones 2013) and volcanoes (Bebbington 2013) in this encyclopedia.
Complex architecture of primes and natural numbers.
García-Pérez, Guillermo; Serrano, M Ángeles; Boguñá, Marián
2014-08-01
Natural numbers can be divided into two nonoverlapping infinite sets, primes and composites, with composites factorizing into primes. Despite their apparent simplicity, the elucidation of the architecture of natural numbers with primes as building blocks remains elusive. Here, we propose a new approach to decoding the architecture of natural numbers based on complex networks and stochastic process theory. We introduce a parameter-free non-Markovian dynamical model that naturally generates random primes and their relation with composite numbers with remarkable accuracy. Our model satisfies the prime number theorem as an emergent property, along with a refined version of Cramér's conjecture about the statistics of gaps between consecutive primes that seems closer to reality than Cramér's original version. Regarding composites, the model helps us derive the prime-factor counting function, giving the probability of distinct prime factors for any integer. Probabilistic models like ours can help provide deeper insights into primes and the complex architecture of natural numbers.
Comparative analysis on the selection of number of clusters in community detection
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro; Kabashima, Yoshiyuki
2018-02-01
We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison would require testing all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, the Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendencies of the assessment criteria and algorithms to overfit or underfit become apparent. In addition, we propose the alluvial diagram as a suitable tool to visualize statistical inference results, which can be useful for determining the number of clusters.
NASA Astrophysics Data System (ADS)
Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.
2016-12-01
It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. 
Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.
Surface plasmon enhanced cell microscopy with blocked random spatial activation
NASA Astrophysics Data System (ADS)
Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun
2016-03-01
We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. Alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.
The ISI distribution of the stochastic Hodgkin-Huxley neuron.
Rowat, Peter F; Greenwood, Priscilla E
2014-01-01
The simulation of ion-channel noise has an important role in computational neuroscience. In recent years several approximate methods of carrying out this simulation have been published, based on stochastic differential equations, and all giving slightly different results. The obvious, and essential, question is: which method is the most accurate and which is most computationally efficient? Here we make a contribution to the answer. We compare interspike interval histograms from simulated data using four different approximate stochastic differential equation (SDE) models of the stochastic Hodgkin-Huxley neuron, as well as the exact Markov chain model simulated by the Gillespie algorithm. One of the recent SDE models is the same as the Kurtz approximation first published in 1978. All the models considered give similar ISI histograms over a wide range of deterministic and stochastic input. Three features of these histograms are an initial peak, followed by one or more bumps, and then an exponential tail. We explore how these features depend on deterministic input and on level of channel noise, and explain the results using the stochastic dynamics of the model. We conclude with a rough ranking of the four SDE models with respect to the similarity of their ISI histograms to the histogram of the exact Markov chain model.
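The SDE-simulation workflow behind such ISI histograms can be illustrated with a deliberately simpler stand-in for the Hodgkin-Huxley equations: a noisy leaky integrate-and-fire neuron integrated by Euler-Maruyama. This is a sketch of the general method only, not the paper's models; all parameter values (`mu`, `sigma`, thresholds) are illustrative assumptions.

```python
import math
import random

def lif_isis(n_spikes, mu, sigma, tau=1.0, v_th=1.0, v_reset=0.0,
             dt=1e-3, seed=0):
    """Euler-Maruyama simulation of a noisy leaky integrate-and-fire neuron,
    dV = (mu - V/tau) dt + sigma dW, collecting interspike intervals (ISIs).
    Illustrative stand-in for channel-noise SDE models: same workflow,
    much simpler dynamics than stochastic Hodgkin-Huxley."""
    rng = random.Random(seed)
    sq = sigma * math.sqrt(dt)          # per-step noise standard deviation
    v, t_last, t = v_reset, 0.0, 0.0
    isis = []
    while len(isis) < n_spikes:
        v += (mu - v / tau) * dt + sq * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_th:                   # threshold crossing = spike
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return isis

isis = lif_isis(n_spikes=500, mu=1.5, sigma=0.3)
```

A histogram of `isis` would show the same qualitative features the abstract describes for richer models: a dominant peak near the deterministic crossing time and an exponential-like tail whose weight grows with the noise level.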
A stochastic visco-hyperelastic model of human placenta tissue for finite element crash simulations.
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Rupp, Jonathan D; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W
2011-03-01
Placental abruption is the most common cause of fetal deaths in motor-vehicle crashes, but studies on the mechanical properties of human placenta are rare. This study presents a new method of developing a stochastic visco-hyperelastic material model of human placenta tissue using a combination of uniaxial tensile testing, specimen-specific finite element (FE) modeling, and stochastic optimization techniques. In our previous study, uniaxial tensile tests of 21 placenta specimens have been performed using a strain rate of 12/s. In this study, additional uniaxial tensile tests were performed using strain rates of 1/s and 0.1/s on 25 placenta specimens. Response corridors for the three loading rates were developed based on the normalized data achieved by test reconstructions of each specimen using specimen-specific FE models. Material parameters of a visco-hyperelastic model and their associated standard deviations were tuned to match both the means and standard deviations of all three response corridors using a stochastic optimization method. The results show a very good agreement between the tested and simulated response corridors, indicating that stochastic analysis can improve estimation of variability in material model parameters. The proposed method can be applied to develop stochastic material models of other biological soft tissues.
Weak Galilean invariance as a selection principle for coarse-grained diffusive models.
Cairoli, Andrea; Klages, Rainer; Baule, Adrian
2018-05-29
How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.
Dynamics of a stochastic HIV-1 infection model with logistic growth
NASA Astrophysics Data System (ADS)
Jiang, Daqing; Liu, Qun; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed; Xia, Peiyan
2017-03-01
This paper is concerned with a stochastic HIV-1 infection model with logistic growth. Firstly, by constructing suitable stochastic Lyapunov functions, we establish sufficient conditions for the existence of ergodic stationary distribution of the solution to the HIV-1 infection model. Then we obtain sufficient conditions for extinction of the infection. The stationary distribution shows that the infection can become persistent in vivo.
2013-11-01
Stochastic Radiative Transfer Model for Contaminated Rough Surfaces: A Framework for … (report period covered: Jan 2013 - Sep 2013)
Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.
Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong
2014-12-01
In this paper, the universal integral sliding-mode controller problem for general stochastic nonlinear systems modeled by Itô-type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding-mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely from the initial time, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that results on universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain them, are provided. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches.
NASA Astrophysics Data System (ADS)
Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.
2016-12-01
One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for very short future periods (e.g., 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow using a SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and is therefore less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The sensitivity of future drought and flood risk assessments to these limitations of both stochastic models will be discussed.
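The occurrence/amount structure shared by such daily rainfall generators can be sketched minimally: a first-order two-state Markov chain for wet/dry occurrence plus Gamma-distributed depths on wet days. The transition probabilities and Gamma parameters below are illustrative assumptions, not values calibrated to any RCM dataset.

```python
import random

def simulate_daily_rainfall(n_days, p_wd, p_ww, shape, scale, seed=0):
    """Two-part daily rainfall generator: a first-order two-state Markov
    chain for wet/dry occurrence, and Gamma-distributed depths (mm) on
    wet days. p_wd = P(wet | yesterday dry); p_ww = P(wet | yesterday wet)."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(n_days):
        p_wet = p_ww if wet else p_wd
        wet = rng.random() < p_wet
        # gammavariate(alpha, beta) has mean alpha * beta
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

# 30 years of illustrative daily rainfall
rain = simulate_daily_rainfall(n_days=365 * 30, p_wd=0.2, p_ww=0.6,
                               shape=0.8, scale=8.0)
```

With these parameters the stationary wet-day fraction is p_wd / (1 - p_ww + p_wd) = 1/3 and the mean wet-day depth is shape × scale = 6.4 mm; a hierarchical model in the abstract's sense would additionally let these parameters vary stochastically between periods.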
Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble
NASA Astrophysics Data System (ADS)
Jankov, I.
2017-12-01
It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized and ensemble members produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) Planetary Boundary Layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity and tested perturbing soil moisture at initial time. First iterative testing was conducted to assess the initial performance of several configuration settings (e.g. variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. 
SKEB and SPPT were included in additional retrospective tests to assess the impact of using all three stochastic approaches to address model uncertainty. Results from the stochastic perturbation testing were compared to a baseline multi-physics control ensemble. For probabilistic forecast performance the Model Evaluation Tools (MET) verification package was used.
Partial ASL extensions for stochastic programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, David
2010-03-31
Partially completed extensions for stochastic programming to the AMPL/solver interface library (ASL), intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.
Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V
2013-04-01
Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
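The basic two-stage model referred to above (transcription, mRNA decay, translation, protein decay) can be simulated exactly with the Gillespie stochastic simulation algorithm. A minimal sketch with illustrative rate constants (the paper derives analytical distributions; this only generates sample paths for comparison):

```python
import random

def gillespie_two_stage(km, gm, kp, gp, t_end, seed=0):
    """Exact SSA for the two-stage gene expression model:
    DNA -> DNA + mRNA (rate km), mRNA -> 0 (gm per mRNA),
    mRNA -> mRNA + P (kp per mRNA), P -> 0 (gp per protein).
    Returns time-averaged mRNA and protein counts over [0, t_end]."""
    rng = random.Random(seed)
    t, m, p = 0.0, 0, 0
    tm_int = tp_int = 0.0                     # time integrals of m and p
    while t < t_end:
        a = [km, gm * m, kp * m, gp * p]      # reaction propensities
        a0 = sum(a)
        dt = rng.expovariate(a0)              # exponential waiting time
        if t + dt > t_end:
            tm_int += m * (t_end - t); tp_int += p * (t_end - t)
            break
        tm_int += m * dt; tp_int += p * dt
        t += dt
        r = rng.uniform(0.0, a0)              # pick a reaction channel
        if r < a[0]:                  m += 1
        elif r < a[0] + a[1]:         m -= 1
        elif r < a[0] + a[1] + a[2]:  p += 1
        else:                         p -= 1
    return tm_int / t_end, tp_int / t_end

mean_m, mean_p = gillespie_two_stage(km=10.0, gm=1.0, kp=5.0, gp=0.1,
                                     t_end=1000.0)
```

For a long run, the time-averaged counts should approach the steady-state means km/gm (here 10) for mRNA and km·kp/(gm·gp) (here 500) for protein, which is a useful sanity check against the exact distributions.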
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2017-08-01
This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as a dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Combining nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.
Variational formulation for Black-Scholes equations in stochastic volatility models
NASA Astrophysics Data System (ADS)
Gyulov, Tihomir B.; Valkov, Radoslav L.
2012-11-01
In this note we prove existence and uniqueness of weak solutions to a boundary value problem arising from stochastic volatility models in financial mathematics. Our setting is variational, in weighted Sobolev spaces. Nevertheless, as will become apparent, our variational formulation agrees well with the stochastic part of the problem.
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA and the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
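Allan variance reduces to a simple computation on cluster averages of the sensor series. A minimal overlapping-estimator sketch (pure Python, with white noise standing in for real IMU output; parameters are illustrative):

```python
import random

def allan_variance(y, tau0, m_list):
    """Overlapping Allan variance of a rate series y sampled at interval
    tau0, for the averaging factors in m_list. Returns {tau: AVAR}."""
    n = len(y)
    csum = [0.0]
    for v in y:                       # prefix sums for fast window means
        csum.append(csum[-1] + v)
    out = {}
    for m in m_list:
        # cluster averages over windows of length m
        means = [(csum[i + m] - csum[i]) / m for i in range(n - m + 1)]
        # squared differences of cluster averages separated by m samples
        diffs = [(means[i + m] - means[i]) ** 2
                 for i in range(len(means) - m)]
        out[m * tau0] = sum(diffs) / (2 * len(diffs))
    return out

# White noise: AVAR(tau) should fall off as sigma^2 * tau0 / tau
rng = random.Random(42)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]
av = allan_variance(noise, tau0=1.0, m_list=[1, 10, 100])
```

The -1 slope of AVAR versus averaging time on a log-log plot is the signature of white (angle/velocity random walk) noise; other slopes identify the other stochastic model terms, and refitting at each temperature point reveals the temperature dependence the abstract reports.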
A developmental basis for stochasticity in floral organ numbers
Kitazawa, Miho S.; Fujimoto, Koichi
2014-01-01
Stochasticity inevitably appears at all levels, from molecular traits to multicellular morphological traits. Intrinsic stochasticity in biochemical reactions underlies the typical intercellular distributions of chemical concentrations, e.g., morphogen gradients, which can give rise to stochastic morphogenesis. While the universal statistics and mechanisms underlying stochasticity at the biochemical level have been widely analyzed, those at the morphological level have not. Such morphological stochasticity is found in floral organ numbers. Although the floral organ number is a hallmark of a floral species, it can vary stochastically even within an individual plant. The probability distribution of the floral organ number within a population is usually asymmetric, i.e., it is more likely to increase rather than decrease from the modal value, or vice versa. We combined field observations, statistical analysis, and mathematical modeling to study the developmental basis of the variation in floral organ numbers among 50 species, mainly from Ranunculaceae and several other families of core eudicots. We compared six hypothetical mechanisms and found that a modified error function reproduced much of the asymmetric variation found in eudicot floral organ numbers. The error function is derived from mathematical modeling of floral organ positioning, and its parameters represent measurable distances in floral bud morphologies. The model predicts two developmental sources of the organ-number distributions: stochastic shifts in the expression boundaries of homeotic genes and a semi-concentric (whorled-type) organ arrangement. Other models reproduced, in species- or organ-specific ways, different types of distributions that reflect different developmental processes. The organ-number variation could thus be an indicator of stochasticity in organ fate determination and organ positioning. PMID:25404932
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. A soft segmentation can be reduced to a hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Studying Resist Stochastics with the Multivariate Poisson Propagation Model
Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; ...
2014-01-01
Progress in the ultimate performance of extreme ultraviolet resists has arguably decelerated in recent years, suggesting an approach to stochastic limits in both photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resists, both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.
Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.
NASA Astrophysics Data System (ADS)
Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.
2014-12-01
High-resolution ocean models, which are eddy permitting and resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model, FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + ΔT(t) + ξ(t), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on mean climate state and variability in an AOGCM.
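The additive perturbation scheme quoted above can be sketched in a few lines. This is our illustration only (a toy field, a toy relaxation tendency, and invented parameter values), not the FAMOUS implementation:

```python
import numpy as np

def step_with_noise(T, dT, rng, sigma=0.01, red=None, phi=0.9):
    """One update of T(t) = T(t-1) + dT(t) + xi(t).

    If `red` (the previous noise field) is given, xi(t) follows an AR(1)
    "red noise" process xi(t) = phi*xi(t-1) + sqrt(1-phi^2)*sigma*eps;
    otherwise xi(t) is white Gaussian noise with standard deviation sigma.
    """
    eps = rng.standard_normal(T.shape)
    if red is None:
        xi = sigma * eps                                     # white noise
    else:
        xi = phi * red + np.sqrt(1.0 - phi**2) * sigma * eps  # red noise
    return T + dT + xi, xi

rng = np.random.default_rng(0)
T = np.full((4, 4), 10.0)          # toy temperature field
xi = np.zeros_like(T)
for _ in range(100):
    dT = 0.001 * (T.mean() - T)    # toy deterministic tendency
    T, xi = step_with_noise(T, dT, rng, red=xi)
```

The red-noise branch keeps the marginal noise variance fixed at sigma² regardless of the decorrelation parameter phi, so white and red experiments differ only in temporal correlation, mirroring the paper's comparison.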
Phase-Space Transport of Stochastic Chaos in Population Dynamics of Virus Spread
NASA Astrophysics Data System (ADS)
Billings, Lora; Bollt, Erik M.; Schwartz, Ira B.
2002-06-01
A general way to classify stochastic chaos is presented and applied to population dynamics models. A stochastic dynamical theory is used to develop an algorithmic tool to measure the transport across basin boundaries and predict the most probable regions of transport created by noise. The results of this tool are illustrated on a model of virus spread in a large population, where transport regions reveal how noise completes the necessary manifold intersections for the creation of emerging stochastic chaos.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott Hara
2000-02-18
The project involves using advanced reservoir characterization and thermal production technologies to improve thermal recovery techniques and lower operating and capital costs in a slope and basin clastic (SBC) reservoir in the Wilmington field, Los Angeles Co., CA. Through March 1999, project work has been completed related to data preparation, basic reservoir engineering, developing a deterministic three dimensional (3-D) geologic model, a 3-D deterministic reservoir simulation model, and a rock-log model, well drilling and completions, and surface facilities. Work is continuing on the stochastic geologic model, developing a 3-D stochastic thermal reservoir simulation model of the Fault Block IIA Tar (Tar II-A) Zone, and operational work and research studies to prevent thermal-related formation compaction. Thermal-related formation compaction is a concern of the project team due to observed surface subsidence in the local area above the steamflood project. Last quarter on January 12, the steamflood project lost its inexpensive steam source from the Harbor Cogeneration Plant as a result of the recent deregulation of electrical power rates in California. An operational plan was developed and implemented to mitigate the effects of the two situations. Seven water injection wells were placed in service in November and December 1998 on the flanks of the Phase 1 steamflood area to pressure up the reservoir to fill up the existing steam chest. Intensive reservoir engineering and geomechanics studies are continuing to determine the best ways to shut down the steamflood operations in Fault Block II while minimizing any future surface subsidence. The new 3-D deterministic thermal reservoir simulator model is being used to provide sensitivity cases to optimize production, steam injection, future flank cold water injection and reservoir temperature and pressure.
According to the model, reservoir fill up of the steam chest at the current injection rate of 28,000 BPD and gross and net oil production rates of 7,700 BPD and 750 BOPD (injection to production ratio of 4) will occur in October 1999. At that time, the reservoir should act more like a waterflood and production and cold water injection can be operated at lower net injection rates to be determined. Modeling runs developed this quarter found that varying individual well injection rates to meet added production and local pressure problems by sub-zone could reduce steam chest fill-up by up to one month.
Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Aziz, H M Abdul; Young, Stan
Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues either at controlled intersections or highway segments reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are controlled by traffic signal control settings, and effective traffic light control would both realize smooth traffic flow and minimize fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed a recursive input-output traffic queueing model. The queue formation can be modeled as a stochastic process where the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we have proposed a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, to make the traffic queueing length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor of concern.
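A recursive input-output queueing update of the kind described can be illustrated as follows. This is a generic sketch under our own assumptions (function name, rates, and the max(0, ·) conservation update are ours, not the report's model):

```python
import numpy as np

def simulate_queue(green_fraction, lam=8.0, sat_flow=12.0, steps=500, seed=1):
    """Recursive input-output queue update for one signalized approach:

        q(t+1) = max(0, q(t) + arrivals(t) - departures(t))

    Arrivals per step are Poisson(lam), so the queue is a stochastic
    process; departures are limited both by the vehicles present and by
    the saturation flow times the fraction of the cycle that is green.
    """
    rng = np.random.default_rng(seed)
    q, history = 0.0, []
    for _ in range(steps):
        arrivals = rng.poisson(lam)
        departures = min(q + arrivals, sat_flow * green_fraction)
        q = max(0.0, q + arrivals - departures)
        history.append(q)
    return np.array(history)

long_green = simulate_queue(0.9)   # capacity above demand: queue stays short
short_green = simulate_queue(0.5)  # capacity below demand: queue builds up
```

Comparing the empirical histograms of `long_green` and `short_green` shows how the queue-length PDF shifts with the signal setting, which is the quantity the stochastic distribution control framework seeks to shape.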
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: P. D. Williams, N. J. Howe, J. M. Gregory, R. S. Smith, and M. M. Joshi (2016), Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Journal of Climate, under revision.
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
Stochastic-field cavitation model
NASA Astrophysics Data System (ADS)
Dumond, J.; Magagnato, F.; Class, A.
2013-07-01
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
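The Eulerian stochastic-field idea can be illustrated in miniature: N fields at a single grid cell share a deterministic drift but receive independent Wiener increments, and their ensemble histogram approximates the local pdf without any Lagrangian particles. This is a generic sketch (an Ornstein-Uhlenbeck-type relaxation with invented constants), not the authors' cavitation solver:

```python
import numpy as np

rng = np.random.default_rng(42)
n_fields, steps, dt = 64, 200, 0.01
target, relax, noise = 0.3, 1.0, 0.5   # invented constants

fields = np.zeros(n_fields)            # N stochastic fields at one grid cell
for _ in range(steps):
    dW = rng.standard_normal(n_fields) * np.sqrt(dt)   # independent Wiener increments
    fields += relax * (target - fields) * dt + noise * dW

# The ensemble of field values approximates the local pdf of the scalar.
pdf, edges = np.histogram(fields, bins=20, density=True)
```

Each field is cheap to advance with standard Eulerian machinery, which is why the method avoids mixing Eulerian and Lagrangian techniques.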
A cavitation model based on Eulerian stochastic fields
NASA Astrophysics Data System (ADS)
Magagnato, F.; Dumond, J.
2013-12-01
Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Modeling of stochastic motion of bacteria propelled spherical microbeads
NASA Astrophysics Data System (ADS)
Arabagi, Veaceslav; Behkam, Bahareh; Cheung, Eugene; Sitti, Metin
2011-06-01
This work proposes a stochastic dynamic model of bacteria propelled spherical microbeads as potential swimming microrobotic bodies. Small numbers of S. marcescens bacteria are attached with their bodies to surfaces of spherical microbeads. Average-behavior stochastic models that are normally adopted when studying such biological systems are generally not effective for cases in which a small number of agents are interacting in a complex manner, hence a stochastic model is proposed to simulate the behavior of 8-41 bacteria assembled on a curved surface. Flexibility of the flagellar hook is studied by comparing simulated and experimental results for scenarios of increasing bead size and number of attached bacteria per bead. Although more experimental data are required to yield a definitive flagellar hook stiffness value, the results favor a stiffer flagellum. The stochastic model is intended to be used as a design and simulation tool for future potential targeted drug delivery and disease diagnosis applications of bacteria propelled microrobots.
Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz
2015-01-01
Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although they violate the hypotheses of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
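The ODE-to-SDE extension described above can be sketched with an Euler-Maruyama step for a one-compartment elimination model. This is our toy example with arbitrary parameter values, not code from the R package:

```python
import numpy as np

def euler_maruyama_pk(ke=0.3, C0=10.0, sigma_w=0.2, dt=0.05, t_end=10.0, seed=3):
    """One-compartment elimination with system (Wiener) noise:
        dC = -ke * C dt + sigma_w dW,
    the SDE analogue of the classical PK ODE dC/dt = -ke * C."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    C = np.empty(n + 1)
    C[0] = C0
    for k in range(n):
        dW = rng.standard_normal() * np.sqrt(dt)   # Wiener increment
        C[k + 1] = C[k] - ke * C[k] * dt + sigma_w * dW
    return C

C = euler_maruyama_pk()
t = np.arange(201) * 0.05
ode = 10.0 * np.exp(-0.3 * t)      # noise-free ODE solution for comparison
```

Residuals of the SDE path around the ODE solution are autocorrelated in time, which is exactly the structure that a pure observation-noise model cannot represent.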
An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks
NASA Astrophysics Data System (ADS)
El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros
2007-12-01
The integration of Global Positioning System (GPS) with Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless-based filter such as the unscented Kalman filter or the divided difference filter, the filtering process of a complicated highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless-based filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.
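For reference, the classical first-order Gauss-Markov sensor-error model against which the wavelet network is compared can be simulated directly. This is a generic sketch with illustrative parameters, not the paper's DQI-100 values:

```python
import numpy as np

def gauss_markov(tau=100.0, sigma=0.05, dt=1.0, n=5000, seed=7):
    """First-order Gauss-Markov process, a classical stochastic model for
    slowly varying inertial-sensor bias:

        x(k+1) = exp(-dt/tau) * x(k) + w(k),

    with w(k) scaled so the process has steady-state variance sigma^2."""
    beta = np.exp(-dt / tau)
    w_sd = sigma * np.sqrt(1.0 - beta**2)
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    for k in range(n - 1):
        x[k + 1] = beta * x[k] + w_sd * rng.standard_normal()
    return x

bias = gauss_markov()   # strongly autocorrelated, bounded-variance error
```

The correlation time tau and driving-noise variance are exactly the two quantities a linear stochastic model can capture; the paper's point is that a wavelet network can represent nonlinear error behaviour beyond this.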
Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads
Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...
2017-12-28
Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
Stochastic 3D modeling of Ostwald ripening at ultra-high volume fractions of the coarsening phase
NASA Astrophysics Data System (ADS)
Spettl, A.; Wimmer, R.; Werz, T.; Heinze, M.; Odenbach, S.; Krill, C. E., III; Schmidt, V.
2015-09-01
We present a (dynamic) stochastic simulation model for 3D grain morphologies undergoing a grain coarsening phenomenon known as Ostwald ripening. For low volume fractions of the coarsening phase, the classical LSW theory predicts a power-law evolution of the mean particle size and convergence toward self-similarity of the particle size distribution; experiments suggest that this behavior holds also for high volume fractions. In the present work, we have analyzed 3D images that were recorded in situ over time in semisolid Al-Cu alloys manifesting ultra-high volume fractions of the coarsening (solid) phase. Using this information we developed a stochastic simulation model for the 3D morphology of the coarsening grains at arbitrary time steps. Our stochastic model is based on random Laguerre tessellations and is by definition self-similar—i.e. it depends only on the mean particle diameter, which in turn can be estimated at each point in time. For a given mean diameter, the stochastic model requires only three additional scalar parameters, which influence the distribution of particle sizes and their shapes. An evaluation shows that even with this minimal information the stochastic model yields an excellent representation of the statistical properties of the experimental data.
Inflow forecasting model construction with stochastic time series for coordinated dam operation
NASA Astrophysics Data System (ADS)
Kim, T.; Jung, Y.; Kim, H.; Heo, J. H.
2014-12-01
Dam inflow forecasting is one of the most important tasks in dam operation for effective water resources management and control. In general, stochastic time series models can be applied to dam inflow forecasting only when the data are stationary, because most stochastic processes are based on stationarity. However, recent hydrological data no longer satisfy stationarity because of climate change. Therefore, a stochastic time series model that can consider seasonality and trend in the data series, named the SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous variables) model, was constructed in this study. This SARIMAX model can improve on the performance of standard stochastic time series models by accounting for nonstationary components and external variables such as precipitation. For application, models were constructed for four coordinated dams on the Han River in South Korea with monthly time series data. As a result, the models for each dam have similar performance, and it would be possible to use them for coordinated dam operation. Acknowledgement: This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-NH-12-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
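A toy data-generating recursion with the ingredients a SARIMAX-type model assumes (a non-seasonal AR term, a seasonal lag-12 term, and an exogenous precipitation regressor) can be sketched as follows; all names and coefficients are illustrative, not the study's fitted values:

```python
import numpy as np

def seasonal_ar_inflow(n_years=30, phi=0.5, Phi=0.3, beta=2.0, sigma=1.0, seed=11):
    """Toy monthly inflow generator: AR(1) term (phi), seasonal lag-12
    term (Phi), exogenous precipitation regressor (beta), Gaussian noise.
    Parameter values are made up for illustration."""
    rng = np.random.default_rng(seed)
    n = 12 * n_years
    precip = rng.gamma(shape=2.0, scale=1.5, size=n)   # exogenous input
    y = np.zeros(n)
    for t in range(n):
        ar = phi * y[t - 1] if t >= 1 else 0.0
        sar = Phi * y[t - 12] if t >= 12 else 0.0
        y[t] = ar + sar + beta * precip[t] + sigma * rng.standard_normal()
    return y, precip

inflow, precip = seasonal_ar_inflow()
```

Fitting such a series without the seasonal and exogenous terms (i.e. a plain ARMA model) leaves structured residuals, which is the motivation for the SARIMAX form stated in the abstract.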
Juricke, Stephan; Jung, Thomas
2014-01-01
The influence of a stochastic sea ice strength parametrization on the mean climate is investigated in a coupled atmosphere–sea ice–ocean model. The results are compared with an uncoupled simulation with a prescribed atmosphere. It is found that the stochastic sea ice parametrization causes an effective weakening of the sea ice. In the uncoupled model this leads to an Arctic sea ice volume increase of about 10–20% after an accumulation period of approximately 20–30 years. In the coupled model, no such increase is found. Rather, the stochastic perturbations lead to a spatial redistribution of the Arctic sea ice thickness field. A mechanism involving a slightly negative atmospheric feedback is proposed that can explain the different responses in the coupled and uncoupled system. Changes in integrated Antarctic sea ice quantities caused by the stochastic parametrization are generally small, as memory is lost during the melting season because of an almost complete loss of sea ice. However, stochastic sea ice perturbations affect regional sea ice characteristics in the Southern Hemisphere, both in the uncoupled and coupled model. Remote impacts of the stochastic sea ice parametrization on the mean climate of non-polar regions were found to be small. PMID:24842027
The Higgs boson can delay reheating after inflation
NASA Astrophysics Data System (ADS)
Freese, Katherine; Sfakianakis, Evangelos I.; Stengel, Patrick; Visinelli, Luca
2018-05-01
The Standard Model Higgs boson, which has previously been shown to develop an effective vacuum expectation value during inflation, can give rise to large particle masses during inflation and reheating, leading to temporary blocking of the reheating process and a lower reheat temperature after inflation. We study the effects on the multiple stages of reheating: resonant particle production (preheating) as well as perturbative decays from coherent oscillations of the inflaton field. Specifically, we study both the cases of the inflaton coupling to Standard Model fermions through Yukawa interactions as well as to Abelian gauge fields through a Chern-Simons term. We find that, in the case of perturbative inflaton decay to SM fermions, reheating can be delayed due to Higgs blocking and the reheat temperature can decrease by up to an order of magnitude. In the case of gauge-reheating, Higgs-generated masses of the gauge fields can suppress preheating even for large inflaton-gauge couplings. In extreme cases, preheating can be shut down completely and must be substituted by perturbative decay as the dominant reheating channel. Finally, we discuss the distribution of reheat temperatures in different Hubble patches, arising from the stochastic nature of the Higgs VEV during inflation and its implications for the generation of both adiabatic and isocurvature fluctuations.
NASA Astrophysics Data System (ADS)
Panagiotopoulou, Antigoni; Bratsolis, Emmanuel; Charou, Eleni; Perantonis, Stavros
2017-10-01
The detailed three-dimensional modeling of buildings utilizing elevation data, such as those provided by light detection and ranging (LiDAR) airborne scanners, is increasingly demanded today. There are certain application requirements and available datasets to which any research effort has to be adapted. Our dataset includes aerial orthophotos, with a spatial resolution of 20 cm, and a digital surface model generated from LiDAR, with a spatial resolution of 1 m and an elevation resolution of 20 cm, from an area of Athens, Greece. The aerial images are fused with LiDAR, and we classify these data with a multilayer feedforward neural network for building block extraction. The innovation of our approach lies in the preprocessing step, in which the original LiDAR data are super-resolution (SR) reconstructed by means of a stochastic regularized technique before their fusion with the aerial images takes place. The Lorentzian estimator combined with the bilateral total variation regularization performs the SR reconstruction. We evaluate the performance of our approach against that of fusing unprocessed LiDAR data with aerial images. We present the classified images and the statistical measures (confusion matrix, kappa coefficient, and overall accuracy). The results demonstrate that our approach predominates over that of fusing unprocessed LiDAR data with aerial images.
Chemical event chain model of coupled genetic oscillators.
Jörg, David J; Morelli, Luis G; Jülicher, Frank
2018-03-01
We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.
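The chain-of-events picture above lends itself to a very small simulation sketch: each step in a chain of chemical events is an exponential (Poisson) waiting time, so the completion time of a k-step chain is Gamma distributed. A minimal illustration in Python (the chain length and rates are hypothetical, not taken from the paper):

```python
import random

def chain_completion_time(rates, rng):
    """Time to traverse a chain of chemical events, each modeled as an
    exponential waiting step with its own rate (a sequence of Poisson steps)."""
    return sum(rng.expovariate(k) for k in rates)

# A hypothetical 5-step gene-expression chain with equal rates k:
# the completion time is then Gamma(5, 1/k) distributed, with mean 5/k.
rng = random.Random(0)
k = 2.0
samples = [chain_completion_time([k] * 5, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)   # should be close to 5 / k = 2.5
```

Distributed delays in the effective phase-oscillator description arise naturally from such chains, since the total delay is the sum of the random step times rather than a fixed lag.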
Doubly stochastic Poisson process models for precipitation at fine time-scales
NASA Astrophysics Data System (ADS)
Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao
2012-09-01
This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
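A doubly stochastic (Cox) process can be simulated by first drawing a random intensity path and then thinning a homogeneous Poisson process against it. A minimal sketch, using a made-up two-state wet/dry intensity for illustration rather than any model fitted in the paper:

```python
import random

def simulate_cox_process(t_max, lam_max, intensity, seed=1):
    """Simulate a doubly stochastic (Cox) process on [0, t_max] by thinning.

    intensity(t) returns the (randomly pre-drawn) rate at time t and must be
    bounded above by lam_max for the thinning step to be valid.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)              # candidate event from rate lam_max
        if t > t_max:
            break
        if rng.random() < intensity(t) / lam_max:  # accept with prob intensity(t)/lam_max
            events.append(t)
    return events

# Toy stochastic intensity: an hourly-switching on/off rate, crudely mimicking
# alternation between dry spells and rainy spells over a 24-hour window.
rng_states = random.Random(2)
states = [rng_states.choice([0.2, 4.0]) for _ in range(24)]
rain_times = simulate_cox_process(24.0, 4.0, lambda t: states[min(int(t), 23)])
```

Here the intensity path is drawn once up front; a fitted model would replace the arbitrary two-state switching with a process calibrated to raingauge data.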
Stochastic modelling of intermittency.
Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram
2010-01-13
Recently, methods have been developed to model low-dimensional chaotic systems in terms of stochastic differential equations. We tested such methods in an electronic circuit experiment. We aimed to obtain reliable drift and diffusion coefficients even without a pronounced time-scale separation of the chaotic dynamics. By comparing the analytical solutions of the corresponding Fokker-Planck equation with experimental data, we show here that crisis-induced intermittency can be described in terms of a stochastic model which is dominated by state-space-dependent diffusion. Furthermore, we demonstrate and discuss some limits of these modelling approaches using numerical simulations. This enables us to state a criterion that can be used to decide whether a stochastic model will capture the essential features of a given time series. This journal is © 2010 The Royal Society
Low Frequency Predictive Skill Despite Structural Instability and Model Error
2014-09-30
Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc)...of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by...resolving subgrid-scale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online
Nontrivial periodic solution of a stochastic non-autonomous SISV epidemic model
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed
2016-11-01
In this paper, we consider a stochastic non-autonomous SISV epidemic model. For the non-autonomous periodic system, we first obtain the threshold of the system, which determines whether the epidemic occurs or not. Then, in the case of persistence, we show that there exists at least one nontrivial positive periodic solution of the stochastic system.
Predicting the Stochastic Properties of the Shallow Subsurface for Improved Geophysical Modeling
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Vynne, J.; Bonner, J.; Lewkowicz, J.
2005-12-01
Strong ground motion data from numerous explosive field experiments and from moderate to large earthquakes show significant variations in amplitude and waveform shape with respect to both azimuth and range. Attempts to model these variations using deterministic models have often been unsuccessful. It has been hypothesized that a stochastic description of the geological medium is a more realistic approach. To estimate the stochastic properties of the shallow subsurface, we use Measurement While Drilling (MWD) data, which are routinely collected by mines in order to facilitate design of blast patterns. The parameters, such as rotation speed of the drill, torque, and penetration rate, are used to compute the rock's Specific Energy (SE), which is then related to a blastability index. We use values of SE measured at two different mines and calibrated to laboratory measurements of rock properties to determine correlation lengths of the subsurface rocks in 2D, needed to obtain 2D and 3D stochastic models. The stochastic models are then combined with the deterministic models and used to compute synthetic seismic waveforms.
Appropriate Domain Size for Groundwater Flow Modeling with a Discrete Fracture Network Model.
Ji, Sung-Hoon; Koh, Yong-Kwon
2017-01-01
When a discrete fracture network (DFN) is constructed from statistical conceptualization, uncertainty in simulating the hydraulic characteristics of a fracture network can arise due to the domain size. In this study, the appropriate domain size, where less significant uncertainty in the stochastic DFN model is expected, was suggested for the Korea Atomic Energy Research Institute Underground Research Tunnel (KURT) site. The stochastic DFN model for the site was established, and the appropriate domain size was determined with the density of the percolating cluster and the percolation probability using the stochastically generated DFNs for various domain sizes. The applicability of the appropriate domain size to our study site was evaluated by comparing the statistical properties of stochastically generated fractures of varying domain sizes and estimating the uncertainty in the equivalent permeability of the generated DFNs. Our results show that the uncertainty of the stochastic DFN model is acceptable when the modeling domain is larger than the determined appropriate domain size, and the appropriate domain size concept is applicable to our study site. © 2016, National Ground Water Association.
A coupled stochastic rainfall-evapotranspiration model for hydrological impact analysis
NASA Astrophysics Data System (ADS)
Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko E. C.
2018-02-01
A hydrological impact analysis concerns the study of the consequences of certain scenarios on one or more variables or fluxes in the hydrological cycle. In such an exercise, discharge is often considered, as floods originating from extremely high discharges often cause damage. Investigating the impact of extreme discharges generally requires long time series of precipitation and evapotranspiration to be used to force a rainfall-runoff model. However, such kinds of data may not be available and one should resort to stochastically generated time series, even though the impact of using such data on the overall discharge, and especially on the extreme discharge events, is not well studied. In this paper, stochastically generated rainfall and corresponding evapotranspiration time series, generated by means of vine copulas, are used to force a simple conceptual hydrological model. The results obtained are comparable to the modelled discharge using observed forcing data. Yet, uncertainties in the modelled discharge increase with an increasing number of stochastically generated time series used. Notwithstanding this finding, it can be concluded that using a coupled stochastic rainfall-evapotranspiration model has great potential for hydrological impact analysis.
Stochastic simulations on a model of circadian rhythm generation.
Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin
2008-01-01
Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are apparently discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, which is known to involve small numbers of molecules. Thus it is appropriate for the system to be modeled by stochastic equations and analyzed by methodologies of stochastic simulation. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method also by Gillespie, to the interlocked feedback model. To this end, we first reformulated the original differential equations into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, in order to compare them with the dynamics obtained from the original deterministic model and to characterize how the dynamics depend on the simulation methodologies.
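Gillespie's direct method, mentioned above, repeatedly draws an exponential waiting time from the total propensity and then selects which reaction fires in proportion to its propensity. A generic sketch (the birth-death example is purely illustrative, not the interlocked feedback model):

```python
import random

def gillespie_ssa(x0, rates, stoich, t_max, seed=0):
    """Minimal Gillespie direct-method simulation.

    x0:     initial copy numbers (list of ints)
    rates:  list of propensity functions, each mapping the state to a rate
    stoich: list of state-change vectors, one per reaction
    """
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    history = [(t, tuple(x))]
    while t < t_max:
        a = [r(x) for r in rates]          # propensities in the current state
        a0 = sum(a)
        if a0 <= 0:                        # no reaction can fire
            break
        t += rng.expovariate(a0)           # waiting time ~ Exp(a0)
        u, pick, acc = rng.random() * a0, 0, a[0]
        while acc < u:                     # pick reaction j with prob a_j / a0
            pick += 1
            acc += a[pick]
        x = [xi + d for xi, d in zip(x, stoich[pick])]
        history.append((t, tuple(x)))
    return history

# Example: birth-death process, 0 -> X at rate 5, X -> 0 at rate 0.5 * X.
traj = gillespie_ssa(
    x0=[0],
    rates=[lambda x: 5.0, lambda x: 0.5 * x[0]],
    stoich=[[1], [-1]],
    t_max=50.0,
)
```

Applying the method to a gene circuit amounts to supplying the propensities and stoichiometries of its elementary reactions, which is exactly the reformulation step the abstract describes.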
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
NASA Astrophysics Data System (ADS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly through contact between infected and non-infected individuals, or indirectly through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to describe the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady state condition. The stochastic model, using a Continuous Time Markov Chain (CTMC) approach, is used to predict future states by means of random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameter sets are shown, and the probability of disease extinction is compared across several initial conditions.
Stochastic modeling of experimental chaotic time series.
Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram
2007-01-26
Methods developed recently to obtain stochastic models of low-dimensional chaotic systems are tested in electronic circuit experiments. We demonstrate that reliable drift and diffusion coefficients can be obtained even when no excessive time-scale separation occurs. Crisis-induced intermittent motion can be described in terms of a stochastic model showing tunneling which is dominated by state-space-dependent diffusion. Analytical solutions of the corresponding Fokker-Planck equation are in excellent agreement with experimental data.
Doubly stochastic Poisson processes in artificial neural learning.
Card, H C
1998-01-01
This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
Stochastic receding horizon control: application to an octopedal robot
NASA Astrophysics Data System (ADS)
Shah, Shridhar K.; Tanner, Herbert G.
2013-06-01
Miniature autonomous systems are being developed under ARL's Micro Autonomous Systems and Technology (MAST) program. These systems can only be fitted with a small-size processor, and their motion behavior is inherently uncertain due to manufacturing and platform-ground interactions. One way to capture this uncertainty is through a stochastic model. This paper deals with stochastic motion control design and implementation for MAST-specific eight-legged miniature crawling robots, which have been kinematically modeled as systems exhibiting the behavior of a Dubins car with stochastic noise. The control design takes the form of stochastic receding horizon control, and is implemented on a Gumstix Overo Fire COM with a 720 MHz processor and 512 MB RAM, weighing 5.5 g. The experimental results show the effectiveness of this control law for miniature autonomous systems perturbed by stochastic noise.
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. 
For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging due to the complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • A least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
Dependence of Perpendicular Viscosity on Magnetic Fluctuations in a Stochastic Topology
NASA Astrophysics Data System (ADS)
Fridström, R.; Chapman, B. E.; Almagri, A. F.; Frassinetti, L.; Brunsell, P. R.; Nishizawa, T.; Sarff, J. S.
2018-06-01
In a magnetically confined plasma with a stochastic magnetic field, the dependence of the perpendicular viscosity on the magnetic fluctuation amplitude is measured for the first time. With a controlled, ~tenfold variation in the fluctuation amplitude, the viscosity increases ~100-fold, exhibiting the same fluctuation-amplitude-squared dependence as the predicted rate of stochastic field line diffusion. The absolute value of the viscosity is well predicted by a model based on momentum transport in a stochastic field, the first in-depth test of this model.
Simulating biological processes: stochastic physics from whole cells to colonies.
Earnest, Tyler M; Cole, John A; Luthey-Schulten, Zaida
2018-05-01
The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a 'minimal cell'. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.
Modeling a SI epidemic with stochastic transmission: hyperbolic incidence rate.
Christen, Alejandra; Maulén-Yañez, M Angélica; González-Olivares, Eduardo; Curé, Michel
2018-03-01
In this paper a stochastic susceptible-infectious (SI) epidemic model is analysed, which is based on the model proposed by Roberts and Saha (Appl Math Lett 12: 37-41, 1999), considering a hyperbolic type nonlinear incidence rate. Assuming the proportion of infected population varies with time, our new model is described by an ordinary differential equation, which is analogous to the equation that describes the double Allee effect. The limit of the solution of this equation (deterministic model) is found when time tends to infinity. Then, the asymptotic behaviour of a stochastic fluctuation due to the environmental variation in the coefficient of disease transmission is studied. Thus a stochastic differential equation (SDE) is obtained and the existence of a unique solution is proved. Moreover, the SDE is analysed through the associated Fokker-Planck equation to obtain the invariant measure when the proportion of the infected population reaches steady state. An explicit expression for invariant measure is found and we study some of its properties. The long time behaviour of deterministic and stochastic models are compared by simulations. According to our knowledge this incidence rate has not been previously used for this type of epidemic models.
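The simulation-based comparison mentioned at the end of the abstract is typically done with the Euler-Maruyama scheme, the standard first-order integrator for SDEs. A sketch with illustrative logistic-style drift and multiplicative noise (the coefficients and incidence form are placeholders, not the paper's hyperbolic incidence rate):

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, t_max, dt, seed=0):
    """Euler-Maruyama integration of dX = drift(X) dt + diffusion(X) dW."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(round(t_max / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        x = x + drift(x) * dt + diffusion(x) * dw
        x = min(max(x, 0.0), 1.0)            # keep the proportion in [0, 1]
        path.append(x)
    return path

# Illustrative SI-type dynamics for the infected proportion i, with
# hypothetical coefficients: noise enters multiplicatively through the
# transmission term, mimicking environmental variation in transmission.
beta, sigma = 0.8, 0.3
path = euler_maruyama(
    drift=lambda i: beta * i * (1.0 - i),
    diffusion=lambda i: sigma * i * (1.0 - i),
    x0=0.05, t_max=20.0, dt=0.01,
)
```

The stationary histogram of many such trajectories can then be compared against the invariant measure obtained analytically from the Fokker-Planck equation.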
Zimmer, Christoph
2016-01-01
Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often shows intrinsic stochastic effects prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted for the last years, only very few articles focus on experimental design for stochastic models. The Fisher information matrix is the central measure for experimental design as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic size models.
Stochastic Parameterization: Toward a New View of Weather and Climate Models
Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...
2017-03-31
The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with lower computational cost compared to traditional Monte Carlo-based methods. PMID:28231313
Huang, Wei; Shi, Jun; Yen, R T
2012-12-01
The objective of our study was to develop a computer program for calculating the transit time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data for blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation; in this model, the connectivity data of the pulmonary blood vessels in the human lung were converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system and to calculate the transit time distributions and blood pressure outputs.
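The core idea of the abstract above, converting vessel connectivity into a probability matrix and sampling red-cell paths through it, can be sketched in a few lines. The matrix, compartments, and per-segment times below are made-up toy values, not the authors' anatomical data.

```python
# Illustrative sketch (not the authors' code): vessel connectivity as a
# row-stochastic matrix; a red cell walks from artery (state 0) to the
# absorbing vein state (state 3), accumulating per-compartment times.
import random

P = [
    [0.0, 0.7, 0.3, 0.0],   # artery -> arteriole or capillary
    [0.0, 0.0, 0.8, 0.2],   # arteriole -> capillary or vein
    [0.0, 0.0, 0.0, 1.0],   # capillary -> vein
    [0.0, 0.0, 0.0, 1.0],   # vein (absorbing)
]
segment_time = [0.1, 0.3, 0.8, 0.0]  # seconds spent in each compartment (toy)

def sample_transit_time(rng: random.Random) -> float:
    state, t = 0, 0.0
    while state != 3:
        t += segment_time[state]
        state = rng.choices(range(4), weights=P[state])[0]
    return t

rng = random.Random(42)
times = [sample_transit_time(rng) for _ in range(1000)]
print(min(times), max(times))
```

Repeating the walk many times yields an empirical transit time frequency distribution, which is the quantity the study computes for the real vascular tree.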
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
Problems of Mathematical Finance by Stochastic Control Methods
NASA Astrophysics Data System (ADS)
Stettner, Łukasz
The purpose of this paper is to present the main ideas of the mathematics of finance using stochastic control methods. There is an interplay between stochastic control and the mathematics of finance. On the one hand, stochastic control is a powerful tool to study financial problems. On the other hand, financial applications have stimulated development in several research subareas of stochastic control in the last two decades. We start with the pricing of financial derivatives and the modeling of asset prices, studying the conditions for the absence of arbitrage. Then we consider the pricing of defaultable contingent claims. Investments in bonds lead us to term structure modeling problems. Special attention is devoted to the historical static portfolio analysis called Markowitz theory. We also briefly sketch dynamic portfolio problems using viscosity solutions to the Hamilton-Jacobi-Bellman equation, the martingale-convex analysis method, or the stochastic maximum principle together with the backward stochastic differential equation. Finally, long time portfolio analysis for both risk neutral and risk sensitive functionals is introduced.
Stochastic models of the Social Security trust funds.
Burdick, Clark; Manchester, Joyce
Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. The results presented in this article demonstrate that several stochastic models deliver broadly consistent results even though they use very different approaches and assumptions. However, they also show that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. For example, neither parameter uncertainty nor variability in ultimate assumption values is recognized explicitly in the analyses. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future.
Amerciamysis bahia Stochastic Matrix Population Model for Laboratory Populations
The population model described here is a stochastic, density-independent matrix model for integrating the effects of toxicants on survival and reproduction of the marine invertebrate, Americamysis bahia. The model was constructed using Microsoft® Excel 2003. The focus of the mode...
Mapping of the stochastic Lotka-Volterra model to models of population genetics and game theory
NASA Astrophysics Data System (ADS)
Constable, George W. A.; McKane, Alan J.
2017-08-01
The relationship between the M-species stochastic Lotka-Volterra competition (SLVC) model and the M-allele Moran model of population genetics is explored via timescale separation arguments. When selection for species is weak and the population size is large but finite, precise conditions are determined for the stochastic dynamics of the SLVC model to be mappable to the neutral Moran model, the Moran model with frequency-independent selection, and the Moran model with frequency-dependent selection (equivalently a game-theoretic formulation of the Moran model). We demonstrate how these mappings can be used to calculate extinction probabilities and the times until a species' extinction in the SLVC model.
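A minimal simulation makes the Moran side of the mapping concrete: in the neutral two-allele Moran model, the fixation probability of an allele equals its initial frequency. The sketch below is illustrative and not the paper's code; population size, initial count, and run count are arbitrary choices.

```python
# Neutral Moran model sketch: at each step one uniformly chosen individual
# reproduces and one uniformly chosen individual dies. With no selection,
# P(fixation of allele A) = initial frequency of A.
import random

def fixation(n_total: int, n_a: int, rng: random.Random) -> bool:
    """Run the neutral Moran process until allele A fixes or is lost."""
    while 0 < n_a < n_total:
        p = n_a / n_total
        born_a = rng.random() < p    # reproducing individual carries A
        dies_a = rng.random() < p    # dying individual carries A
        n_a += int(born_a) - int(dies_a)
    return n_a == n_total

rng = random.Random(0)
runs = 1000
fixed = sum(fixation(20, 5, rng) for _ in range(runs))
print(fixed / runs)  # close to the initial frequency 5/20 = 0.25
```

Extinction probabilities in the SLVC model can then be read off from the mapped Moran model in the appropriate parameter regime.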
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem in which the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to a convex set. We first introduce an equivalent backward delayed system, described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.
Resolution of ranking hierarchies in directed networks.
Letizia, Elisa; Barucca, Paolo; Lillo, Fabrizio
2018-01-01
Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit. PMID:29394278
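The agony score described above is easy to sketch for a fixed ranking. In the common linear version of the penalty (assumed here; the paper considers penalties depending on the violation strength), an edge u -> v with rank(u) >= rank(v) contributes rank(u) - rank(v) + 1.

```python
# Illustrative agony computation for a given ranking of a directed graph.
# Forward edges (lower rank -> strictly higher rank) cost nothing;
# each violating edge pays rank(u) - rank(v) + 1.

def agony(edges, rank):
    """Linear agony of a ranking: sum of penalties over violating edges."""
    return sum(max(0, rank[u] - rank[v] + 1) for u, v in edges)

# A perfect hierarchy: every edge points from a lower rank to a higher one.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
rank = {"a": 0, "b": 1, "c": 2}
print(agony(edges, rank))  # 0

# A back edge c -> a violates the hierarchy with strength 2, costing 3.
print(agony(edges + [("c", "a")], rank))  # 3
```

Minimising this score over all ordered partitions is the hard part; the sketch only evaluates one candidate ranking.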
Utilizing Big Data and Twitter to Discover Emergent Online Communities of Cannabis Users
Baumgartner, Peter; Peiper, Nicholas
2017-01-01
Large shifts in medical, recreational, and illicit cannabis consumption in the United States have implications for personalizing treatment and prevention programs to a wide variety of populations. As such, considerable research has investigated clinical presentations of cannabis users in clinical and population-based samples. Studies leveraging big data, social media, and social network analysis have emerged as a promising mechanism to generate timely insights that can inform treatment and prevention research. This study extends a novel method called stochastic block modeling to derive communities of cannabis consumers as part of a complex social network on Twitter. A set of examples illustrate how this method can ascertain candidate samples of medical, recreational, and illicit cannabis users. Implications for research planning, intervention design, and public health surveillance are discussed. PMID:28615950
Threshold for extinction and survival in stochastic tumor immune system
NASA Astrophysics Data System (ADS)
Li, Dongxi; Cheng, Fangjuan
2017-10-01
This paper mainly investigates the stochastic character of tumor growth and extinction in the presence of the immune response of a host organism. Firstly, the mathematical model describing the interaction and competition between the tumor cells and the immune system is established based on Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence and stochastic persistence of tumor cells are derived by rigorous theoretical proofs. Finally, stochastic simulations are performed to substantiate and illustrate the conclusions we have derived. The modeling results will be beneficial for understanding the concept of immunoediting and for developing cancer immunotherapy. Besides, our simple theoretical model can help obtain new insight into the complexity of tumor growth.
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, whereas the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
Identification and stochastic control of helicopter dynamic modes
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Bar-Shalom, Y.
1983-01-01
A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems which are governed by periodic coefficients as well as constant-coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem as well as the stochastic control problem, which includes combined identification and control for dynamic systems, is addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for on-line use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.
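Recursive identification of the kind described above can be illustrated with a scalar recursive least-squares estimator, a simplified linear stand-in for the extended Kalman filter used in the paper. The system, gains, and noise levels below are invented for the example.

```python
# Illustrative recursive least-squares sketch of on-line parameter
# identification for y = theta * u + noise (not the paper's rotor model).
import random

def identify(samples):
    """Recursively estimate theta from (u, y) pairs."""
    theta, p = 0.0, 100.0           # initial estimate and its variance
    r = 0.04                        # assumed measurement-noise variance
    for u, y in samples:
        k = p * u / (u * u * p + r)      # gain
        theta += k * (y - theta * u)     # innovation update
        p = (1.0 - k * u) * p            # variance update
    return theta

rng = random.Random(1)
true_theta = 2.5
data = [(u, true_theta * u + rng.gauss(0.0, 0.2))
        for u in (rng.uniform(0.5, 1.5) for _ in range(200))]
print(identify(data))  # converges near the true value 2.5
```

Each measurement refines the estimate without reprocessing past data, which is what makes such recursions suitable for the on-line use the abstract requires.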
Effect of sample volume on metastable zone width and induction time
NASA Astrophysics Data System (ADS)
Kubota, Noriaki
2012-04-01
The metastable zone width (MSZW) and the induction time, measured for a large sample (say, >0.1 L), are reproducible and deterministic, while, for a small sample (say, <1 mL), these values are irreproducible and stochastic. Such behaviors of the MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of the stochastic MSZW and induction time were derived. The average values of the stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. These different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
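The volume dependence of the average stochastic induction time can be sketched under the common assumption (not necessarily the paper's exact model) that primary nucleation is a Poisson process with rate J per unit volume, so the first-nucleus time in volume V is exponential with mean 1/(J·V). The value of J below is illustrative.

```python
# Hedged sketch: sampled mean induction times shrink as sample volume grows,
# consistent with the trend described in the abstract.
import random

J = 5.0e3  # nucleation events per litre per second (illustrative value)

def mean_induction_time(volume_l: float, n: int, rng: random.Random) -> float:
    """Average of n sampled first-nucleation times for a sample of volume_l."""
    return sum(rng.expovariate(J * volume_l) for _ in range(n)) / n

rng = random.Random(7)
means = {v: mean_induction_time(v, 5000, rng) for v in (1e-3, 1e-2, 1e-1)}
for v, m in sorted(means.items()):
    print(f"{v:.0e} L: mean induction time {m:.4f} s")
```

For a large enough volume the first nucleus appears almost immediately, which is one way to see why large-sample measurements look deterministic.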
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community. PMID:28630888
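To make the underlying algorithm concrete, here is a minimal direct-method SSA for a birth-death process. This is not GillesPy's API, just a plain-Python sketch of the Gillespie algorithm that such packages wrap; the rates and initial count are arbitrary.

```python
# Gillespie direct method for X -> X+1 (rate birth*X) and X -> X-1 (rate death*X).
import random

def ssa_birth_death(x0, birth, death, t_end, rng):
    t, x = 0.0, x0
    trajectory = [(0.0, x0)]
    while t < t_end and x > 0:
        a_birth, a_death = birth * x, death * x   # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)             # exponential waiting time
        if t >= t_end:
            break
        x += 1 if rng.random() * a_total < a_birth else -1
        trajectory.append((t, x))
    return trajectory

rng = random.Random(3)
traj = ssa_birth_death(x0=50, birth=1.0, death=1.1, t_end=10.0, rng=rng)
print(len(traj), traj[-1][1])
```

Packages such as GillesPy add the model-definition layer and optimized backends on top of this exact sampling scheme.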
Dynamics of a stochastic cell-to-cell HIV-1 model with distributed delay
NASA Astrophysics Data System (ADS)
Ji, Chunyan; Liu, Qun; Jiang, Daqing
2018-02-01
In this paper, we consider a stochastic cell-to-cell HIV-1 model with distributed delay. Firstly, we show that there is a global positive solution of this model before exploring its long-time behavior. Then sufficient conditions for extinction of the disease are established. Moreover, we obtain sufficient conditions for the existence of an ergodic stationary distribution of the model by constructing a suitable stochastic Lyapunov function. The stationary distribution implies that the disease is persistent in the mean. Finally, we provide some numerical examples to illustrate theoretical results.
A stochastic chemostat model with an inhibitor and noise independent of population sizes
NASA Astrophysics Data System (ADS)
Sun, Shulin; Zhang, Xiaolu
2018-02-01
In this paper, a stochastic chemostat model with an inhibitor is considered; the inhibitor is input from an external source, and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing some suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded for relatively small noise. That is, the asymptotic behaviors of the stochastic system around the equilibrium points of the deterministic system are studied. However, sufficiently large noise can make the microorganisms become extinct with probability one, although the solutions of the original deterministic model may be persistent. Finally, the obtained analytical results are illustrated by computer simulations.
Dynamics of stochastic SEIS epidemic model with varying population size
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Wei, Fengying
2016-12-01
In this paper we introduce stochasticity into a deterministic susceptible-exposed-infected model with varying population size. The infected individuals can return to the susceptible compartment after recovering. We show that the stochastic model possesses a unique global solution by constructing a suitable Lyapunov function and using the generalized Itô formula. The densities of the exposed and infected tend to extinction when certain conditions hold. Moreover, conditions for the persistence of the global solution are derived when the parameters satisfy some simple criteria. The stochastic model admits a stationary distribution around the endemic equilibrium, which means that the disease will prevail. To check the validity of the main results, numerical simulations are presented at the end of this contribution.
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0^SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Besides, when R0^SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are also established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
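Stochastic SIR models of this type are typically simulated with the Euler-Maruyama scheme. The sketch below uses the standard formulation with white noise on the bilinear transmission term and omits the saturated incidence rate studied in the paper; all parameter values are illustrative.

```python
# Euler-Maruyama sketch of a stochastic SIR model with multiplicative noise
# on transmission: dS = -(beta S I)dt - sigma S I dW, dI = (beta S I - gamma I)dt
# + sigma S I dW, dR = gamma I dt. Total population is conserved by construction.
import random

def simulate_sir(beta, gamma, sigma, s0, i0, dt, steps, rng):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        incidence = beta * s * i * dt + sigma * s * i * dw
        recovery = gamma * i * dt
        s -= incidence
        i += incidence - recovery
        r += recovery
    return s, i, r

rng = random.Random(11)
s, i, r = simulate_sir(beta=0.5, gamma=0.2, sigma=0.05,
                       s0=0.99, i0=0.01, dt=0.01, steps=5000, rng=rng)
print(s, i, r)
```

Running this for small versus large sigma gives a quick numerical feel for the extinction-versus-prevalence dichotomy the threshold result formalises.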
Role of demographic stochasticity in a speciation model with sexual reproduction
NASA Astrophysics Data System (ADS)
Lafuerza, Luis F.; McKane, Alan J.
2016-03-01
Recent theoretical studies have shown that demographic stochasticity can greatly increase the tendency of asexually reproducing phenotypically diverse organisms to spontaneously evolve into localized clusters, suggesting a simple mechanism for sympatric speciation. Here we study the role of demographic stochasticity in a model of competing organisms subject to assortative mating. We find that in models with sexual reproduction, noise can also lead to the formation of phenotypic clusters in parameter ranges where deterministic models would lead to a homogeneous distribution. In some cases, noise can have a sizable effect, rendering the deterministic modeling insufficient to understand the phenotypic distribution.
Stochastic Ordering Using the Latent Trait and the Sum Score in Polytomous IRT Models.
ERIC Educational Resources Information Center
Hemker, Bas T.; Sijtsma, Klaas; Molenaar, Ivo W.; Junker, Brian W.
1997-01-01
Stochastic ordering properties are investigated for a broad class of item response theory (IRT) models for which the monotone likelihood ratio does not hold. A taxonomy is given for nonparametric and parametric polytomous models based on the hierarchical relationships between the models. (SLD)
Stochastic analysis of future vehicle populations
DOT National Transportation Integrated Search
1979-05-01
The purpose of this study was to build a stochastic model of future vehicle populations. Such a model can be used to investigate the uncertainties inherent in Future Vehicle Populations. The model, which is called the Future Automobile Population Sto...
Evidence-based Controls for Epidemics Using Spatio-temporal Stochastic Model as a Bayesian Framework
USDA-ARS?s Scientific Manuscript database
The control of highly infectious diseases of agricultural and plantation crops and livestock represents a key challenge in epidemiological and ecological modelling, with implemented control strategies often being controversial. Mathematical models, including the spatio-temporal stochastic models con...
On the impact of a refined stochastic model for airborne LiDAR measurements
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig
2016-09-01
Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
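Forward error propagation of the kind the abstract describes can be sketched with a first-order (FOSM-style) formula applied to one simplified observation term. The equation below, point height as range times the cosine of the scan angle, is a deliberately reduced stand-in for the full LiDAR geo-referencing equation, and all noise values are illustrative.

```python
# First-order error propagation for z = range * cos(scan_angle):
# sigma_z^2 = (dz/drange * sigma_range)^2 + (dz/dangle * sigma_angle)^2.
import math

def height_std(range_m, angle_rad, sigma_range, sigma_angle):
    dz_drange = math.cos(angle_rad)
    dz_dangle = -range_m * math.sin(angle_rad)
    return math.sqrt((dz_drange * sigma_range) ** 2
                     + (dz_dangle * sigma_angle) ** 2)

# 1000 m range, 15 degree scan angle, 3 cm range noise, 0.005 degree angle noise
sigma = height_std(1000.0, math.radians(15.0), 0.03, math.radians(0.005))
print(round(sigma, 4))
```

Refining the stochastic model, as with the variance component estimation in the paper, amounts to replacing the assumed sigma values with estimated ones and re-running such a propagation, which also reveals which observation dominates the error budget.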
A kinetic theory for age-structured stochastic birth-death processes
NASA Astrophysics Data System (ADS)
Chou, Tom; Greenman, Chris
Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but they are structurally unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Conversely, current theories that include size-dependent population dynamics (e.g., carrying capacity) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a BBGKY-like hierarchy. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution.
Parihar, Abhinav; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit
2018-01-01
Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to addition of algorithmic noise to digital machines which is inherently inefficient; albeit recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks we need experimental evidence of devices with measurable and controllable stochasticity which is complemented with the development of reliable statistical models of such observed stochasticity. Current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article where we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on electrically induced phase-transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO2) based IMT neurons which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations, which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary resulting in transfer curves that closely match experiments. 
The moments of interspike intervals are calculated analytically by extending the first-passage-time (FPT) models for the Ornstein-Uhlenbeck (OU) process to include a fluctuating boundary. We find that the coefficient of variation of interspike intervals depends on the relative proportion of thermal and threshold noise, where threshold noise is the dominant source in the current experimental demonstrations. As one of the first comprehensive studies of a stochastic neuron hardware and its statistical properties, this article would enable efficient implementation of a large class of neuro-mimetic networks and algorithms. PMID:29670508
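The noise picture in this abstract, an OU process crossing a fluctuating boundary, can be illustrated with a short first-passage simulation. All parameters below are made up; they do not correspond to the VO2 device measurements.

```python
# Illustrative OU first-passage simulation with a threshold redrawn per spike
# (a crude stand-in for the fluctuating-boundary model): the interspike
# intervals are the first-passage times, summarised by their CV.
import random
import statistics

def isi_samples(n, rng, tau=1.0, mu=1.2, sigma=0.3,
                thr_mean=1.0, thr_sd=0.1, dt=0.001):
    intervals = []
    for _ in range(n):
        v, t = 0.0, 0.0
        threshold = rng.gauss(thr_mean, thr_sd)   # threshold noise
        while v < threshold:
            # OU step: dv = (mu - v)/tau dt + sigma dW (thermal noise)
            v += (mu - v) / tau * dt + sigma * rng.gauss(0.0, dt ** 0.5)
            t += dt
        intervals.append(t)
    return intervals

rng = random.Random(5)
isis = isi_samples(300, rng)
cv = statistics.stdev(isis) / statistics.mean(isis)
print(round(cv, 2))
```

Varying sigma (thermal noise) against thr_sd (threshold noise) changes the resulting CV, which is the dependence the analytical FPT extension quantifies.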
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
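For context, the baseline against which such speedups are measured is the direct (Gillespie) stochastic simulation algorithm. A minimal two-genotype birth/death/mutation version is sketched below with illustrative rates; the talk's accelerated algorithm itself is not reproduced here:

```python
import numpy as np

def direct_ssa(n0=(50, 0), birth=(1.0, 1.2), death=0.9, mu=1e-2,
               t_end=5.0, seed=1):
    """Direct (Gillespie) SSA for a two-genotype birth/death/mutation
    process -- the reference algorithm that fast variants accelerate.
    Rates and initial populations are illustrative."""
    rng = np.random.default_rng(seed)
    n = np.array(n0, dtype=float)
    b = np.asarray(birth, dtype=float)
    t = 0.0
    while t < t_end and n.sum() > 0:
        # Six channels: faithful birth of i, death of i, mutant birth from i.
        rates = np.concatenate([b * n, death * n, mu * b * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # time to next event
        r = rng.choice(6, p=rates / total)      # which event fires
        i = r % 2
        if r < 2:
            n[i] += 1                           # birth
        elif r < 4:
            n[i] -= 1                           # death
        else:
            n[1 - i] += 1                       # birth with mutation
    return n
```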
Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces
NASA Astrophysics Data System (ADS)
Vacaru, S. I.
2012-03-01
We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculi are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions, we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form, and we study their main geometric and stochastic properties. Finally, we analyze the conditions under which non-random classical gravitational processes transform into stochastic ones and vice versa.
Mo Zhou; Joseph Buongiorno
2011-01-01
Most economic studies of forest decision making under risk assume a fixed interest rate, yet interest rates are in fact stochastic. This paper investigated some implications of this stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks
Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.
2015-01-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2017-04-01
In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. 
http://dx.doi.org/10.1175/JCLI-D-15-0746.1
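A toy illustration of the scheme's two knobs, noise amplitude and decorrelation time: the sketch below adds zero-mean AR(1) ("red") noise to the temperature tendency of a scalar mixed-layer model relaxing toward equilibrium. It is a caricature of the parameterization, not the GCM implementation, and all values are illustrative:

```python
import numpy as np

def ar1_noise(n_steps, amplitude, tau_decorr, dt, rng):
    """Zero-mean AR(1) ('red') noise with prescribed amplitude and
    decorrelation time -- the two knobs varied in the study."""
    phi = np.exp(-dt / tau_decorr)
    eps = np.zeros(n_steps)
    for k in range(1, n_steps):
        eps[k] = phi * eps[k - 1] + amplitude * np.sqrt(1.0 - phi**2) \
                 * rng.standard_normal()
    return eps

def integrate_sst(t_end=100.0, dt=0.1, t_eq=15.0, tau_relax=5.0,
                  noise_amp=0.5, tau_decorr=1.0, seed=0):
    """Toy mixed-layer temperature relaxing toward t_eq, with a stochastic
    eddy term added to the temperature tendency at every step."""
    rng = np.random.default_rng(seed)
    n = int(round(t_end / dt))
    eps = ar1_noise(n, noise_amp, tau_decorr, dt, rng)
    temp = np.empty(n)
    temp[0] = t_eq
    for k in range(1, n):
        temp[k] = temp[k - 1] + dt * ((t_eq - temp[k - 1]) / tau_relax + eps[k])
    return temp
```

Even though the noise is zero-mean, nonlinear models respond with a shifted mean state; in this linear toy the time-mean stays near `t_eq`, which makes it a useful null case.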
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently.
We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
Price sensitive demand with random sales price - a newsboy problem
NASA Astrophysics Data System (ADS)
Sankar Sana, Shib
2012-03-01
Up to now, many newsboy problems have been considered in the stochastic inventory literature. Some assume that stochastic demand is independent of the selling price (p), and others consider demand as a function of a stochastic shock factor and a deterministic sales price. This article introduces price-dependent demand with a stochastic selling price into the classical newsboy problem. The proposed model analyses the expected average profit for a general distribution function of p and obtains the optimal order size. Finally, the model is discussed for various appropriate distribution functions of p and illustrated with numerical examples.
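The optimization the abstract describes can be sketched by Monte Carlo: draw the random price, form price-dependent demand, and search over order sizes. The distributional choices below (uniform price, additive normal demand shock) and all numbers are assumptions for illustration, not the article's model:

```python
import numpy as np

def expected_profit(q, n_mc=20000, a=100.0, b=2.0, unit_cost=10.0,
                    salvage=2.0, seed=0):
    """Monte Carlo expected profit for order quantity q when the selling
    price p is random (uniform here, purely for illustration) and demand is
    price-dependent with an additive shock: D = max(a - b*p + eps, 0)."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(15.0, 25.0, n_mc)                     # random sales price
    d = np.maximum(a - b * p + rng.normal(0.0, 5.0, n_mc), 0.0)
    sold = np.minimum(q, d)
    leftover = np.maximum(q - d, 0.0)
    return float((p * sold + salvage * leftover - unit_cost * q).mean())

# Optimal order size by direct search over a reasonable grid; fixing the
# seed gives common random numbers across q, so the curve is smooth.
best_q = max(range(30, 90), key=expected_profit)
```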
Adalsteinsson, David; McMillen, David; Elston, Timothy C
2004-03-08
Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
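For a single birth-death species, the chemical Langevin construction reduces to a scalar SDE that can be integrated with Euler-Maruyama. The sketch below is a generic illustration of that construction, not BioNetS code; its time-averaged copy number should sit near the stationary mean k/gamma:

```python
import numpy as np

def cle_birth_death(k=50.0, gamma=1.0, t_end=20.0, burn_in=5.0,
                    dt=1e-3, seed=0):
    """Euler-Maruyama integration of the chemical Langevin equation for a
    species produced at rate k and degraded at rate gamma*x.  Each reaction
    channel contributes its own noise term, scaled by its propensity.
    Returns the time-averaged copy number after burn-in (mean is k/gamma)."""
    rng = np.random.default_rng(seed)
    x, total, count = 0.0, 0.0, 0
    for step in range(int(t_end / dt)):
        a_birth, a_death = k, gamma * max(x, 0.0)   # reaction propensities
        x += (a_birth - a_death) * dt \
             + np.sqrt(a_birth * dt) * rng.standard_normal() \
             - np.sqrt(a_death * dt) * rng.standard_normal()
        if step * dt >= burn_in:
            total += x
            count += 1
    return total / count
```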
Simple and Hierarchical Models for Stochastic Test Misgrading.
ERIC Educational Resources Information Center
Wang, Jianjun
1993-01-01
Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
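Under the simple Poisson model these quantities have closed forms: the count over (0, t] has mean rate*t, inter-occurrence times are exponential, and the waiting time for the n-th misgrading is Erlang (gamma with integer shape). A minimal calculator, with an illustrative rate:

```python
def misgrading_stats(rate, t, n):
    """Closed-form quantities of the simple Poisson misgrading model:
    expected number of misgradings in (0, t], mean inter-occurrence time,
    and the mean and variance of the waiting time for the n-th misgrading,
    which is Erlang(n, rate) distributed.  `rate` is per unit time."""
    expected_count = rate * t
    mean_gap = 1.0 / rate          # exponential inter-occurrence time
    wait_mean = n / rate           # Erlang mean
    wait_var = n / rate ** 2       # Erlang variance
    return expected_count, mean_gap, wait_mean, wait_var
```

For example, a rate of 0.5 misgradings per unit time over 10 units gives an expected count of 5 and a mean wait of 6 units for the third misgrading. The hierarchical Beta-Poisson variant would instead mix the rate over a beta-distributed grader parameter.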
AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)
Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...
Stochastic lattice model of synaptic membrane protein domains.
Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A
2017-05-01
Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity.
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.
Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason
2017-07-01
Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (F abs ) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting F abs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.
Chen, Bor-Sen; Yeh, Chin-Hsun
2017-12-01
We review current static and dynamic evolutionary game strategies of biological networks and discuss the lack of random genetic variations and stochastic environmental disturbances in these models. To include these factors, a population of evolving biological networks is modeled as a nonlinear stochastic biological system with Poisson-driven genetic variations and random environmental fluctuations (stimuli). To gain insight into the evolutionary game theory of stochastic biological networks under natural selection, the phenotypic robustness and network evolvability of noncooperative and cooperative evolutionary game strategies are discussed from a stochastic Nash game perspective. The noncooperative strategy can be transformed into an equivalent multi-objective optimization problem and is shown to display significantly improved network robustness to tolerate genetic variations and buffer environmental disturbances, maintaining phenotypic traits for longer than the cooperative strategy. However, the noncooperative case requires greater effort and more compromises between partly conflicting players. Global linearization is used to simplify the problem of solving nonlinear stochastic evolutionary games. Finally, a simple stochastic evolutionary model of a metabolic pathway is simulated to illustrate the procedure of solving for two evolutionary game strategies and to confirm and compare their respective characteristics in the evolutionary process. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Xingwang; Yuan, Sanling; Zhang, Tonghua
2018-06-01
The Allee effect can interact with environmental stochasticity and is active when population numbers are small. The goal of this paper is to investigate such effects on population dynamics. More precisely, we develop and investigate a stochastic single-species model with an Allee effect under regime switching. We first prove the existence of a global positive solution of the model. Then, we perform a survival analysis to seek sufficient conditions for extinction, non-persistence in mean, persistence in mean and stochastic permanence. By constructing a suitable Lyapunov function, we show that the model is positive recurrent and ergodic. Our results indicate that regime switching can suppress the extinction of the species. Finally, numerical simulations are carried out to illustrate the theoretical results, including a real-life example showing that the inclusion of the Allee effect in the model provides a better match to the data.
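A minimal numerical sketch of such a model: an SDE with an Allee-type drift whose growth rate switches between two regimes via a two-state Markov chain. The drift form and all parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

def allee_switching(x0=30.0, t_end=50.0, dt=1e-2, A=10.0, K=100.0,
                    r=(0.2, 0.6), q=0.1, sigma=0.05, seed=0):
    """Euler-Maruyama integration of a stochastic Allee model under regime
    switching: dX = r_s X (X/A - 1)(1 - X/K) dt + sigma X dW, where the
    regime s flips with rate q.  Starting above the Allee threshold A, the
    trajectory is attracted to the carrying capacity K."""
    rng = np.random.default_rng(seed)
    x, s = x0, 0
    sqdt = np.sqrt(dt)
    for _ in range(int(t_end / dt)):
        if rng.random() < q * dt:                  # Markov regime switch
            s = 1 - s
        drift = r[s] * x * (x / A - 1.0) * (1.0 - x / K)
        x += drift * dt + sigma * x * sqdt * rng.standard_normal()
        x = max(x, 0.0)                            # population stays nonnegative
    return x
```

Starting below `A` instead (e.g. `x0=5.0`) makes extinction the typical outcome, which is the Allee effect the abstract describes.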
Zimmer, Christoph
2016-01-01
Background Computational modeling is a key technique for analyzing models in systems biology. There are well established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often shows intrinsic stochastic effects prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted for the last years, only very few articles focus on experimental design for stochastic models. Methods The Fisher information matrix is the central measure for experimental design as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistically sized models. PMID:27583802
NASA Astrophysics Data System (ADS)
Liu, Xiangdong; Li, Qingze; Pan, Jianxin
2018-06-01
Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease conditions from months to years, which means the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed to characterize the dynamical change of tumor cells and immune cells in this paper. The basic dynamical properties, such as boundedness, existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competitions, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, confirming the theoretical results.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
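The annealing component can be illustrated on a toy problem: recover the rate of a Poisson observation model from its mean count, with a noisy simulated objective. This stand-in omits the sequential hypothesis testing and statistical model checking of the actual algorithm, and all settings are illustrative:

```python
import numpy as np

def anneal_rate(observed_mean, n_iter=400, t0=1.0, seed=0):
    """Toy simulated-annealing search for the rate of a Poisson model so
    that simulated mean counts match an observed mean.  The objective is
    itself stochastic, as in parameter discovery for stochastic models."""
    rng = np.random.default_rng(seed)

    def loss(rate):
        # Distance between a simulated summary statistic and the data.
        return abs(rng.poisson(rate, 200).mean() - observed_mean)

    current = 1.0
    current_loss = loss(current)
    best, best_loss = current, current_loss
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter) + 1e-3        # cooling schedule
        candidate = abs(current + rng.normal(0.0, 0.5))
        candidate_loss = loss(candidate)
        accept = (candidate_loss < current_loss or
                  rng.random() < np.exp(-(candidate_loss - current_loss) / temp))
        if accept:
            current, current_loss = candidate, candidate_loss
            if current_loss < best_loss:
                best, best_loss = current, current_loss
    return best
```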
Stochastic volatility models and Kelvin waves
NASA Astrophysics Data System (ADS)
Lipton, Alex; Sepp, Artur
2008-08-01
We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of the finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
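A short Monte Carlo sketch of the SSM setup: the volatility follows an OU process and is free to change sign, which is harmless for pricing because only sigma^2 enters the variance of returns. Zero price-volatility correlation and all parameter values are simplifying assumptions for illustration:

```python
import numpy as np

def ssm_call_price(s0=100.0, strike=100.0, t=1.0, r=0.0, kappa=2.0,
                   theta=0.2, alpha=0.3, sig0=0.2, n_paths=20000,
                   n_steps=100, seed=0):
    """Monte Carlo vanilla call price under a Stein-Stein-type model:
    dS/S = r dt + sigma dW1, d(sigma) = kappa (theta - sigma) dt + alpha dW2,
    with independent Brownian drivers.  The OU volatility may go negative;
    since sigma enters the log-price drift as sigma^2 and the diffusion
    symmetrically, this causes no pricing pathology."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0)
    sig = np.full(n_paths, sig0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        s *= np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z1)
        sig += kappa * (theta - sig) * dt + alpha * np.sqrt(dt) * z2
    return float(np.exp(-r * t) * np.maximum(s - strike, 0.0).mean())
```

With these inputs the at-the-money price lands near the Black-Scholes value for 20% volatility (roughly 0.4·S0·sigma·sqrt(t)), perturbed by the volatility of volatility.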
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Hayat, Tasawar; Alsaedi, Ahmed
2018-01-01
In this paper, we develop and study a stochastic predator-prey model with stage structure for the predator and a Holling type II functional response. First, by constructing a suitable stochastic Lyapunov function, we establish sufficient conditions for the existence and uniqueness of an ergodic stationary distribution of the positive solutions to the model. Then, we obtain sufficient conditions for extinction of the predator populations in two cases: in the first, the prey population survives while the predator populations go extinct; in the second, all prey and predator populations go extinct. The existence of a stationary distribution implies stochastic weak stability. Numerical simulations are carried out to demonstrate the analytical results.
Didactic discussion of stochastic resonance effects and weak signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adair, R.K.
1996-12-01
A simple, paradigmatic model is used to illustrate some general properties of effects subsumed under the label stochastic resonance. In particular, analyses of the transparent model show that (1) a small amount of noise added to a much larger signal can greatly increase the response to the signal, but (2) a weak signal added to much larger noise will not generate a substantial added response. The conclusions drawn from the model illustrate the general result that stochastic resonance effects do not provide an avenue for signals that are much smaller than noise to affect biology. A further analysis demonstrates the effects of small signals in the shifting of biologically important chemical equilibria under conditions where stochastic resonance effects are significant.
On Local Homogeneity and Stochastically Ordered Mixed Rasch Models
ERIC Educational Resources Information Center
Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg
2006-01-01
Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…
Kinetic theory of age-structured stochastic birth-death processes
NASA Astrophysics Data System (ADS)
Greenman, Chris D.; Chou, Tom
2016-01-01
Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but are unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Stochastic theories that treat semi-Markov age-dependent processes using, e.g., the Bellman-Harris equation do not resolve a population's age structure and are unable to quantify population-size dependencies. Conversely, current theories that include size-dependent population dynamics (e.g., mathematical models that include carrying capacity such as the logistic equation) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new, fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a Bogoliubov-Born-Green-Kirkwood-Yvon-like hierarchy. Explicit solutions are derived in three limits: no birth, no death, and steady state. These are then compared with their corresponding mean-field results. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution.
The critical domain size of stochastic population models.
Reimer, Jody R; Bonsall, Michael B; Maini, Philip K
2017-02-01
Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternative approaches: one is a scaling up to the population level using the Central Limit Theorem, and the other is a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
Stochastic models for regulatory networks of the genetic toggle switch.
Tian, Tianhai; Burrage, Kevin
2006-05-30
Bistability arises within a wide range of biological systems from the lambda phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
Stochastic models for regulatory networks of the genetic toggle switch
Tian, Tianhai; Burrage, Kevin
2006-01-01
Bistability arises within a wide range of biological systems from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks. PMID:16714385
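The construction described above (replacing each deterministic flux f*dt of the ODE model by a Poisson(f*dt) count) can be sketched for a two-gene toggle switch. The rate constants and Hill exponents are illustrative, not the paper's SOS or quorum-sensing parametrisation.

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson(lam) count via Knuth's algorithm; adequate for small rates."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def toggle_step(u, v, dt, rng, a1=50.0, a2=16.0, beta=2.5, gamma=1.0):
    """One Poisson-discretised step of the toggle-switch ODEs
    du/dt = a1/(1 + v^beta) - u,  dv/dt = a2/(1 + u^gamma) - v.
    Each deterministic flux f*dt becomes a Poisson(f*dt) molecule count,
    so copy numbers stay integer and fluctuate cell-to-cell."""
    u_new = u + poisson(rng, a1 / (1 + v ** beta) * dt) - poisson(rng, u * dt)
    v_new = v + poisson(rng, a2 / (1 + u ** gamma) * dt) - poisson(rng, v * dt)
    return max(u_new, 0), max(v_new, 0)  # molecule counts cannot go negative

rng = random.Random(7)
u, v = 10, 10
for _ in range(2000):
    u, v = toggle_step(u, v, 0.01, rng)
```

Repeated runs from the same initial state settle into one of the two basins at random, which is how the bimodal population distributions arise.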
Garijo, N; Manzano, R; Osta, R; Perez, M A
2012-12-07
Cell migration and proliferation have been modelled in the literature as processes similar to diffusion. However, using diffusion models to simulate the proliferation and migration of cells tends to create a homogeneous distribution in the cell density that does not correlate with empirical observations. In fact, the mechanism of cell dispersal is not diffusion. Cells disperse by crawling or proliferation, or are transported in a moving fluid. The use of cellular automata, particle models or cell-based models can overcome this limitation. This paper presents a stochastic cellular automata model to simulate the proliferation, migration and differentiation of cells. These processes are treated as completely stochastic as well as discrete. The model developed was applied to predict the behaviour of in vitro cell cultures performed with adult muscle satellite cells. Moreover, a non-homogeneous distribution of cells was observed inside the culture well, and, using the above-mentioned stochastic cellular automata model, we were able to predict this heterogeneous cell distribution and compute accurate quantitative results. Differentiation was also incorporated into the computational simulation. The results predicted the myotube formation that typically occurs with adult muscle satellite cells. In conclusion, we have shown how a stochastic cellular automata model can be implemented and is capable of reproducing the in vitro behaviour of adult muscle satellite cells.
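A minimal 1-D sketch of such a stochastic cellular automaton, with illustrative move/division probabilities rather than the paper's calibrated 2-D culture-well model: each occupied site may divide into or crawl to an empty neighbour, so heterogeneous clusters form instead of a smooth diffusive profile.

```python
import random

def ca_step(grid, p_move, p_div, rng):
    """One update of a 1-D stochastic cellular automaton. Each occupied site
    may divide into an empty neighbour with probability p_div, or migrate
    there with probability p_move (rules and parameters are illustrative)."""
    n = len(grid)
    new = grid[:]
    order = [i for i in range(n) if grid[i]]
    rng.shuffle(order)  # random update order avoids directional bias
    for i in order:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n and not new[j]]
        if not nbrs:
            continue  # crowded cells neither move nor divide
        r = rng.random()
        if r < p_div:
            new[rng.choice(nbrs)] = 1      # proliferation: daughter fills a free site
        elif r < p_div + p_move:
            j = rng.choice(nbrs)
            new[j], new[i] = 1, 0          # migration: the cell crawls one site
    return new

rng = random.Random(0)
grid = [0] * 50
grid[25] = 1  # a single seeded cell
for _ in range(200):
    grid = ca_step(grid, p_move=0.5, p_div=0.1, rng=rng)
```

The occupied region grows outward from the seed as a ragged, stochastic front rather than a smooth density profile.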
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
Effects of stochastic time-delayed feedback on a dynamical system modeling a chemical oscillator.
González Ochoa, Héctor O; Perales, Gualberto Solís; Epstein, Irving R; Femat, Ricardo
2018-05-01
We examine how stochastic time-delayed negative feedback affects the dynamical behavior of a model oscillatory reaction. We apply constant and stochastic time-delayed negative feedbacks to a point Field-Körös-Noyes photosensitive oscillator and compare their effects. Negative feedback is applied in the form of simulated inhibitory electromagnetic radiation with an intensity proportional to the concentration of oxidized light-sensitive catalyst in the oscillator. We first characterize the system under nondelayed inhibitory feedback; then we explore and compare the effects of constant (deterministic) versus stochastic time-delayed feedback. We find that the oscillatory amplitude, frequency, and waveform are essentially preserved when low-dispersion stochastic delayed feedback is used, whereas small but measurable changes appear when a large dispersion is applied.
Stochastic modeling of Lagrangian accelerations
NASA Astrophysics Data System (ADS)
Reynolds, Andy
2002-11-01
It is shown how Sawford's second-order Lagrangian stochastic model (Phys. Fluids A 3, 1577-1586, 1991) for fluid-particle accelerations can be combined with a model for the evolution of the dissipation rate (Pope and Chen, Phys. Fluids A 2, 1437-1449, 1990) to produce a Lagrangian stochastic model that is consistent with both the measured distribution of Lagrangian accelerations (La Porta et al., Nature 409, 1017-1019, 2001) and Kolmogorov's similarity theory. The latter condition is found not to be satisfied when a constant dissipation rate is employed, and consistency with prescribed acceleration statistics is enforced through fulfilment of a well-mixed condition.
Dynamical behavior of a stochastic SVIR epidemic model with vaccination
NASA Astrophysics Data System (ADS)
Zhang, Xinhong; Jiang, Daqing; Hayat, Tasawar; Ahmad, Bashir
2017-10-01
In this paper, we investigate the dynamical behavior of SVIR models in random environments. Firstly, we show that if R0s < 1, the disease in the stochastic autonomous SVIR model will die out exponentially; if R̃0s > 1, the disease will prevail. Moreover, this system admits a unique stationary distribution and it is ergodic when R̃0s > 1. Results show that environmental white noise is helpful for disease control. Secondly, we give sufficient conditions for the existence of nontrivial periodic solutions to the stochastic SVIR model with periodic parameters. Finally, numerical simulations validate the analytical results.
Decentralized Network Interdiction Games
2015-12-31
approach is termed the sample average approximation (SAA) method, and theories on the asymptotic convergence to the original problem's optimal...used in the SAA method's convergence. While we provided detailed proof of such convergence in [P3], a side benefit of the proof is that it weakens the...conditions required when applying the general SAA approach to the block-structured stochastic programming problem. As the conditions known in the
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
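For contrast with the SHC, geometric Brownian motion (one of the existing models of stochastic exponential growth the abstract cites) can be simulated directly. Parameter values here are illustrative; the sketch only shows that the ensemble mean grows exponentially, E[X_T] = exp(mu*T).

```python
import math
import random

def gbm_final_values(n_paths, n_steps, mu, sigma, dt, rng):
    """Simulate geometric Brownian motion with the exact-in-distribution
    update X <- X * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z) and return
    the terminal values of n_paths independent paths."""
    finals = []
    for _ in range(n_paths):
        x = 1.0
        for _ in range(n_steps):
            x *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0, 1))
        finals.append(x)
    return finals

rng = random.Random(3)
finals = gbm_final_values(2000, 50, mu=1.0, sigma=0.3, dt=0.02, rng=rng)
mean_final = sum(finals) / len(finals)  # theory: E[X_T] = exp(mu*T) = e for T = 1
```

The sample mean lands close to exp(1), illustrating exponential mean growth even though individual paths fluctuate.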
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It could not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that it could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
MONALISA for stochastic simulations of Petri net models of biochemical systems.
Balazki, Pavel; Lindauer, Klaus; Einloft, Jens; Ackermann, Jörg; Koch, Ina
2015-07-10
The concept of Petri nets (PN) is widely used in systems biology and allows modeling of complex biochemical systems like metabolic systems, signal transduction pathways, and gene expression networks. In particular, PN allows topological analysis based on structural properties, which is important and useful when quantitative (kinetic) data are incomplete or unknown. Knowing the kinetic parameters, the simulation of time evolution of such models can help to study the dynamic behavior of the underlying system. If the number of involved entities (molecules) is low, a stochastic simulation should be preferred over the classical deterministic approach of solving ordinary differential equations. The Stochastic Simulation Algorithm (SSA) is a common method for such simulations. The combination of the qualitative and semi-quantitative PN modeling and stochastic analysis techniques provides a valuable approach in the field of systems biology. Here, we describe the implementation of stochastic analysis in a PN environment. We extended MONALISA, an open-source software tool for the creation, visualization and analysis of PN, with several stochastic simulation methods. The simulation module offers four simulation modes, among them the stochastic mode with constant firing rates and Gillespie's algorithm in exact and approximate versions. The simulator is operated by a user-friendly graphical interface and accepts input data such as concentrations and reaction rate constants that are common parameters in the biological context. The key features of the simulation module are visualization of simulation, interactive plotting, export of results into a text file, mathematical expressions for describing simulation parameters, and up to 500 parallel simulations of the same parameter sets. To illustrate the method we discuss a model for insulin receptor recycling as a case study.
We present a software that combines the modeling power of Petri nets with stochastic simulation of dynamic processes in a user-friendly environment supported by an intuitive graphical interface. The program offers a valuable alternative to modeling, using ordinary differential equations, especially when simulating single-cell experiments with low molecule counts. The ability to use mathematical expressions provides an additional flexibility in describing the simulation parameters. The open-source distribution allows further extensions by third-party developers. The software is cross-platform and is licensed under the Artistic License 2.0.
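A minimal sketch of the exact SSA the abstract refers to, reduced to a single decay reaction: this illustrates Gillespie's direct method only and is not the MONALISA API. The rate constant and horizon are illustrative.

```python
import math
import random

def gillespie_decay(x0, k, t_max, rng):
    """Gillespie's direct method for the single reaction X -> 0 with
    propensity a(x) = k*x: draw an exponential waiting time from the total
    propensity, fire the reaction, repeat until t_max or extinction."""
    t, x = 0.0, x0
    while x > 0:
        a = k * x                                  # total propensity
        tau = -math.log(1.0 - rng.random()) / a    # exponential waiting time
        if t + tau > t_max:
            break
        t += tau
        x -= 1                                     # fire the only reaction
    return x

rng = random.Random(42)
remaining = gillespie_decay(1000, k=1.0, t_max=1.0, rng=rng)
# mean-field prediction: 1000 * exp(-1), with binomial fluctuations around it
```

With several reaction channels the same loop picks a channel with probability proportional to its propensity, which is exactly what a stochastic PN simulator does with enabled transitions.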
A Stochastic Differential Equation Model for the Spread of HIV amongst People Who Inject Drugs.
Liang, Yanfeng; Greenhalgh, David; Mao, Xuerong
2016-01-01
We introduce stochasticity into the deterministic differential equation model for the spread of HIV amongst people who inject drugs (PWIDs) studied by Greenhalgh and Hay (1997). This was based on the original model constructed by Kaplan (1989), which analyses the behaviour of HIV/AIDS amongst a population of PWIDs. We derive a stochastic differential equation (SDE) for the fraction of PWIDs who are infected with HIV at time t. The stochasticity is introduced using the well-known standard technique of parameter perturbation. We first prove that the resulting SDE for the fraction of infected PWIDs has a unique solution in (0, 1) provided that some infected PWIDs are initially present, and then construct the conditions required for extinction and persistence. Furthermore, we show that there exists a stationary distribution for the persistence case. Simulations using realistic parameter values are then constructed to illustrate and support our theoretical results. Our results provide new insight into the spread of HIV amongst PWIDs. The results show that the introduction of stochastic noise into a model for the spread of HIV amongst PWIDs can cause the disease to die out in scenarios where deterministic models predict disease persistence.
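A minimal sketch of the parameter-perturbation idea, using an illustrative logistic SDE rather than the paper's actual PWID model: replacing the rate increment beta*dt by beta*dt + sigma*dW turns the deterministic ODE into an SDE, which can then be integrated by Euler-Maruyama.

```python
import math
import random

def em_step(x, beta, sigma, dt, rng):
    """One Euler-Maruyama step of the toy SDE
    dX = beta*X*(1-X) dt + sigma*X*(1-X) dW, obtained by perturbing
    beta dt -> beta dt + sigma dW in a logistic infection model
    (a stand-in for, not a copy of, the paper's PWID SDE)."""
    noise = math.sqrt(dt) * rng.gauss(0, 1)
    x_new = x + beta * x * (1 - x) * dt + sigma * x * (1 - x) * noise
    return min(max(x_new, 0.0), 1.0)  # keep the infected fraction in [0, 1]

rng = random.Random(5)
x = 0.01  # small initial infected fraction
trajectory = [x]
for _ in range(5000):
    x = em_step(x, beta=0.8, sigma=0.3, dt=0.01, rng=rng)
    trajectory.append(x)
```

Because the diffusion coefficient vanishes at 0 and 1, trajectories remain in the unit interval, mirroring the (0, 1) well-posedness result stated above; large sigma can drive the fraction to extinction even when the drift alone would sustain it.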
NASA Astrophysics Data System (ADS)
Agliardi, Federico; Galletti, Laura; Riva, Federico; Zanchi, Andrea; Crosta, Giovanni B.
2017-04-01
An accurate characterization of the geometry and intensity of discontinuities in a rock mass is key to assessing block size distribution and degrees of freedom. These are the main controls on the magnitude and mechanisms of rock slope instabilities (structurally-controlled, step-path or mass failures) and on rock mass strength and deformability. Nevertheless, the use of over-simplified discontinuity characterization approaches, unable to capture the stochastic nature of discontinuity features, often hampers a correct identification of the dominant rock mass behaviour. Discrete Fracture Network (DFN) modelling tools provide new opportunities to overcome these limitations. Nevertheless, their ability to provide a representative picture of reality strongly depends on the quality and scale of field data collection. Here we used DFN modelling with Fracman™ to investigate the influence of fracture intensity, characterized on different scales and with different techniques, on the geometry and size distribution of generated blocks, in a rock slope stability perspective. We focused on a test site near Lecco (Southern Alps, Italy), where 600 m high cliffs in thickly-bedded limestones, folded at the slope scale, rise above Lake Como. We characterized the 3D slope geometry by Structure-from-Motion photogrammetry (range: 150-1500 m; point cloud density > 50 pts/m²). Since the nature and attributes of discontinuities are controlled by brittle failure processes associated with large-scale folding, we performed a field characterization of meso-structural features (faults and related kinematics, vein and joint associations) in different fold domains. We characterized the discontinuity populations identified by structural geology on different spatial scales, ranging from outcrops (field surveys and photo-mapping) to large slope sectors (point clouds and photo-mapping).
For each sampling domain, we characterized discontinuity orientation statistics and performed fracture mapping and circular window analyses in order to measure fracture intensity (P21) and persistence (trace length distributions). Then, we calibrated DFN models for different combinations of P21/P32 and trace length distributions, characteristic of data collected on different scales. Comparing fracture patterns and block size distributions obtained from the different models, we outline the strong influence of field data quality and scale on the rock mass behaviours predicted by DFN models. We show that accounting for small-scale features (closely spaced but short fractures) results in smaller but more interconnected blocks, eventually characterized by low removability and partly supported by intact rock strength. On the other hand, DFN models based on data surveyed at the slope scale emphasize the structural control of persistent fractures on the kinematic degrees of freedom of medium-sized blocks, with significant impacts on the selection and parametrization of rock slope stability modelling approaches.
A study about the existence of the leverage effect in stochastic volatility models
NASA Astrophysics Data System (ADS)
Florescu, Ionuţ; Pãsãricã, Cristian Gabriel
2009-02-01
The empirical relationship between the return of an asset and the volatility of the asset has been well documented in the financial literature. Named the leverage effect or sometimes the risk-premium effect, it is observed in real data that, when the return of the asset decreases, the volatility increases and vice versa. Consequently, it is important to demonstrate that any formulated model for the asset price is capable of generating this effect observed in practice. Furthermore, we need to understand the conditions on the parameters present in the model that guarantee the emergence of the leverage effect. In this paper we analyze two general specifications of stochastic volatility models and their capability of generating the perceived leverage effect. We derive conditions for the emergence of the leverage effect in both of these stochastic volatility models. We exemplify using stochastic volatility models used in practice and we explicitly state the conditions for the existence of the leverage effect in these examples.
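One way to see how the leverage effect arises in a stochastic volatility model is direct simulation with negatively correlated driving noises. The OU volatility below and all parameter values are illustrative assumptions, not the specifications analyzed in the paper.

```python
import math
import random

def leverage_correlation(rho, n, rng, kappa=2.0, theta=0.2, xi=0.3, dt=1 / 252):
    """Simulate an OU volatility driven by noise dW_v and returns driven by
    dW_S = rho*dW_v + sqrt(1-rho^2)*dZ, then report the sample correlation
    between returns and volatility increments. A negative rho is the usual
    mechanism by which stochastic volatility models generate the leverage
    effect (illustrative parameter values)."""
    vol = theta
    rets, dvols = [], []
    for _ in range(n):
        z_v, z_2 = rng.gauss(0, 1), rng.gauss(0, 1)
        z_s = rho * z_v + math.sqrt(1 - rho * rho) * z_2  # correlated return shock
        rets.append(vol * math.sqrt(dt) * z_s)
        dv = kappa * (theta - vol) * dt + xi * math.sqrt(dt) * z_v
        dvols.append(dv)
        vol = max(vol + dv, 1e-6)  # keep the volatility positive
    m_r, m_v = sum(rets) / n, sum(dvols) / n
    cov = sum((a - m_r) * (b - m_v) for a, b in zip(rets, dvols)) / n
    s_r = math.sqrt(sum((a - m_r) ** 2 for a in rets) / n)
    s_v = math.sqrt(sum((b - m_v) ** 2 for b in dvols) / n)
    return cov / (s_r * s_v)

rng = random.Random(2)
corr = leverage_correlation(rho=-0.7, n=5000, rng=rng)
```

The sample correlation between returns and volatility changes tracks rho closely, i.e., falling returns coincide with rising volatility when rho < 0.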
Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M
2017-10-01
Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
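The "stochastic race" between concurrent alternatives mentioned above can be sketched as competing exponential clocks: each alternative draws a waiting time from its rate, and the minimum wins. This is a conceptual illustration of the mechanism, not part of ML3 itself.

```python
import math
import random

def stochastic_race(rates, rng):
    """Draw an exponential waiting time for each concurrent alternative and
    return the index of the winner. Alternative i wins with probability
    rates[i] / sum(rates), the classic competing-risks property."""
    times = [-math.log(1.0 - rng.random()) / r for r in rates]
    return min(range(len(rates)), key=lambda i: times[i])

rng = random.Random(4)
# two competing decision processes with rates 1 and 3 (illustrative values)
wins = sum(1 for _ in range(2000) if stochastic_race([1.0, 3.0], rng) == 1)
frac = wins / 2000  # theory: 3 / (1 + 3) = 0.75
```

Because the race also yields the winning time, the same draw schedules the event in continuous time, which is what lets concurrent decisions compete without a global time step.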
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Performance of stochastic approaches for forecasting river water quality.
Ahmad, S; Khan, I H; Parida, B P
2001-12-01
This study analysed water quality data collected from the river Ganges in India from 1981 to 1990 for forecasting using stochastic models. Initially, box-and-whisker plots and Kendall's tau test were used to identify the trends during the study period. For detecting possible interventions in the data, time series plots and cusum charts were used. Three stochastic modelling approaches that account for the effect of seasonality in different ways, i.e., the multiplicative autoregressive integrated moving average (ARIMA) model, the deseasonalised model and the Thomas-Fiering model, were used to model the observed pattern in water quality. Multiplicative ARIMA models having both nonseasonal and seasonal components were, in general, identified as appropriate. In the deseasonalised modelling approach, lower order ARIMA models were found appropriate for the stochastic component. A set of Thomas-Fiering models was formed for each month for all water quality parameters. These models were then used to forecast future values. The error estimates of forecasts from the three approaches were compared to identify the most suitable approach for reliable forecasting. The deseasonalised modelling approach was recommended for forecasting of water quality parameters of the river.
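The deseasonalised approach described above can be sketched as monthly-mean removal followed by a low-order autoregressive fit on the residuals; the synthetic series, AR(1) choice, and parameter values below are illustrative, not the study's fitted models.

```python
import math
import random

def deseasonalise(series, period=12):
    """Remove monthly means (the 'deseasonalised' step): return the residual
    series and the seasonal means used later to restore forecasts."""
    means = [sum(series[i::period]) / len(series[i::period]) for i in range(period)]
    resid = [x - means[i % period] for i, x in enumerate(series)]
    return resid, means

def ar1_forecast(resid, steps):
    """Least-squares AR(1) fit on the residuals, then iterate the recursion
    x_{t+1} = phi * x_t forward to forecast."""
    x, y = resid[:-1], resid[1:]
    phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    out, last = [], resid[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

# synthetic 10-year monthly series: seasonal cycle plus AR(1) noise
rng = random.Random(9)
series, e = [], 0.0
for t in range(120):
    e = 0.6 * e + rng.gauss(0, 1)
    series.append(10 + 5 * math.sin(2 * math.pi * t / 12) + e)

resid, means = deseasonalise(series)
# forecast 12 months ahead: seasonal mean plus decaying AR(1) residual
fc = [means[(120 + h) % 12] + r for h, r in enumerate(ar1_forecast(resid, 12))]
```

Restoring the seasonal means after forecasting the residuals is what lets a low-order model handle a strongly seasonal series.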
Stochastic bifurcation in a model of love with colored noise
NASA Astrophysics Data System (ADS)
Yue, Xiaokui; Dai, Honghua; Yuan, Jianping
2015-07-01
In this paper, we examine the stochastic bifurcation induced by multiplicative Gaussian colored noise in a dynamical model of love, where the random factor is used to describe the complexity and unpredictability of psychological systems. First, the dynamics of the deterministic love-triangle model are considered briefly, including equilibrium points and their stability, chaotic behaviors and chaotic attractors. Then, the influence of Gaussian colored noise with different parameters is explored through phase plots, top Lyapunov exponents, the stationary probability density function (PDF) and stochastic bifurcation. The stochastic P-bifurcation is observed through a qualitative change of the stationary PDF, and a bifurcation diagram on the parameter plane of correlation time and noise intensity is presented to examine the bifurcation behaviors in detail. Finally, the top Lyapunov exponent is computed to determine the D-bifurcation when the noise intensity reaches a critical value. By comparison, we find there is no connection between the two kinds of stochastic bifurcation.
The threshold of a stochastic delayed SIR epidemic model with vaccination
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing
2016-11-01
In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
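The qualitative effect of white noise on an SIR epidemic can be illustrated with a minimal Euler-Maruyama simulation. This sketch deliberately omits the paper's delay and vaccination terms, and the parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

def sir_em(beta, gamma, sigma, S0=0.99, I0=0.01, T=200.0, dt=0.01, seed=1):
    """Euler-Maruyama for a noise-perturbed SIR model (no demography, no delay):
       dS = -beta*S*I dt - sigma*S*I dW
       dI = (beta*S*I - gamma*I) dt + sigma*S*I dW
    Returns the path of the infected fraction I(t)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    S, I = S0, I0
    path = np.empty(n)
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dS = -beta * S * I * dt - sigma * S * I * dW
        dI = (beta * S * I - gamma * I) * dt + sigma * S * I * dW
        S, I = max(S + dS, 0.0), max(I + dI, 0.0)
        path[k] = I
    return path

# With R0 = beta/gamma = 2 the deterministic epidemic takes off; the noisy
# run shows how white noise perturbs (and can suppress) the outbreak, which
# is the intuition behind the noise-lowered stochastic threshold.
no_noise = sir_em(beta=0.5, gamma=0.25, sigma=0.0)
noisy = sir_em(beta=0.5, gamma=0.25, sigma=2.0)
```

The paper's rigorous result is an explicit threshold for the delayed model; the simulation above only conveys the direction of the noise effect.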
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
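A simple way to simulate a reflected diffusion is to take unconstrained Euler steps and fold any excursion back into the domain. This is an illustrative reflection-by-folding sketch for Brownian motion on an interval, not the paper's latent-path augmentation scheme; domain and step size are assumed.

```python
import numpy as np

def reflect(x, lo=0.0, hi=1.0):
    """Fold a point back into [lo, hi] (billiard-style reflection)."""
    width = hi - lo
    u = (x - lo) % (2 * width)
    return lo + (width - abs(u - width))

def reflected_bm(n=10000, dt=1e-3, lo=0.0, hi=1.0, x0=0.5, seed=0):
    """Euler scheme for Brownian motion constrained to [lo, hi]:
    take an unconstrained Gaussian step, then reflect it into the domain."""
    rng = np.random.default_rng(seed)
    path = np.empty(n)
    x = x0
    for k in range(n):
        x = reflect(x + rng.normal(0.0, np.sqrt(dt)), lo, hi)
        path[k] = x
    return path

path = reflected_bm()
# Every point of the path stays inside the "shoreline" barriers [0, 1].
```

For points already inside the interval the fold is the identity, so only boundary-crossing steps are modified.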
Mejlholm, Ole; Bøknæs, Niels; Dalgaard, Paw
2015-02-01
A new stochastic model for the simultaneous growth of Listeria monocytogenes and lactic acid bacteria (LAB) was developed and validated on data from naturally contaminated samples of cold-smoked Greenland halibut (CSGH) and cold-smoked salmon (CSS). During industrial processing, acetic and/or lactic acids were added to these samples. The stochastic model was developed from an existing deterministic model including the effect of 12 environmental parameters and microbial interaction (O. Mejlholm and P. Dalgaard, Food Microbiology, submitted for publication). Observed maximum population density (MPD) values of L. monocytogenes in naturally contaminated samples of CSGH and CSS were accurately predicted by the stochastic model based on measured variability in product characteristics and storage conditions. Results comparable to those from the stochastic model were obtained when product characteristics of the least and most preserved samples of CSGH and CSS were used as input for the existing deterministic model. For both modelling approaches, it was shown that lag time and the effect of microbial interaction need to be included to accurately predict MPD values of L. monocytogenes. Addition of organic acids to CSGH and CSS was confirmed as a suitable mitigation strategy against the risk of growth of L. monocytogenes, as both types of products were in compliance with the EU regulation on ready-to-eat foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Hybrid Stochastic-Neuro-Fuzzy Model-Based System for In-Flight Gas Turbine Engine Diagnostics
2001-04-05
Margin (ADM) and (ii) Fault Detection Margin (FDM). Key words: ANFIS, engine health monitoring, gas path analysis, and stochastic analysis. Adaptive Network… The paper illustrates the application of a hybrid Stochastic-Fuzzy-Inference Model-Based System (StoFIS) to fault diagnostics and prognostics for both… operational history monitored on-line by the engine health management (EHM) system. To capture the complex functional relationships between different…
Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.
2018-02-01
In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via maximum likelihood estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granita, E-mail: granitafc@gmail.com; Bahar, A.
This paper discusses the approximation of the linear birth and death with immigration and emigration (BIDE) process by a stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution and the mean and variance functions of the BIDE process were found.
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded material (FGM) plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and is found to be computationally efficient compared with conventional Monte Carlo simulation.
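The Latin hypercube sampling used to build the ANN training set can be sketched as follows. The nominal material values and the ±10% ranges below are hypothetical placeholders for illustration, not the paper's inputs.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Basic Latin hypercube sample on [0, 1]^d: stratify each dimension
    into n_samples equal bins, draw one point per bin, then shuffle the
    bin order independently per dimension."""
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])          # in-place shuffle of column d
    return u

# Hypothetical nominal inputs (E, G, Poisson's ratio, density) mapped
# from the unit hypercube to assumed +/-10% ranges around the nominal.
nominal = np.array([70e9, 26e9, 0.3, 2700.0])
u = latin_hypercube(100, 4, rng=42)
samples = nominal * (0.9 + 0.2 * u)
```

Each of the 100 bins per dimension receives exactly one sample, which is why LHS covers the input space with far fewer samples than crude Monte Carlo.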
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Inducing Tropical Cyclones to Undergo Brownian Motion
NASA Astrophysics Data System (ADS)
Hodyss, D.; McLay, J.; Moskaitis, J.; Serra, E.
2014-12-01
Stochastic parameterization has become commonplace in numerical weather prediction (NWP) models used for probabilistic prediction. Here, a specific stochastic parameterization will be related to the theory of stochastic differential equations and shown to be affected strongly by the choice of stochastic calculus. From an NWP perspective our focus will be on ameliorating a common trait of the ensemble distributions of tropical cyclone (TC) tracks (or position), namely that they generally contain a bias and an underestimate of the variance. With this trait in mind we present a stochastic track variance inflation parameterization. This parameterization makes use of a properly constructed stochastic advection term that follows a TC and induces its position to undergo Brownian motion. A central characteristic of Brownian motion is that its variance increases with time, which allows for an effective inflation of an ensemble's TC track variance. Using this stochastic parameterization we present a comparison of the behavior of TCs from the perspective of the stochastic calculi of Itô and Stratonovich within an operational NWP model. The central difference between these two perspectives as pertains to TCs is shown to be properly predicted by the stochastic calculus and the Itô correction. In the cases presented here these differences will manifest as overly intense TCs, which, depending on the strength of the forcing, could lead to problems with numerical stability and physical realism.
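The variance-growth property that the track-inflation parameterization exploits can be shown with a minimal ensemble experiment: independent Brownian increments make the ensemble position variance grow linearly in time. The diffusion coefficient and step count below are illustrative assumptions, not the operational NWP settings.

```python
import numpy as np

# Ensemble of 1-D Brownian "track" perturbations: each member receives an
# independent increment sqrt(2*D*dt)*xi per step, so the ensemble position
# variance grows linearly, Var(t) = 2*D*t.
rng = np.random.default_rng(7)
n_members, n_steps, dt, D = 5000, 200, 0.1, 1.5
incr = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_members, n_steps))
positions = np.cumsum(incr, axis=1)        # each row is one member's track
var = positions.var(axis=0)                # ensemble variance at each time

t = dt * np.arange(1, n_steps + 1)
# var tracks the 2*D*t line up to sampling error, which is the mechanism
# used to inflate an under-dispersive ensemble's TC track variance.
```

Choosing D then sets the inflation rate, which is the single tunable knob of such a Brownian-motion-based scheme.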
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin
2017-03-01
A novel approach for integrating a pseudo-two-dimensional electrochemical thermal (P2D-ECT) model and a data assimilation algorithm is presented for lithium-ion cell state estimation. This approach refrains from making any simplifications in the P2D-ECT model while making it amenable to online state estimation. Though the model is deterministic, uncertainty in the initial states induces stochasticity in the P2D-ECT model. This stochasticity is resolved by spectrally projecting the stochastic P2D-ECT model on a set of orthogonal multivariate Hermite polynomials. Volume averaging in the stochastic dimensions is proposed for efficient numerical solution of the resultant model. A state estimation framework is developed using a transformation of the orthogonal basis to assimilate the measurables with this system of equations. Effectiveness of the proposed method is first demonstrated by assimilating the cell voltage and temperature data generated using a synthetic test bed. This validated method is used with the experimentally observed cell voltage and temperature data for state estimation at different operating conditions and drive cycle protocols. The results show increased prediction accuracy when the data is assimilated every 30 s. The high accuracy of the estimated states is exploited to infer the temperature-dependent behavior of the lithium-ion cell.
Stochastic simulation of the spray formation assisted by a high pressure
NASA Astrophysics Data System (ADS)
Gorokhovski, M.; Chtab-Desportes, A.; Voloshina, I.; Askarova, A.
2010-03-01
The stochastic model of spray formation in the vicinity of the injector and in the far field has been described and assessed by comparison with measurements in Diesel-like conditions. In the proposed mesh-free approach, the 3D configuration of the continuous liquid core is simulated stochastically by an ensemble of spatial trajectories of specifically introduced stochastic particles. The parameters of the stochastic process are presumed from the physics of primary atomization. The spray formation model consists in computing the spatial distribution of the probability of finding the non-fragmented liquid jet in the near-to-injector region. This model is combined with the KIVA II computation of an atomizing Diesel spray in two ways. First, simultaneously with the gas phase RANS computation, the ensemble of stochastic particles is tracked and the probability field of their positions is calculated, which is used for sampling the initial locations of primary blobs. Second, the velocity increment of the gas due to the liquid injection is computed from the mean volume fraction of the simulated liquid core. Two novelties are proposed in the secondary atomization modeling. The first is due to the unsteadiness of the injection velocity: when the injection velocity increment in time is decreasing, supplementary breakup may be induced, so the critical Weber number is based on this increment. Second, a new stochastic model of the secondary atomization is proposed, in which intermittent turbulent stretching is taken into account as the main mechanism. The measurements reported by Arcoumanis et al. (the time history of the mean axial centre-line droplet velocity and of the centre-line Sauter mean diameter) are compared with the computations.
Option pricing, stochastic volatility, singular dynamics and constrained path integrals
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Hojman, Sergio A.
2014-01-01
Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter ρ which relates the two Brownian motions that drive the stochastic dynamics associated to the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21] while in Baaquie (2007,1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for extreme cases ρ=±1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for extreme values ρ=±1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac’s method for constrained systems (Dirac, 1958,1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets an Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral.
Generalised filtering and stochastic DCM for fMRI.
Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E; Penny, Will; Hu, Dewen; Friston, Karl
2011-09-15
This paper is about the fitting or inversion of dynamic causal models (DCMs) of fMRI time series. It tries to establish the validity of stochastic DCMs that accommodate random fluctuations in hidden neuronal and physiological states. We compare and contrast deterministic and stochastic DCMs, which do and do not ignore random fluctuations or noise on hidden states. We then compare stochastic DCMs, which do and do not ignore conditional dependence between hidden states and model parameters (generalised filtering and dynamic expectation maximisation, respectively). We first characterise state-noise by comparing the log evidence of models with different a priori assumptions about its amplitude, form and smoothness. Face validity of the inversion scheme is then established using data simulated with and without state-noise to ensure that DCM can identify the parameters and model that generated the data. Finally, we address construct validity using real data from an fMRI study of internet addiction. Our analyses suggest the following. (i) The inversion of stochastic causal models is feasible, given typical fMRI data. (ii) State-noise has nontrivial amplitude and smoothness. (iii) Stochastic DCM has face validity, in the sense that Bayesian model comparison can distinguish between data that have been generated with high and low levels of physiological noise and model inversion provides veridical estimates of effective connectivity. (iv) Relaxing conditional independence assumptions can have greater construct validity, in terms of revealing group differences not disclosed by variational schemes. Finally, we note that the ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated. Copyright © 2011. Published by Elsevier Inc.
Stochastic Models of Quality Control on Test Misgrading.
ERIC Educational Resources Information Center
Wang, Jianjun
Stochastic models are developed in this article to examine the rate of test misgrading in educational and psychological measurement. The estimation of inadvertent grading errors can serve as a basis for quality control in measurement. Limitations of traditional Poisson models have been reviewed to highlight the need to introduce new models using…
The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.
Niu, Yuanling; Wang, Yue; Zhou, Da
2015-12-07
The phenotypic equilibrium, i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology very recently. In the previous literature, theoretical models were used to predict the experimental phenomena of the phenotypic equilibrium, which were often explained by different concepts of stability of the models. Here we present a stochastic multi-phenotype branching model by integrating the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and it thus harmonizes the different kinds of average-level stability proposed in these models; and (ii) path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; the average-level stability follows from it by averaging stochastic samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
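Gillespie's SSA, which the time-ordered product expansion rederives, can be stated compactly in code. The birth-death network and rate constants below are a standard textbook example chosen for illustration, not taken from the paper.

```python
import random

def gillespie_ssa(x0, stoich, propensity, t_max, seed=0):
    """Gillespie's direct-method SSA for a one-species reaction network.
    propensity(x) returns the list of reaction rates a_j(x); stoich[j]
    is the state change caused by reaction j."""
    rng = random.Random(seed)
    t, x, history = 0.0, x0, [(0.0, x0)]
    while t < t_max:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0:
            break                            # no reaction can fire
        t += rng.expovariate(a0)             # exponential waiting time
        r, j, acc = rng.random() * a0, 0, a[0]
        while acc < r:                       # choose reaction j with prob a_j/a0
            j += 1
            acc += a[j]
        x += stoich[j]
        history.append((t, x))
    return history

# Birth-death network: 0 -> A at rate k_b, A -> 0 at rate k_d * x.
# Its stationary distribution is Poisson with mean k_b / k_d = 20.
k_b, k_d = 10.0, 0.5
hist = gillespie_ssa(x0=0, stoich=[+1, -1],
                     propensity=lambda x: [k_b, k_d * x],
                     t_max=2000.0, seed=3)

# Time-weighted average copy number after a burn-in period:
burn = [(t, x) for t, x in hist if t > 100.0]
avg = sum(x * (t2 - t1) for (t1, x), (t2, _) in zip(burn, burn[1:])) \
      / (burn[-1][0] - burn[0][0])
```

Each event of this trajectory corresponds, in the paper's language, to one vertex of a Feynman-diagram term in the time-ordered expansion.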
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
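The first-order (single-base) case of the construction above can be checked in a few lines: build the 4×4 matrix of Hamming distances between the 2-bit Gray codes and verify that, after normalisation, it is symmetric and doubly stochastic. This is only a sketch of the base case; the paper iterates the construction to longer sequences.

```python
import numpy as np

# Gray-code assignment from the abstract: C=00, U=10, G=11, A=01.
codes = {"C": (0, 0), "U": (1, 0), "G": (1, 1), "A": (0, 1)}
bases = "CUGA"

def hamming(a, b):
    """Number of positions at which two bit tuples differ."""
    return sum(x != y for x, y in zip(a, b))

# Matrix of pairwise Hamming distances between the base codes.
H = np.array([[hamming(codes[r], codes[c]) for c in bases] for r in bases],
             dtype=float)

# Every row (and column) contains the distances {0, 1, 1, 2}, summing to 4,
# so H/4 has all rows and columns summing to 1: symmetric, doubly stochastic.
D = H / H[0].sum()
```

The same row-sum argument extends to the 4^n × 4^n matrices for length-n sequences, since each bit position contributes distances independently.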
Stochastic hybrid systems for studying biochemical processes.
Singh, Abhyudai; Hespanha, João P
2010-11-13
Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Statement Verification: A Stochastic Model of Judgment and Response.
ERIC Educational Resources Information Center
Wallsten, Thomas S.; Gonzalez-Vallejo, Claudia
1994-01-01
A stochastic judgment model (SJM) is presented as a framework for addressing issues in statement verification and probability judgment. Results of 5 experiments with 264 undergraduates support the validity of the model and provide new information that is interpreted in terms of the SJM. (SLD)
A stochastic method for stand-alone photovoltaic system sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio
Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze the parameters involved in the sizing of photovoltaic generators and to develop a methodology for sizing stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the proposed methodology, stochastic analyses including the Markov chain and the beta probability density function were studied. The obtained results were compared with those from the deterministic Sandia method for sizing stand-alone systems; the stochastic model produced more reliable values. Both models present advantages and disadvantages; however, the stochastic one is more complex and provides more reliable and realistic results.
Deterministic and stochastic bifurcations in the Hindmarsh-Rose neuronal model
NASA Astrophysics Data System (ADS)
Dtchetgnia Djeundam, S. R.; Yamapi, R.; Kofane, T. C.; Aziz-Alaoui, M. A.
2013-09-01
We analyze the bifurcations occurring in the 3D Hindmarsh-Rose neuronal model with and without a random signal. Under a sufficient stimulus, neuronal activity takes place, and we observe various types of bifurcations that lead to chaotic transitions. Besides the equilibrium solutions and their stability, we also investigate the deterministic bifurcations. It appears that the neuronal activity consists of chaotic transitions between two periodic phases called bursting and spiking solutions. The stochastic bifurcation, defined as a sudden change in the character of a stochastic attractor when the bifurcation parameter of the system passes through a critical value, or, under certain conditions, as the collision of a stochastic attractor with a stochastic saddle, occurs when a random Gaussian signal is added. Our study reveals two kinds of stochastic bifurcation: the phenomenological bifurcation (P-bifurcation) and the dynamical bifurcation (D-bifurcation). An asymptotic method is used to analyze the phenomenological bifurcation. We find that the neuronal activity of spiking and bursting chaos remains for finite values of the noise intensity.
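A minimal Euler-Maruyama integration of the Hindmarsh-Rose model shows the bursting/spiking activity being analyzed. The parameter values are the common textbook choices and the additive noise on x is an illustrative stand-in for the paper's Gaussian random signal, not its exact formulation.

```python
import numpy as np

def hindmarsh_rose(I=3.2, noise=0.0, T=500.0, dt=0.01, seed=0):
    """Euler-Maruyama for the 3D Hindmarsh-Rose model
       dx = (y - a*x^3 + b*x^2 - z + I) dt + noise dW
       dy = (c - d*x^2 - y) dt
       dz = r*(s*(x - xr) - z) dt
    with standard parameters a=1, b=3, c=1, d=5, r=0.006, s=4, xr=-1.6."""
    a, b, c, d, r, s, xr = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y, z = -1.0, 0.0, 2.0
    xs = np.empty(n)
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x, y, z = (x + (y - a * x**3 + b * x**2 - z + I) * dt + noise * dW,
                   y + (c - d * x**2 - y) * dt,
                   z + r * (s * (x - xr) - z) * dt)
        xs[k] = x
    return xs

# Deterministic run: with I = 3.2 the membrane variable x exhibits the
# chaotic bursting/spiking transitions described in the abstract.
xs = hindmarsh_rose(I=3.2, noise=0.0)
```

Raising `noise` above zero allows one to inspect how the stationary density of x deforms, the kind of qualitative change used to diagnose the P-bifurcation.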
NASA Astrophysics Data System (ADS)
Hozman, J.; Tichý, T.
2017-12-01
Stochastic volatility models capture the real-world features of options better than the classical Black-Scholes treatment. Here we focus on pricing European-style options under the Stein-Stein stochastic volatility model, where the option value depends on time, on the price of the underlying asset and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
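The volatility driver in the Stein-Stein model is a mean-reverting Ornstein-Uhlenbeck process, which is easy to simulate and check. The parameter values below are illustrative assumptions, not calibrated market values.

```python
import numpy as np

def ou_path(kappa=4.0, theta=0.2, xi=0.3, v0=0.5, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama path of the Ornstein-Uhlenbeck process
       dv = kappa*(theta - v) dt + xi dW,
    the mean-reverting volatility driver of the Stein-Stein model."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.empty(n + 1)
    v[0] = v0
    for k in range(n):
        v[k + 1] = (v[k] + kappa * (theta - v[k]) * dt
                    + xi * np.sqrt(dt) * rng.normal())
    return v

v = ou_path()
# The path reverts from v0 = 0.5 toward the long-run mean theta = 0.2;
# its stationary standard deviation is xi / sqrt(2*kappa).
```

In the pricing PDE this process supplies the second spatial variable; the degeneracy the abstract mentions arises where its diffusion term interacts with the asset-price direction.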
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics methods underestimates the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at the extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The backpropagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
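The idea of re-calibrating a stochastic model from a short memory of past data can be sketched numerically. The sketch below replaces the paper's learning neural network with a plain rolling re-estimation of GBM drift and volatility over a 10-day window; all names and parameter values are illustrative, not from the paper.

```python
import numpy as np

def rolling_calibration(prices, window=10):
    """Re-estimate GBM drift/volatility each day from the last `window`
    log-returns -- a crude stand-in for a dynamically learned calibration."""
    logret = np.diff(np.log(prices))
    mus, sigmas = [], []
    for t in range(window, len(logret) + 1):
        chunk = logret[t - window:t]
        mus.append(chunk.mean())            # local drift estimate
        sigmas.append(chunk.std(ddof=1))    # local volatility estimate
    return np.array(mus), np.array(sigmas)

# Synthetic price path standing in for an exchange-rate series.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 250)))
mu_t, sigma_t = rolling_calibration(prices)
```

Each `(mu_t[k], sigma_t[k])` pair would parameterize the stochastic model used for the next-day forecast at step `k`; a neural network, as in the paper, would instead learn the update of these parameters.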
Approximation methods of European option pricing in multiscale stochastic volatility model
NASA Astrophysics Data System (ADS)
Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions have suggested that the constant volatility assumption might not be realistic. General stochastic volatility models, e.g. the Heston, GARCH and SABR models, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior in terms of capturing the empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate respectively. We consider the European option pricing problem under one type of the multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximating analytical solution using an asymptotic expansion method which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition we propose a numerical approximating solution using Monte-Carlo simulation. For completeness and for comparison we also implement the semi-analytical solution by Chiarella and Ziveyi [11] using the method of characteristics, Fourier and bivariate Laplace transforms.
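A Monte-Carlo approximation of this kind can be sketched as follows: a full-truncation Euler scheme with two independent mean-reverting variance factors, one fast and one slow. This is a hedged toy implementation; the parameter values and discretization choices are illustrative, not those of the paper.

```python
import numpy as np

def mc_call_double_vol(S0=100.0, K=100.0, T=1.0, r=0.02,
                       v0=(0.04, 0.04), kappa=(5.0, 0.5),
                       theta=(0.04, 0.04), xi=(0.3, 0.2),
                       n_paths=20000, n_steps=100, seed=1):
    """European call under a two-factor stochastic-volatility model,
    priced by full-truncation Euler Monte Carlo.  The two variance
    factors mean-revert at a fast (kappa[0]) and slow (kappa[1]) rate
    and are driven by independent Brownian motions."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    V = [np.full(n_paths, v0[0]), np.full(n_paths, v0[1])]
    for _ in range(n_steps):
        var = sum(np.maximum(v, 0.0) for v in V)   # total instantaneous variance
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * var) * dt + np.sqrt(var * dt) * z)
        for i in range(2):                          # full-truncation factor update
            vp = np.maximum(V[i], 0.0)
            V[i] = V[i] + kappa[i] * (theta[i] - vp) * dt \
                   + xi[i] * np.sqrt(vp * dt) * rng.standard_normal(n_paths)
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

price = mc_call_double_vol()
```

With total initial variance 0.08 (about 28% volatility), the at-the-money price lands near its constant-volatility Black-Scholes counterpart, with the two factors adding smile curvature away from the money.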
Optimal Stochastic Modeling and Control of Flexible Structures
1988-09-01
1.37] and McLane [1.18] considered multivariable systems and derived their optimal control characteristics. Kleinman, Gorman and Zaborsky considered...Leondes [1.72,1.73] studied various aspects of multivariable linear stochastic, discrete-time systems that are partly deterministic, and partly stochastic...June 1966. 1.8. A.V. Balakrishnan, Applied Functional Analysis, 2nd ed., New York, N.Y.: Springer-Verlag, 1981 1.9. Peter S. Maybeck, Stochastic
Multivariate moment closure techniques for stochastic kinetic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.
2015-09-07
Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, there is an interplay between the nonlinearities and the stochastic dynamics which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
A Bayesian estimation of a stochastic predator-prey model of economic fluctuations
NASA Astrophysics Data System (ADS)
Dibeh, Ghassan; Luchinsky, Dmitry G.; Luchinskaya, Daria D.; Smelyanskiy, Vadim N.
2007-06-01
In this paper, we develop a Bayesian framework for the empirical estimation of the parameters of one of the best known nonlinear models of the business cycle: The Marx-inspired model of a growth cycle introduced by R. M. Goodwin. The model predicts a series of closed cycles representing the dynamics of labor's share and the employment rate in the capitalist economy. The Bayesian framework is used to empirically estimate a modified Goodwin model. The original model is extended in two ways. First, we allow for exogenous periodic variations of the otherwise steady growth rates of the labor force and productivity per worker. Second, we allow for stochastic variations of those parameters. The resultant modified Goodwin model is a stochastic predator-prey model with periodic forcing. The model is then estimated using a newly developed Bayesian estimation method on data sets representing growth cycles in France and Italy during the years 1960-2005. Results show that inference of the parameters of the stochastic Goodwin model can be achieved. The comparison of the dynamics of the Goodwin model with the inferred values of parameters demonstrates quantitative agreement with the growth cycle empirical data.
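A stochastic predator-prey model with periodic forcing of this general shape can be simulated with an Euler-Maruyama scheme. The sketch below uses a generic Lotka-Volterra form for labour share u and employment rate v, with illustrative coefficients rather than the calibrated Goodwin parameters inferred in the paper.

```python
import numpy as np

def goodwin_em(u0=0.8, v0=0.9, T=50.0, dt=0.01,
               a=0.5, b=0.6, c=0.5, d=0.6,
               eps=0.05, omega=0.5, sigma=0.02, seed=3):
    """Euler-Maruyama for a Lotka-Volterra-type growth cycle with a
    periodic perturbation of the growth rate (exogenous forcing) and
    multiplicative noise (stochastic parameter variation).  All
    coefficients are illustrative toy values."""
    n = round(T / dt)
    rng = np.random.default_rng(seed)
    u = np.empty(n + 1); v = np.empty(n + 1)
    u[0], v[0] = u0, v0
    for k in range(n):
        t = k * dt
        forcing = eps * np.sin(omega * t)          # periodic growth-rate variation
        du = u[k] * (-a + b * v[k]) * dt           # labour-share dynamics
        dv = v[k] * (c + forcing - d * u[k]) * dt  # employment-rate dynamics
        dw = rng.normal(0.0, np.sqrt(dt), 2)       # Brownian increments
        u[k + 1] = u[k] + du + sigma * u[k] * dw[0]
        v[k + 1] = v[k] + dv + sigma * v[k] * dw[1]
    return u, v

u, v = goodwin_em()
```

Starting near the deterministic equilibrium (c/d, a/b), the trajectory traces noisy closed cycles; Bayesian inference, as in the paper, would then estimate the drift and noise parameters from such a trajectory.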
The Impact of STTP on the GEFS Forecast of Week-2 and Beyond in the Presence of Stochastic Physics
NASA Astrophysics Data System (ADS)
Hou, D.
2015-12-01
The Stochastic Total Tendency Perturbation (STTP) scheme was designed to represent model-related uncertainties not considered in the numerical model itself or in the physics-based stochastic schemes. It has been applied in NCEP's Global Ensemble Forecast System (GEFS) since 2010, showing significant positive impacts on the forecast with improved spread-error ratio and probabilistic forecast skill. The scheme is robust, and it survived the resolution increases and model improvements in 2012 and 2015 with minimal changes. Recently, a set of stochastic physics schemes was coded into the Global Forecast System model and tested in the GEFS package. With these schemes turned on and STTP off, the forecast performance is comparable or even superior to the operational GEFS, in which STTP is the only contributor to the model-related uncertainties. This is true especially in week one. However, over the second week and beyond, both the experimental and the operational GEFS have insufficient spread, especially in the warmer seasons. This is a major challenge when the GEFS is extended to sub-seasonal (week 4-6) time scales. The impact of STTP on the GEFS forecast in the presence of stochastic physics is investigated by turning both the stochastic physics schemes and STTP on and carefully tuning their amplitudes. Analysis will focus on the forecast at extended range, especially week 2. Impacts on weeks 3-4 will also be addressed.
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
Random graph models for dynamic networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.
2017-10-01
Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
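The continuous-time Markov edge dynamics can be illustrated with a small discrete-time approximation: each absent edge appears at rate λ and each present edge disappears at rate μ, giving an equilibrium edge probability of λ/(λ+μ). The rates and network size below are illustrative.

```python
import numpy as np

def dynamic_graph_density(n=50, lam=0.2, mu=0.8, T=200.0, dt=0.05, seed=7):
    """Simulate the edge density of a dynamic random graph in which
    edges appear at rate lam and vanish at rate mu, using small time
    steps dt to approximate the continuous-time Markov process."""
    rng = np.random.default_rng(seed)
    m = n * (n - 1) // 2                    # number of possible edges
    edges = np.zeros(m, dtype=bool)
    dens = np.empty(round(T / dt))
    for k in range(dens.size):
        u = rng.random(m)
        appear = ~edges & (u < lam * dt)    # absent edges appearing
        vanish = edges & (u < mu * dt)      # present edges disappearing
        edges = (edges | appear) & ~vanish
        dens[k] = edges.mean()
    return dens

dens = dynamic_graph_density()
equilibrium = 0.2 / (0.2 + 0.8)             # lam / (lam + mu) = 0.2
```

After a relaxation time of order 1/(λ+μ), the density fluctuates around the equilibrium value; fitting such rate parameters from observed snapshots is exactly the maximum-likelihood task the paper addresses.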
USDA-ARS?s Scientific Manuscript database
To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...
NASA Astrophysics Data System (ADS)
Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra
2016-02-01
In this paper, two deterministic and stochastic multilateral, multi-issue, non-cooperative bargaining methodologies are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties of some key parameters of SWMM are analyzed using the info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on utility functions of different stakeholders. Then, to find the best solution, the bargaining model is performed considering a combination of robustness and opportuneness criteria for each scenario based on utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model considering the robustness and opportuneness criteria. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, could provide more reliable results.
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1975-01-01
This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
Oscillatory regulation of Hes1: Discrete stochastic delay modelling and simulation.
Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai
2006-09-08
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
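A bare-bones delayed stochastic simulation in this spirit can be written for a toy birth-death model: production events initiate at a constant rate but the molecule appears only after a fixed delay τ, mimicking transcription/translation delay. The rates and delay below are illustrative, not the hes1/Hes1 parameters of the paper, and the completion handling is the common "restart" variant.

```python
import heapq
import random

def delayed_ssa(k=20.0, d=1.0, tau=5.0, t_end=200.0, seed=11):
    """Delay SSA for: production initiated at rate k, completed tau time
    units later; degradation is Markovian at rate d*M.  Scheduled
    completions are stored in a min-heap of completion times."""
    random.seed(seed)
    t, M = 0.0, 0
    pending = []                         # completion times of delayed births
    trace = []
    while t < t_end:
        a = k + d * M                    # total propensity
        dt = random.expovariate(a)
        if pending and pending[0] <= t + dt:
            # a delayed completion fires before the tentative reaction
            t = heapq.heappop(pending)
            M += 1
        else:
            t += dt
            if random.random() < k / a:
                heapq.heappush(pending, t + tau)   # schedule delayed birth
            else:
                M -= 1                   # degradation (propensity 0 when M = 0)
        trace.append((t, M))
    return trace

trace = delayed_ssa()
```

For this linear system the stationary mean is k/d regardless of the delay; the delay instead shapes the autocorrelation, which is what produces sustained oscillations in feedback models such as Hes1.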
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well mixed stochastic simulators, and/or hybrid methods. But, in fact, three dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
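The tau-leaping idea with binomially distributed firing numbers can be illustrated on the simplest possible reaction, A → ∅. This shows only the core leap step, not the spatial next-subvolume machinery of the paper, and the rate constants are illustrative.

```python
import numpy as np

def binomial_tau_leap_decay(A0=1000, c=0.5, tau=0.05, t_end=10.0, seed=9):
    """Binomial tau-leaping for the decay reaction A -> 0: in each leap
    the number of firings is Binomial(A, 1 - exp(-c*tau)).  Unlike a
    Poisson leap, the binomial draw can never exceed the current
    population, so A can never go negative."""
    rng = np.random.default_rng(seed)
    p = 1.0 - np.exp(-c * tau)     # per-molecule decay probability in one leap
    A, t = A0, 0.0
    traj = [(t, A)]
    while t < t_end and A > 0:
        A -= rng.binomial(A, p)
        t += tau
        traj.append((t, A))
    return traj

traj = binomial_tau_leap_decay()
```

For this linear reaction the binomial leap is exact in distribution, so the mean population tracks A0·exp(-c·t); the efficiency gain over exact SSA comes from firing many events per leap.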
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.
Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert
2016-01-01
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
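The core mechanism, a Bernoulli mask over connections at every presentation, can be sketched as a DropConnect-style blank-out forward pass. This is only the sampling step; the event-driven contrastive divergence training of S2Ms is not reproduced, and all shapes and values below are illustrative.

```python
import numpy as np

def stochastic_synapse_forward(x, W, p=0.5, rng=None):
    """Forward pass in which each synapse transmits with probability p
    (a fresh Bernoulli mask over the weight matrix on every call).
    Scaling by 1/p keeps the expected drive equal to W @ x."""
    rng = rng or np.random.default_rng()
    mask = rng.random(W.shape) < p
    return (W * mask) @ x / p

rng = np.random.default_rng(5)
W = rng.normal(size=(8, 100))       # 8 neurons, 100 unreliable synapses each
x = rng.normal(size=100)
samples = np.stack([stochastic_synapse_forward(x, W, 0.5, rng)
                    for _ in range(2000)])
```

Averaged over many presentations the stochastic drive matches the deterministic one, while the per-presentation variability is what the S2M exploits as a Monte Carlo sampling mechanism and as an implicit regularizer.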
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
A non-linear dimension reduction methodology for generating data-driven stochastic input models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A is contained in R^d (d<
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
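A generic ALOPEX-style update, in which each parameter continues moving in the direction correlated with the previous cost decrease plus annealed noise, can be sketched on a toy quadratic objective. This is a hedged sketch of the general ALOPEX idea, not the ALOPEX IV variant or the heart-rate model used in the study; all constants are illustrative.

```python
import numpy as np

def alopex_minimize(f, x0, gamma=0.01, sigma0=0.1, iters=3000, seed=2):
    """Bare-bones ALOPEX-style stochastic optimisation: the update of
    each parameter is correlated with the sign of (previous step) x
    (previous cost change), plus slowly annealed Gaussian noise.  Only
    the scalar cost is broadcast to all parameters."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    f_prev = f(x)
    dx = rng.normal(0.0, sigma0, x.size)       # initial random move
    for k in range(iters):
        x_new = x + dx
        f_new = f(x_new)
        # continue in directions that decreased the cost, reverse otherwise
        corr = -np.sign(dx * (f_new - f_prev))
        sigma = sigma0 / (1.0 + k / 500.0)     # simple noise annealing
        dx = gamma * corr + rng.normal(0.0, sigma, x.size)
        x, f_prev = x_new, f_new
    return x, f_prev

x_opt, f_opt = alopex_minimize(lambda x: np.sum((x - 3.0) ** 2),
                               np.zeros(4))
```

In the paper's setting, `f` would be the misfit between the dynamical heart-rate model and the raw time series, and the returned parameters would characterize the athlete's condition.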
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
NASA Astrophysics Data System (ADS)
Burgos, C.; Cortés, J.-C.; Shaikhet, L.; Villanueva, R.-J.
2018-11-01
First, we propose a deterministic age-structured epidemiological model to study the diffusion of e-commerce in Spain. Afterwards, we determine the parameters (death, birth and growth rates) of the underlying demographic model as well as the parameters (transmission of the use of e-commerce rates) of the proposed epidemiological model that best fit real data retrieved from the Spanish National Statistical Institute. Motivated by the two following facts: first the dynamics of acquiring the use of a new technology as e-commerce is mainly driven by the feedback after interacting with our peers (family, friends, mates, mass media, etc.), hence having a certain delay, and second the inherent uncertainty of sampled real data and the social complexity of the phenomena under analysis, we introduce aftereffect and stochastic perturbations in the initial deterministic model. This leads to a delayed stochastic model for e-commerce. We then investigate sufficient conditions in order to guarantee the stability in probability of the equilibrium point of the dynamic e-commerce delayed stochastic model. Our theoretical findings are numerically illustrated using real data.
Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs
NASA Astrophysics Data System (ADS)
Harvey, David Benjamin Paul
A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
Schwarz, Friedrich W.; van Aelst, Kara; Tóth, Júlia; Seidel, Ralf; Szczelkun, Mark D.
2011-01-01
DNA cleavage by the Type III Restriction–Modification enzymes requires communication in 1D between two distant, indirectly repeated recognition sites, yet results in non-specific dsDNA cleavage close to only one of the two sites. To test a recently proposed ATP-triggered DNA sliding model, we addressed why one site is selected over the other during cleavage. We examined the relative cleavage of a pair of identical sites on DNA substrates with different distances to a free or protein-blocked end, and on a DNA substrate using different relative concentrations of protein. Under these conditions a bias can be induced in the cleavage of one site over the other. Monte Carlo simulations based on the sliding model reproduce the experimentally observed behaviour. This suggests that cleavage site selection simply reflects the dynamics of the preceding stochastic enzyme events, consistent with bidirectional motion in 1D and DNA cleavage following head-on protein collision. PMID:21724613
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models; the parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation; detection of neoclassical tearing modes, including locked mode precursors; automatic clustering of modes; and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
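The core idea, that frequencies fall out as eigenvalues of a small time-series model fitted to a short data window, can be illustrated with a minimal sketch (this is not the authors' SSI implementation; the sampling rate and mode frequency below are hypothetical):

```python
import numpy as np

# Minimal sketch: fit an AR(2) model to a short window of a sampled sinusoid
# and read the frequency off the eigenvalues (roots) of the model's
# companion matrix.
dt = 1e-3                      # sample period [s] (illustrative)
f_true = 50.0                  # mode frequency [Hz] (illustrative)
t = np.arange(2000) * dt
x = np.sin(2 * np.pi * f_true * t)

# Least-squares AR(2) fit: x[n] = a1*x[n-1] + a2*x[n-2]
X = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Companion-matrix eigenvalues are the roots of z^2 - a1*z - a2; for a pure
# tone they sit on the unit circle at angle 2*pi*f*dt.
roots = np.roots([1.0, -a1, -a2])
f_est = np.abs(np.angle(roots[0])) / (2 * np.pi * dt)
print(f_est)  # close to 50 Hz
```

No FFT block is involved, so the estimated frequency need not lie on an FFT bin grid, which is the property highlighted in the abstract.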
Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve large and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data. The advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used. The best combination of constraints found in experiments with multiple bodies appears to be flatness together with minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential.
Modelling generates a huge matrix of Green's functions, which is stored using a method of pyramidal compression. The third method consists of interpreting electrical potential measurements with a non-linear geostatistical approach that includes new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
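The flavor of such constrained linear inversion, a data-misfit term plus a regularizing constraint such as flatness, can be sketched in a few lines (this is a generic Tikhonov-style illustration, not the thesis code; the matrices and the weight lam are made up):

```python
import numpy as np

# Generic constrained linear inversion sketch:
#   minimize ||G m - d||^2 + lam * ||L m||^2
# where G is a forward (sensitivity) matrix, d the observed data, and L a
# first-difference operator enforcing a flatness constraint on the model m.
rng = np.random.default_rng(7)
n_data, n_model = 30, 50                      # underdetermined problem
G = rng.normal(size=(n_data, n_model))        # hypothetical sensitivities
m_true = np.convolve(rng.normal(size=n_model), np.ones(5) / 5, mode="same")
d = G @ m_true + rng.normal(0, 0.01, n_data)  # synthetic noisy data

L = np.diff(np.eye(n_model), axis=0)          # first-difference (flatness)
lam = 1.0                                     # constraint weight (arbitrary)
m_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)
print(np.linalg.norm(G @ m_est - d))          # data misfit of the solution
```

The constraint term is what selects one model out of the infinitely many that fit the data; swapping L (identity for minimum volume, second differences for smoothness) changes which model is recovered, which is the behaviour the thesis compares.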
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Using stochastic models to incorporate spatial and temporal variability [Exercise 14
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2009-01-01
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
1982-11-01
AD-A136 495: Return Difference Feedback Design for Robust Uncertainty Tolerance in Sto... (U). University of Southern California, Los Angeles, Dept. of Electrical Engineering. Subject terms: systems theory; control; feedback; automatic control. [OCR-damaged DTIC record; only these fragments are recoverable.]
Automated Flight Routing Using Stochastic Dynamic Programming
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en-route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed to calculate current and future airspace demand. The optimal routes minimize the total expected travel time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified severity level. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight times, and both spend about 1% of travel time crossing congested en-route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
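A toy version of the idea, expected cost-to-go computed by backward dynamic programming over probabilistic weather cells, can be sketched as follows (this is an illustrative grid model, not the study's algorithm; all costs and probabilities are made up):

```python
import numpy as np

# Toy stochastic DP rerouting sketch: an aircraft advances one column per
# stage on a grid, choosing which row to enter next. Each cell has a
# probability of convective weather (incurring a penalty in expectation),
# and row changes cost extra time. Backward value iteration gives the
# expected cost-to-go V from every cell.
rng = np.random.default_rng(2)
rows, cols = 5, 8
p_weather = rng.uniform(0, 0.6, (rows, cols))   # P(cell is blocked)
penalty, step_cost, turn_cost = 10.0, 1.0, 0.5  # illustrative costs

V = np.zeros(rows)                              # terminal values
policy = np.zeros((rows, cols - 1), dtype=int)  # best next row per cell
for c in range(cols - 2, -1, -1):
    V_new = np.empty(rows)
    for r in range(rows):
        # expected immediate cost of entering (r2, c+1), plus cost-to-go
        costs = [step_cost + turn_cost * abs(r2 - r)
                 + penalty * p_weather[r2, c + 1] + V[r2]
                 for r2 in range(rows)]
        policy[r, c] = int(np.argmin(costs))
        V_new[r] = min(costs)
    V = V_new
print(V)  # expected cost-to-go from each starting row
```

The weather term enters only in expectation (probability times penalty), which is how capacity uncertainty shapes the route without committing to a single weather scenario.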
Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo
2010-11-01
Novel bond-level molecular descriptors are proposed, based on linear maps similar to those defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is proposed here as a new molecular representation easily calculated from E(k). The kth stochastic bond linear indices are then calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability in QSPR studies by analyzing regressions of the novel indices against selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to well-known sets of 2D/3D MDs. From this analysis, we conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), the partition coefficient (log P), and the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies.
Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity/diversity analysis and drug discovery protocols.
Stochastic Processes in Physics: Deterministic Origins and Control
NASA Astrophysics Data System (ADS)
Demers, Jeffery
Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics, a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments, which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit, a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and to address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. 
Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.
2016-12-01
Extratropical cyclones (ETCs) are the primary driver of storm surge events along the UK and northwest mainland Europe coastlines. In an effort to evaluate the storm surge risk to coastal communities in this region, a stochastic catalog is developed by perturbing the historical storm seeds of European ETCs to account for 10,000 years of possible ETCs. Numerical simulation of the storm surge generated by the full 10,000-year stochastic catalog, however, is computationally expensive and may take several months to complete with available computational resources. A new statistical regression model is developed to select the major surge-generating events from the stochastic ETC catalog. This regression model is based on the maximum storm surge, obtained via numerical simulations using a calibrated version of the Delft3D-FM hydrodynamic model with a relatively coarse mesh, of 1750 historical ETC events that occurred over the past 38 years in Europe. These numerically simulated surge values were regressed against the local sea level pressure and the U and V components of the wind field at the locations of 196 tide gauge stations near the UK and northwest mainland Europe coastal areas. The regression model suggests that storm surge values in the area of interest are highly correlated with the U and V components of wind speed, as well as the sea level pressure. Based on these correlations, the regression model was then used to select surge-generating storms from the 10,000-year stochastic catalog. Results suggest that roughly 105,000 events out of 480,000 stochastic storms are surge-generating and need to be considered for numerical simulation with a hydrodynamic model. The selected stochastic storms were then simulated in Delft3D-FM, and the final refinement of the storm population was performed based on return period analysis of the 1750 historical event simulations at each of the 196 tide gauges, in preparation for Delft3D-FM fine mesh simulations.
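The screening regression described above amounts to an ordinary least-squares fit of peak surge against wind components and pressure. A minimal sketch on synthetic data (not the study's data; every coefficient and distribution below is invented for illustration) looks like:

```python
import numpy as np

# Illustrative sketch: regress maximum surge at one tide gauge on the U, V
# wind components and the sea-level-pressure anomaly, using synthetic events.
rng = np.random.default_rng(0)
n = 1750                                   # number of "historical" events
U = rng.normal(10, 5, n)                   # zonal wind [m/s] (made up)
V = rng.normal(5, 4, n)                    # meridional wind [m/s] (made up)
P = rng.normal(990, 15, n)                 # sea level pressure [hPa] (made up)
# Hypothetical linear relation plus noise (coefficients are invented):
surge = 0.04 * U + 0.03 * V - 0.01 * (P - 1013) + rng.normal(0, 0.05, n)

# Design matrix: [U, V, pressure anomaly, intercept]
A = np.column_stack([U, V, P - 1013, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, surge, rcond=None)
print(coef)  # ≈ [0.04, 0.03, -0.01, 0.0]
```

Once fitted, evaluating such a linear predictor on every stochastic storm is essentially free, which is what makes it usable as a screen before running the expensive hydrodynamic simulations.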
Asymptotic behavior of a stochastic delayed HIV-1 infection model with nonlinear incidence
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Hayat, Tasawar; Ahmad, Bashir
2017-11-01
In this paper, a stochastic delayed HIV-1 infection model with nonlinear incidence is proposed and investigated. First of all, we prove that there is a unique global positive solution, as desired in any population dynamics model. Then, by constructing suitable Lyapunov functions, we show that if the basic reproduction number R0 ≤ 1, the solution of the stochastic system oscillates around the infection-free equilibrium E0, while if R0 > 1, it fluctuates around the infective equilibrium E∗. Sufficient conditions for these results are established. Finally, we give some examples and a series of numerical simulations to illustrate the analytical results.
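The qualitative behaviour described, a stochastic solution that fluctuates around the deterministic equilibrium rather than converging to it, is easy to see on a much simpler scalar model (this is not the paper's HIV-1 model; stochastic logistic dynamics integrated by Euler–Maruyama, with invented parameters):

```python
import numpy as np

# Minimal sketch: dX = X(1 - X) dt + sigma * X dW. The deterministic system
# converges to X* = 1; the stochastic paths keep fluctuating around it.
rng = np.random.default_rng(3)
sigma, dt, n = 0.1, 0.01, 100_000
x = 0.5
xs = np.empty(n)
xi = rng.standard_normal(n)                # pre-drawn Brownian increments
for i in range(n):
    x += x * (1 - x) * dt + sigma * x * np.sqrt(dt) * xi[i]
    xs[i] = x
print(xs[n // 2:].mean())  # ≈ 1: the path oscillates around the equilibrium
```

The long-run average sits near the deterministic equilibrium while individual paths never settle, which is the kind of statement the Lyapunov-function analysis in the paper makes precise for the full delayed system.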
Multi-Topic Tracking Model for dynamic social network
NASA Astrophysics Data System (ADS)
Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun
2016-07-01
The topic tracking problem has attracted much attention in recent decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on dynamic Bayesian networks, namely the Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes the influence phenomenon, the selection phenomenon, the document generative process, and the evolution of textual topics into account. Specifically, in our MTTD model, a Gibbs random field is defined to model the influence of the historical status of users in the network and the interdependency between them, in order to capture the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interest in topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered, to ensure the continuity of each topic in the topic evolution model. An expectation-maximization (EM) algorithm is utilized to estimate the parameters of the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model outperforms Popular Event Tracking (PET) and the Dynamic Topic Model (DTM) in generalization performance, topic interpretability, topic content evolution and topic popularity evolution.
DOT National Transportation Integrated Search
2017-07-04
This paper presents a stochastic multi-agent optimization model that supports energy infrastructure planning under uncertainty. The interdependence between different decision entities in the system is captured in an energy supply chain network, w...
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk
2017-06-15
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
Cash transportation vehicle routing and scheduling under stochastic travel times
NASA Astrophysics Data System (ADS)
Yan, Shangyao; Wang, Sin-Siang; Chang, Yu-Hsuan
2014-03-01
Stochastic disturbances occurring in real-world operations could have a significant influence on the planned routing and scheduling results of cash transportation vehicles. In this study, a time-space network flow technique is utilized to construct a cash transportation vehicle routing and scheduling model incorporating stochastic travel times. In addition, to help security carriers to formulate more flexible routes and schedules, a concept of the similarity of time and space for vehicle routing and scheduling is incorporated into the model. The test results show that the model could be useful for security carriers in actual practice.
Gryphon: A Hybrid Agent-Based Modeling and Simulation Platform for Infectious Diseases
NASA Astrophysics Data System (ADS)
Yu, Bin; Wang, Jijun; McGowan, Michael; Vaidyanathan, Ganesh; Younger, Kristofer
In this paper we present Gryphon, a hybrid agent-based stochastic modeling and simulation platform developed for characterizing the geographic spread of infectious diseases and the effects of interventions. We study both local and non-local transmission dynamics of stochastic simulations based on the published parameters and data for SARS. The results suggest that the expected numbers of infections and the timeline of control strategies predicted by our stochastic model are in reasonably good agreement with previous studies. These preliminary results indicate that Gryphon is able to characterize other future infectious diseases and identify endangered regions in advance.
On some stochastic formulations and related statistical moments of pharmacokinetic models.
Matis, J H; Wehrly, T E; Metzler, C M
1983-02-01
This paper presents deterministic and stochastic models for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and mathematically elegant, avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new, meaningful response measures for pharmacokinetic analysis and give added insight into the system kinetics. The new analysis is illustrated with an example involving cholesterol turnover in rats.
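The "primarily matrix inversion" computation can be sketched directly: for a linear compartment system dx/dt = A x with a stable rate matrix A, entry (i, j) of -inv(A) is the expected time spent in compartment i by material starting in compartment j (the numerical rate values below are hypothetical):

```python
import numpy as np

# Mean-residence-time sketch for a linear compartment system dx/dt = A x.
# -inv(A) = integral of expm(A t) dt from 0 to infinity, so its (i, j) entry
# is the expected time in compartment i given a unit dose in compartment j.
A = np.array([[-0.5,  0.1],
              [ 0.2, -0.3]])     # 1/h, illustrative two-compartment system
MRT = -np.linalg.inv(A)
print(MRT)

# One-compartment sanity check: a single elimination rate k gives MRT = 1/k.
k = 0.25
assert np.isclose(-np.linalg.inv(np.array([[-k]]))[0, 0], 1.0 / k)
```

No eigenvalue decomposition and no complex arithmetic are needed, which is exactly the computational simplification the abstract emphasizes.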
Fu, Yu-Xuan; Kang, Yan-Mei; Xie, Yong
2018-01-01
The FitzHugh–Nagumo model is improved to consider the effect of electromagnetic induction on a single neuron. On the basis of investigating the Hopf bifurcation behavior of the improved model, stochastic resonance in the stochastic version is captured near the bifurcation point. It is revealed that a weak harmonic oscillation in the electromagnetic disturbance can be amplified through stochastic resonance, and that it is the cooperative effect of random transitions between the resting state and the large-amplitude oscillating state that produces the resonant phenomenon. Using the noise dependence of the mean of the interburst intervals, we suggest a biologically feasible clue for detecting weak signals by means of a neuron model with a subcritical Hopf bifurcation. These observations should be helpful in understanding the influence of the magnetic field on neural electrical activity. PMID:29467642
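The classic setting, noise plus a weak periodic drive in an excitable FitzHugh–Nagumo neuron, can be simulated with a short Euler–Maruyama sketch (this is the standard FHN model with invented parameters, not the paper's improved model with electromagnetic induction):

```python
import numpy as np

# Euler–Maruyama sketch of a noise-driven FitzHugh–Nagumo neuron with a weak
# periodic input: noise occasionally kicks the neuron over the excitation
# threshold, producing spikes whose timing is shaped by the weak drive.
rng = np.random.default_rng(1)
eps, a = 0.08, 1.05            # timescale separation, excitability parameter
A, f, sigma = 0.05, 0.2, 0.3   # weak drive amplitude/frequency, noise level
dt, n = 0.01, 200_000
v, w = -1.0, -0.6              # fast (voltage) and slow (recovery) variables
xi = rng.standard_normal(n)    # pre-drawn Brownian increments
spikes = 0
for i in range(n):
    t = i * dt
    dv = (v - v**3 / 3 - w + A * np.sin(2 * np.pi * f * t)) / eps
    v_new = v + dv * dt + sigma * np.sqrt(dt) * xi[i]
    w += (v + a) * dt
    if v <= 1.0 < v_new:       # upward crossing of a spike threshold
        spikes += 1
    v = v_new
print(spikes)
```

Sweeping sigma and recording a spike-timing statistic (such as the mean interburst interval used in the paper) is how the resonant, non-monotonic dependence on noise intensity is exposed.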
Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.
Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio
2010-03-26
Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
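The spirit of the comparison, a deterministic prediction versus a stochastic simulation that additionally reports variability, can be shown on a toy production/degradation process (this is not the auxin-transport model; rates are invented):

```python
import numpy as np

# Toy comparison: dN/dt = k - d*N. The ODE predicts a steady state N* = k/d;
# a Gillespie (SSA) simulation of the same kinetics gives the same mean plus
# the fluctuations (approximately Poisson: variance ≈ mean) that the
# deterministic model cannot report.
rng = np.random.default_rng(4)
k, d = 50.0, 0.5               # production and degradation rates (invented)
n, t, t_end = 0, 0.0, 2000.0
samples = []
while t < t_end:
    a1, a2 = k, d * n          # reaction propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)       # time to next reaction
    if rng.random() < a1 / a0:
        n += 1                 # production event
    else:
        n -= 1                 # degradation event
    if t > 200.0:              # discard the initial transient
        samples.append(n)
samples = np.array(samples)
print(k / d, samples.mean(), samples.var())
```

The ODE and the SSA agree on the mean, while only the SSA quantifies the spread, which mirrors the paper's conclusion that the approaches predict the same behaviour but yield distinct information.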
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, A. T.; Betzler, B. R.; Burke, T. P.
Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels, and (3) a stochastic mixture of the four types of fuel particles in which every kernel has its diameter sampled from a continuous probability density function. All of the discrete-diameter and continuous-diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete-diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact in which the locations (and, for the continuous-diameter cases, the kernel diameters) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small, but the runtime of the lattice-based fuel model was roughly 20 times shorter than that of the stochastic-based fuel model. (authors)
Analysis of the stochastic excitability in the flow chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bashkirtseva, Irina
2015-11-30
A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model. A phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.
Oscillatory Regulation of Hes1: Discrete Stochastic Delay Modelling and Simulation
Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai
2006-01-01
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking the delays associated with transcription and translation. We then show that this process may explain the observed sustained oscillations in the expression levels of hes1 mRNA and Hes1 protein more faithfully than continuous deterministic models. PMID:16965175
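The mechanics of a delayed stochastic simulation algorithm can be sketched on a minimal system (a single delayed production reaction plus first-order degradation; this is an illustration of the delay-SSA idea with invented rates, not the paper's Hes1 model):

```python
import heapq
import numpy as np

# Minimal delay-SSA sketch: production initiates at rate k but the product
# appears only after a fixed delay tau (mimicking transcription/translation
# time); degradation is first-order at rate d. Scheduled completions are kept
# in a time-ordered queue and executed when the clock reaches them.
rng = np.random.default_rng(5)
k, d, tau = 20.0, 0.2, 5.0       # invented rates and delay
t, t_end, n = 0.0, 500.0, 0
pending = []                      # completion times of in-flight productions
trace = []
while t < t_end:
    a0 = k + d * n
    dt = rng.exponential(1.0 / a0)
    if pending and pending[0] <= t + dt:
        # a delayed production completes before the next reaction would fire
        t = heapq.heappop(pending)
        n += 1
    else:
        t += dt
        if rng.random() < k / a0:
            heapq.heappush(pending, t + tau)  # schedule delayed completion
        else:
            n -= 1                            # degradation (n > 0 here)
    trace.append((t, n))
print(trace[-1])
```

Because propensities are unchanged while the clock advances to a queued completion, the memoryless property of the exponential waiting times keeps this scheme statistically consistent; adding negative feedback of the product on its own initiation rate is what produces the sustained oscillations studied in the paper.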
A damage analysis for brittle materials using stochastic micro-structural information
NASA Astrophysics Data System (ADS)
Lin, Shih-Po; Chen, Jiun-Shyan; Liang, Shixue
2016-03-01
In this work, a micro-crack-informed stochastic damage analysis is performed to consider the failure of materials with stochastic microstructures. The derivation of the damage evolution law is based on the Helmholtz free energy equivalence between the cracked microstructure and the homogenized continuum. The damage model is constructed under the stochastic representative volume element (SRVE) framework. The characteristics of the SRVE used in the construction of the stochastic damage model have been investigated based on the principle of minimum potential energy. The mesh dependency issue has been addressed by introducing a scaling law into the damage evolution equation. The proposed methods are then validated through comparison between numerical simulations and experimental observations of a high strength concrete. It is observed that the standard deviation of porosity in the microstructures has a stronger effect on the damage states and peak stresses than on the Young's and shear moduli in the macro-scale responses.
Evolutionary stability concepts in a stochastic environment
NASA Astrophysics Data System (ADS)
Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi
2017-09-01
Over the past 30 years, evolutionary game theory and the concept of an evolutionarily stable strategy have not only been extensively developed and successfully applied to explain the evolution of animal behaviors, but have also been widely used in economics and the social sciences. Nonetheless, the stochastic dynamical properties of evolutionary games in randomly fluctuating environments are still unclear. In this study, we investigate conditions for stochastic local stability of fixation states and constant interior equilibria in a two-phenotype model with random payoffs following pairwise interactions. Based on this model, we develop the concepts of stochastic evolutionary stability (SES) and stochastic convergence stability (SCS). We show that the condition for a pure strategy to be SES and SCS is more stringent than in a constant environment, while the condition for a constant mixed strategy to be SES is less stringent than the condition to be SCS, which is in turn less stringent than the corresponding condition in a constant environment.
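Stochastic local stability of a fixation state can be checked numerically in a toy two-phenotype model with payoffs redrawn every generation (this is an illustrative recursion with invented payoff distributions, not the paper's analysis): a rare mutant's frequency shrinks when the expected log-ratio of payoffs near fixation is negative.

```python
import numpy as np

# Toy check: frequency x of phenotype A under discrete replicator-like
# dynamics with random payoffs redrawn each generation. Near x = 0, A grows
# or dies out roughly according to the sign of E[log(b/d)], where b and d
# are the payoffs to A and B against B.
rng = np.random.default_rng(6)
T = 20_000
x = 1e-6                                   # rare A-mutant frequency
for _ in range(T):
    a, b = rng.uniform(1, 2), rng.uniform(0.5, 1.5)   # payoffs to A (vs A, vs B)
    c, d = rng.uniform(1, 2), rng.uniform(1.0, 2.0)   # payoffs to B (vs A, vs B)
    wA = a * x + b * (1 - x)               # mean payoff to A
    wB = c * x + d * (1 - x)               # mean payoff to B
    x = x * wA / (x * wA + (1 - x) * wB)   # frequency update
print(x)  # stays near 0: B-fixation is stochastically locally stable here
```

With these distributions E[log(b/d)] < 0, so the geometric-mean growth rate of the mutant is negative even though single-generation payoffs to A sometimes exceed those to B, which is the kind of distinction the SES/SCS conditions formalize.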
NASA Technical Reports Server (NTRS)
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types.
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
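The first-order Gauss-Markov processes mentioned above (used for the solar radiation pressure scale coefficient and the y-bias acceleration) have a simple discrete-time recursion. The sketch below uses illustrative parameter values, not GEODYNII's; the random walk used for the tropospheric correction is the limiting case of infinite correlation time.

```python
import math
import random

def gauss_markov(sigma, tau, dt, n, seed=0):
    """First-order Gauss-Markov sequence with steady-state standard
    deviation `sigma` and correlation time `tau`, sampled every `dt`:
    x_{k+1} = phi * x_k + w_k, with phi = exp(-dt / tau)."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                # one-step correlation
    q = sigma * math.sqrt(1.0 - phi * phi)   # driving-noise std dev
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

As tau grows, phi approaches 1 and q * x accumulates without mean reversion, recovering random-walk behavior; as tau shrinks, the sequence approaches white noise.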
Modeling the operating history of turbine-generator units (Modelisation de l'historique d'operation de groupes turbine-alternateur)
NASA Astrophysics Data System (ADS)
Szczota, Mickael
Because of its ageing fleet, the utility's managers are increasingly in need of tools that can help them plan maintenance operations efficiently. Hydro-Quebec started a project that aims to forecast the degradation of its hydroelectric runners and to use that information to classify the generating units. That classification will help identify which generating units are most at risk of undergoing a major failure. Cracks linked to the fatigue phenomenon are a predominant degradation mode, and the loading sequences applied to the runner are a parameter impacting crack growth. The aim of this thesis is therefore to create a generator able to produce synthetic loading sequences that are statistically equivalent to the observed history. These simulated sequences will be used as input to a life assessment model. First, we describe how the generating units are operated by Hydro-Quebec and analyse the available data; the analysis shows that the data are non-stationary. We then review modeling and validation methods. In the following chapter, particular attention is given to a precise description of the validation and comparison procedure. We then present a comparison of three kinds of models: Discrete Time Markov Chains, Discrete Time Semi-Markov Chains and the Moving Block Bootstrap. For the first two models, we describe how to account for the non-stationarity. Finally, we show that the Markov Chain is not suited to our case, and that the Semi-Markov Chains perform better when they include the non-stationarity. The final choice between Semi-Markov Chains and the Moving Block Bootstrap depends on the user, but with a long-term vision we recommend the Semi-Markov Chains for their flexibility. Keywords: Stochastic models, Model validation, Reliability, Semi-Markov Chains, Markov Chains, Bootstrap
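Of the three model families compared in the thesis, the Moving Block Bootstrap is the simplest to sketch: resample randomly chosen overlapping blocks of the observed loading sequence, so that short-range dependence inside each block is preserved. This is a generic illustration, not the thesis implementation.

```python
import random

def moving_block_bootstrap(series, block_len, seed=42):
    """Resample a sequence by concatenating randomly chosen overlapping
    blocks; dependence within each block is copied verbatim."""
    rng = random.Random(seed)
    starts = range(len(series) - block_len + 1)
    out = []
    while len(out) < len(series):
        s = rng.choice(starts)                # random block start
        out.extend(series[s:s + block_len])   # copy one whole block
    return out[:len(series)]                  # trim to original length
```

The choice of `block_len` trades off dependence preservation (longer blocks) against resampling diversity (shorter blocks), which is one reason the thesis's validation procedure matters for the comparison.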
An Anatomically Constrained, Stochastic Model of Eye Movement Control in Reading
ERIC Educational Resources Information Center
McDonald, Scott A.; Carpenter, R. H. S.; Shillcock, Richard C.
2005-01-01
This article presents SERIF, a new model of eye movement control in reading that integrates an established stochastic model of saccade latencies (LATER; R. H. S. Carpenter, 1981) with a fundamental anatomical constraint on reading: the vertically split fovea and the initial projection of information in either visual field to the contralateral…
Comparison of holstein and jersey milk production with a new stochastic animal reproduction model
USDA-ARS?s Scientific Manuscript database
Holsteins and Jerseys are the most popular breeds in the US dairy industry. We built a stochastic, Monte Carlo life events simulation model in Python to test if Jersey cattle’s higher conception rate offsets their lower milk production. The model simulates individual cows and their life events such ...
between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Health safety nets can break cycles of poverty and disease: a stochastic ecological model.
Plucinski, Mateusz M; Ngonghala, Calistus N; Bonds, Matthew H
2011-12-07
The persistence of extreme poverty is increasingly attributed to dynamic interactions between biophysical processes and economics, though there remains a dearth of integrated theoretical frameworks that can inform policy. Here, we present a stochastic model of disease-driven poverty traps. Whereas deterministic models can result in poverty traps that can only be broken by substantial external changes to the initial conditions, in the stochastic model there is always some probability that a population will leave or enter a poverty trap. We show that a 'safety net', defined as an externally enforced minimum level of health or economic conditions, can guarantee ultimate escape from a poverty trap, even if the safety net is set within the basin of attraction of the poverty trap, and even if the safety net is only in the form of a public health measure. Whereas the deterministic model implies that small improvements in initial conditions near the poverty-trap equilibrium are futile, the stochastic model suggests that the impact of changes in the location of the safety net on the rate of development may be strongest near the poverty-trap equilibrium.
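The qualitative claim above, that a safety net placed even inside the basin of attraction of the poverty trap guarantees eventual escape under noise, can be illustrated with a toy one-dimensional simulation. The dynamics and all numbers below are illustrative stand-ins, not the paper's model.

```python
import math
import random

def simulate_welfare(floor, steps=10000, dt=0.1, sigma=0.15, seed=7):
    """Toy disease-poverty trap: a welfare index x drifts toward a 'poor'
    equilibrium (0.4) below the threshold 1.0 and toward a 'rich'
    equilibrium (1.5) above it, under persistent noise. The safety net
    is an externally enforced minimum, x >= floor."""
    rng = random.Random(seed)
    x, traj = 0.4, []
    for _ in range(steps):
        target = 1.5 if x > 1.0 else 0.4
        x += -(x - target) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = max(x, floor)     # safety net: enforce the minimum condition
        traj.append(x)
    return traj
```

A net at 0.95 sits inside the poor basin (below the threshold at 1.0), yet it leaves only a small gap for the noise to bridge, so escape to the rich basin eventually occurs; with no net, the state remains near the poor equilibrium for the whole run, mirroring the deterministic intuition that small improvements alone are futile.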
Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia
2017-11-23
The traditional replicator dynamics model and the corresponding concept of an evolutionarily stable strategy (ESS) only take into account whether the system can return to the equilibrium after being subjected to a small disturbance. In the real world, due to continuous noise, the ESS of a system may not be stochastically stable. In this paper, a model of a voluntary public goods game with punishment is studied in a stochastic situation. Unlike existing models, we describe the evolutionary process of strategies in the population as a generalized quasi-birth-and-death process, and we investigate the stochastic stable equilibrium (SSE) instead. By numerical experiments, we obtain all possible SSEs of the system for any combination of parameters, and investigate the influence of the parameters on the probability of the system selecting different equilibria. We find that in the stochastic situation the introduction of punishment and non-participation strategies can change the evolutionary dynamics of the system and the equilibrium of the game. There is a large range of parameters over which the system selects the cooperative states as its SSE with high probability. This result provides insight and a control method for the evolution of cooperation in the public goods game in stochastic situations.
Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture
NASA Astrophysics Data System (ADS)
Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong
The stochastic counterpart to the deterministic description of continuous fermentation by ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is appropriate for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulations are carried out using the Euler-Maruyama method.
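The Euler-Maruyama scheme used for such simulations discretizes an SDE dX = f(X)dt + g(X)dW as X_{k+1} = X_k + f(X_k)Δt + g(X_k)√Δt·Z_k with Z_k ~ N(0,1). Below is a generic scalar sketch; the logistic drift and the coefficients are illustrative stand-ins for one state of the fermentation model, not the paper's equations.

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, steps, seed=0):
    """Scalar Euler-Maruyama integrator:
    X_{k+1} = X_k + f(X_k)*dt + g(X_k)*sqrt(dt)*Z_k, Z_k ~ N(0, 1)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x = x + drift(x) * dt + diffusion(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative example: stochastic logistic growth of biomass with small
# multiplicative noise (coefficients are assumptions, not fitted values).
path = euler_maruyama(0.1,
                      drift=lambda x: 0.5 * x * (1.0 - x),
                      diffusion=lambda x: 0.05 * x,
                      dt=0.01, steps=2000)
```

The scheme has strong order 1/2, so the step size must be chosen small relative to the drift and diffusion scales for the moment bounds discussed in the abstract to be reflected in the simulation.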
Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.
Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H
2018-03-29
Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among other issues, modeling approaches based on differential equations entangle kinetics and causality, fail when complexity increases, separate knowledge from models, and assume that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS, a rule-based framework for producing multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales ranging from gene regulation up to game theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma, a common approach from the social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems.
Gene regulation and noise reduction by coupling of stochastic processes
NASA Astrophysics Data System (ADS)
Ramos, Alexandre F.; Hornos, José Eduardo M.; Reinitz, John
2015-02-01
Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eukaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
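For reference, the Fano factor (variance-to-mean ratio) that distinguishes the sub-Poissonian regime can be checked against the Poisson baseline with a short Gillespie simulation of a single uncoupled source; the paper's two-state coupled model is not reproduced here, and the rates below are illustrative.

```python
import random

def gillespie_birth_death(k, d, t_max, burn_in=10.0, seed=3):
    """Gillespie SSA for an immigration-death process: synthesis at
    constant rate k, degradation at rate d*n. Its stationary state is
    Poisson(k/d), i.e. the Fano-factor-1 baseline that the coupled
    sub-Poissonian regime improves upon."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    while t < t_max:
        total = k + d * n                  # sum of reaction propensities
        t += rng.expovariate(total)        # waiting time to next reaction
        if rng.random() < k / total:
            n += 1                         # synthesis fires
        else:
            n -= 1                         # degradation fires
        if t > burn_in:
            samples.append(n)
    return samples

def fano(xs):
    """Variance-to-mean ratio; equal to 1 for a Poisson distribution."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) * m)
```

In the paper's model, the negative covariance between gene state and protein number pushes this ratio below 1 when state switching is fast compared with synthesis and degradation.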
Hybrid stochastic simulations of intracellular reaction-diffusion systems.
Kalantzis, Georgios
2009-06-01
With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. Therefore the stochastic behavior of cellular pathways is preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.
Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C
2006-02-28
We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
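The effective free energy Φ extracted in such equation-free analyses can be illustrated on a toy bistable system: run a long stochastic simulation and read Φ(x) ≈ −log p̂(x) off the empirical histogram. The double-well drift below is an assumed stand-in, not the genetic toggle switch model of the paper.

```python
import math
import random

def sample_double_well(steps=200000, dt=0.01, sigma=0.7, seed=1):
    """Long Euler-Maruyama run in the double-well potential
    V(x) = x^4/4 - x^2/2 (an assumed toy bistable system)."""
    rng = random.Random(seed)
    x, xs = 1.0, []
    for _ in range(steps):
        x += -(x ** 3 - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def effective_free_energy(xs, bins=40, lo=-2.0, hi=2.0):
    """Estimate Phi(x) = -log p(x) from a histogram of the samples."""
    counts = [0] * bins
    for x in xs:
        if lo < x < hi:
            counts[int((x - lo) / (hi - lo) * bins)] += 1
    total = sum(counts)
    return [-math.log(c / total) if c else float("inf") for c in counts]
```

The equation-free strategy in the abstract avoids this brute-force long run by stitching together many short, appropriately initialized bursts, but the object being estimated is the same Φ (together with a state-dependent diffusion coefficient).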
Numerical simulations in stochastic mechanics
NASA Astrophysics Data System (ADS)
McClendon, Marvin; Rabitz, Herschel
1988-05-01
The stochastic differential equation of Nelson's stochastic mechanics is integrated numerically for several simple quantum systems. The calculations are performed with use of Helfand and Greenside's method and pseudorandom numbers. The resulting trajectories are analyzed both individually and collectively to yield insight into momentum, uncertainty principles, interference, tunneling, quantum chaos, and common models of diatomic molecules from the stochastic quantization point of view. In addition to confirming Shucker's momentum theorem, these simulations illustrate, within the context of stochastic mechanics, the position-momentum and time-energy uncertainty relations, the two-slit diffraction pattern, exponential decay of an unstable system, and the greater degree of anticorrelation in a valence-bond model as compared with a molecular-orbital model of H2. The attempt to find exponential divergence of initially nearby trajectories, potentially useful as a criterion for quantum chaos, in a periodically forced oscillator is inconclusive. A way of computing excited energies from the ground-state motion is presented. In all of these studies the use of particle trajectories allows a more insightful interpretation of physical phenomena than is possible within traditional wave mechanics.
The threshold of a stochastic SIQS epidemic model
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Huo, Hai-Feng; Xiang, Hong; Shi, Qihong; Li, Dungang
2017-09-01
In this paper, we present the threshold of a stochastic SIQS epidemic model which determines the extinction and persistence of the disease. Furthermore, we find that noise can suppress the disease outbreak. Numerical simulations are also carried out to confirm the analytical results.
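The noise-suppression effect stated above can be demonstrated on a reduced SIS-type caricature (not the paper's SIQS model, and with assumed rates): for dI = I(β(1−I) − μ)dt + σI dW, large σ drives the infected fraction to extinction even when the deterministic basic reproduction number β/μ exceeds 1. Integrating log I keeps the state positive.

```python
import math
import random

def simulate_sis(beta, mu, sigma, i0=0.5, t_max=200.0, dt=0.01, seed=5):
    """Euler scheme on log I for the Ito SDE
    dI = I*(beta*(1 - I) - mu)*dt + sigma*I*dW, i.e.
    d(log I) = (beta*(1 - I) - mu - sigma^2/2)*dt + sigma*dW."""
    rng = random.Random(seed)
    log_i = math.log(i0)
    for _ in range(int(t_max / dt)):
        i = math.exp(log_i)
        log_i += (beta * (1.0 - i) - mu - 0.5 * sigma ** 2) * dt \
                 + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return math.exp(log_i)
```

The σ²/2 correction in the log-space drift is exactly the mechanism behind a noise-dependent threshold: large enough noise makes the drift negative near I = 0 regardless of β/μ.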
NASA Astrophysics Data System (ADS)
Witte, L.
2014-06-01
To support landing-site assessments for HDA-capable flight systems and to facilitate trade studies between potential HDA architectures and the resulting probability of safe landing, a stochastic landing dispersion model has been developed.
Stochastic description of quantum Brownian dynamics
NASA Astrophysics Data System (ADS)
Yan, Yun-An; Shao, Jiushu
2016-08-01
Classical Brownian motion has been well investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution; rather, they are correlated through the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, and is not efficient for long-time dynamics because the statistical errors grow very quickly. For linear and other specific systems, the stochastic Liouville equation is a good starting point for deriving the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means of simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime.
Challenging problems such as the dynamical description of quantum phase transition (localization) and the numerical stability of the trace-conserving, nonlinear stochastic Liouville equation are outlined.
Jamaludin, Ummu K; M Suhaimi, Fatanah; Abdul Razak, Normy Norfiza; Md Ralib, Azrina; Mat Nor, Mohd Basri; Pretty, Christopher G; Humaidi, Luqman
2018-08-01
Blood glucose variability is common in healthcare and is not related to or influenced by diabetes mellitus. To minimise the risk of high blood glucose in critically ill patients, the Stochastic Targeted Blood Glucose Control Protocol is used in intensive care units at hospitals worldwide. Thus, this study focuses on the performance of the stochastic modelling protocol in comparison to the current blood glucose management protocols in the Malaysian intensive care unit, and assesses the effectiveness of the Stochastic Targeted Blood Glucose Control Protocol when applied to a cohort of diabetic patients. Retrospective data from 210 patients were obtained from a general hospital in Malaysia from May 2014 until June 2015, of whom 123 patients had comorbid diabetes mellitus. The comparison of blood glucose control protocol performance between both protocol simulations was conducted through blood glucose fitted with physiological modelling on top of virtual trial simulations, mean calculation of simulation error and several graphical comparisons using stochastic modelling. The Stochastic Targeted Blood Glucose Control Protocol reduces hyperglycaemia by 16% in the diabetic and 9% in the nondiabetic cohort. The protocol helps to control the blood glucose level within the targeted range of 4.0-10.0 mmol/L for 71.8% of the time in the diabetic and 82.7% in the nondiabetic cohort, besides reducing the treatment time by up to 71 h for the 123 diabetic patients and 39 h for the 87 nondiabetic patients. It is concluded that the Stochastic Targeted Blood Glucose Control Protocol is more effective in reducing hyperglycaemia than the current blood glucose management protocol in the Malaysian intensive care unit. Hence, the current Malaysian intensive care unit protocols need to be modified to enhance their performance, especially in the integration of insulin and nutrition intervention to decrease hyperglycaemia incidences.
Improvement of the Stochastic Targeted Blood Glucose Control Protocol in terms of the u_en model is also needed to adapt it to the diabetic cohort.
Stochastic nonlinear dynamics pattern formation and growth models
Yaroslavsky, Leonid P
2007-01-01
Stochastic evolutionary growth and pattern formation models are treated in a unified way in terms of algorithmic models of nonlinear dynamic systems with feedback built from a standard set of signal processing units. A number of concrete models are described and illustrated by numerous examples of artificially generated patterns that closely imitate a wide variety of patterns found in nature. PMID:17908341
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
Stochastic Effects in Computational Biology of Space Radiation Cancer Risk
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Pluth, Janis; Harper, Jane; O'Neill, Peter
2007-01-01
Estimating risk from space radiation poses important questions on the radiobiology of protons and heavy ions. We are considering systems biology models to study radiation-induced repair foci (RIRF) at low doses, in which less than one track on average traverses the cell, and the subsequent DNA damage processing and signal transduction events. Computational approaches for describing protein regulatory networks coupled to DNA and oxidative damage sites include systems of differential equations, stochastic equations, and Monte Carlo simulations. We review recent developments in the mathematical description of protein regulatory networks and possible approaches to radiation effects simulation. These include robustness, which states that regulatory networks maintain their functions against external and internal perturbations due to compensating properties of redundancy and molecular feedback controls, and modularity, which leads to general theorems for considering molecules that interact through a regulatory mechanism without exchange of matter, leading to a block diagonal reduction of the connecting pathways. Identifying rate-limiting steps, robustness, and modularity in pathways perturbed by radiation damage are shown to be valid techniques for reducing large molecular systems to realistic computer simulations. Other techniques studied are the use of steady-state analysis, and the introduction of composite molecules or rate-constants to represent small collections of reactants. Applications of these techniques to describe spatial and temporal distributions of RIRF and cell populations following low dose irradiation are described.
NASA Astrophysics Data System (ADS)
Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou
2006-06-01
In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.
Dynamical crossover in a stochastic model of cell fate decision
NASA Astrophysics Data System (ADS)
Yamaguchi, Hiroki; Kawaguchi, Kyogo; Sagawa, Takahiro
2017-07-01
We study the asymptotic behaviors of stochastic cell fate decision between proliferation and differentiation. We propose a model of a self-replicating Langevin system, where cells choose their fate (i.e., proliferation or differentiation) depending on the local cell density. Based on this model, we propose a scenario for multicellular organisms to maintain the density of cells (i.e., homeostasis) through finite-ranged cell-cell interactions. Furthermore, we numerically show that the distribution of the number of descendant cells changes over time, thus unifying the two previously proposed models of homeostasis: the critical birth-death process and the voter model. Our results provide a general platform for the study of stochastic cell fate decision in terms of nonequilibrium statistical mechanics.
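The idea of density-dependent fate choice can be sketched with a minimal birth-death process. The rates below are hypothetical (the paper's model is a spatial self-replicating Langevin system): a randomly chosen cell proliferates with probability 1/(1 + n/k) and differentiates (leaves the proliferating pool) otherwise, so the division probability crosses 1/2 exactly at the target density n = k, producing homeostasis.

```python
import random

def simulate_homeostasis(n0=10, k=200, n_events=20000, seed=7):
    """Minimal birth-death sketch of density-dependent fate choice.
    Hypothetical rates, not the paper's spatial Langevin dynamics:
    division probability 1/(1 + n/k) falls below 1/2 once n > k,
    so the cell count fluctuates around the set point k."""
    rng = random.Random(seed)
    n = n0
    for _ in range(n_events):
        if n == 0:
            break
        if rng.random() < 1.0 / (1.0 + n / k):
            n += 1   # proliferation
        else:
            n -= 1   # differentiation: the cell exits the pool
    return n

final_n = simulate_homeostasis()
```

Starting far below k, the population grows almost deterministically, then settles into stationary fluctuations around the set point.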
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust.
Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
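The matrix-selection method described in the abstract can be sketched compactly: at each time step one whole observed projection matrix is drawn at random, applied to the stage vector, and the stochastic growth rate is estimated as the long-run mean log growth of total population size. The 2x2 "good year"/"bad year" matrices below are hypothetical stand-ins for the study's observed field matrices.

```python
import math
import random

def stochastic_log_growth(matrices, n_steps=5000, seed=42):
    """Matrix-selection estimate of the stochastic log growth rate:
    draw a whole observed matrix at random each step and average the
    log of the resulting total-population growth factor."""
    rng = random.Random(seed)
    v = [0.5, 0.5]                       # stage distribution, sums to 1
    total_log = 0.0
    for _ in range(n_steps):
        a = rng.choice(matrices)
        w = [a[0][0] * v[0] + a[0][1] * v[1],
             a[1][0] * v[0] + a[1][1] * v[1]]
        growth = w[0] + w[1]
        total_log += math.log(growth)
        v = [w[0] / growth, w[1] / growth]   # renormalize to avoid overflow
    return total_log / n_steps

# hypothetical "good year" / "bad year" projection matrices
good_year = [[0.5, 2.0], [0.3, 0.8]]
bad_year = [[0.3, 0.8], [0.1, 0.5]]
lam_s = math.exp(stochastic_log_growth([good_year, bad_year]))
```

Because whole matrices are drawn, within-year correlations among transition elements are preserved automatically, which is the method's key advantage over drawing each element independently from a fitted distribution.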
Hybrid deterministic/stochastic simulation of complex biochemical systems.
Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina
2017-11-21
In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented with efficient algorithms requiring the shortest possible execution time, to avoid excessively lengthening the time between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations.
Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crosses the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
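The Gillespie First Reaction Method named above is easy to sketch: for every reaction channel a tentative exponential waiting time is drawn, and the channel with the earliest time fires. The sketch below applies it to a hypothetical birth-death system (0 -> X at rate k_birth, X -> 0 at rate k_death * x), not to the MoBioS transcription example.

```python
import math
import random

def first_reaction_gillespie(t_end=2000.0, k_birth=10.0, k_death=1.0, seed=3):
    """Gillespie First Reaction Method on a birth-death system:
    draw a tentative firing time for every reaction channel and
    execute the earliest one. Returns the time-averaged copy number,
    which should approach k_birth / k_death at stationarity."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    t_weighted_sum = 0.0
    while t < t_end:
        rates = [k_birth, k_death * x]
        # tentative exponential waiting time for each channel
        taus = [(-math.log(rng.random()) / r if r > 0 else math.inf)
                for r in rates]
        j = taus.index(min(taus))
        dt = taus[j]
        t_weighted_sum += x * dt   # accumulate x weighted by holding time
        t += dt
        x += 1 if j == 0 else -1
    return t_weighted_sum / t

avg_x = first_reaction_gillespie()
```

For this system the stationary distribution is Poisson with mean k_birth / k_death, so the long-run time average should sit near 10.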
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased, overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory.
PMID:19816580
ERIC Educational Resources Information Center
Weiss, Charles J.
2017-01-01
An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…
Algorithm refinement for stochastic partial differential equations: II. Correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2005-08-10
We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
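The random-walk picture behind the train model can be illustrated at its simplest: an ensemble of independent unit-step walkers whose variance grows linearly in time, the discrete counterpart of the diffusion equation. This is only the basic ingredient; the paper's contribution is the flux-matched coupling of such a particle model to stochastic PDEs, which is not reproduced here.

```python
import random

def random_walk_variance(n_particles=5000, n_steps=200, seed=11):
    """Ensemble of independent +/-1 random walkers. For unit steps the
    ensemble variance should equal n_steps, i.e. grow linearly in time,
    matching the diffusive scaling Var ~ 2*D*t."""
    rng = random.Random(seed)
    positions = [0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            positions[i] += rng.choice((-1, 1))
    mean = sum(positions) / n_particles
    return sum((p - mean) ** 2 for p in positions) / n_particles

var = random_walk_variance()
```

With 200 unit steps the exact variance is 200; the ensemble estimate fluctuates around it with a relative error of a few percent.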
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul
2017-12-01
This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.
Ultrasound Image Despeckling Using Stochastic Distance-Based BM3D.
Santos, Cid A N; Martins, Diego L N; Mascarenhas, Nelson D A
2017-06-01
Ultrasound image despeckling is an important research field, since it can improve the interpretability of one of the main categories of medical imaging. Many techniques have been tried over the years for ultrasound despeckling, and more recently, a great deal of attention has been focused on patch-based methods, such as non-local means and block-matching collaborative filtering (BM3D). A common idea in these recent methods is the measure of distance between patches, originally proposed as the Euclidean distance, for filtering additive white Gaussian noise. In this paper, we derive new stochastic distances for the Fisher-Tippett distribution, based on well-known statistical divergences, and use them as patch distance measures in a modified version of the BM3D algorithm for despeckling log-compressed ultrasound images. State-of-the-art results in filtering simulated, synthetic, and real ultrasound images confirm the potential of the proposed approach.
Secondary-Phase Stochastics in Lithium-Ion Battery Electrodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mistry, Aashutosh N.; Smith, Kandler; Mukherjee, Partha P.
2018-01-12
Lithium-ion battery electrodes exhibit complex interplay among multiple electrochemically coupled transport processes, which rely on the underlying functionality and relative arrangement of different constituent phases. The electrochemically inactive solid phases (e.g., conductive additive and binder, referred to as the secondary phase), while beneficial for improved electronic conductivity and mechanical integrity, may partially block the electrochemically active sites and introduce additional transport resistances in the pore (electrolyte) phase. In this work, the role of mesoscale interactions and inherent stochasticity in porous electrodes is elucidated in the context of short-range (interface) and long-range (transport) characteristics. The electrode microstructure significantly affects kinetically and transport-limiting scenarios and thereby the cell performance. The secondary-phase morphology is also found to strongly influence the microstructure-transport-kinetics interactions. Apropos, strategies have been proposed for performance improvement via electrode microstructural modifications.
Normal forms for reduced stochastic climate models
Majda, Andrew J.; Franzke, Christian; Crommelin, Daan
2009-01-01
The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
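The scalar normal form with CAM noise and cubic damping can be simulated with a plain Euler-Maruyama scheme. The coefficients below are hypothetical placeholders, not values estimated from climate data; the point is only that the state-dependent (multiplicative) noise term combined with strong cubic damping yields a bounded, stationary trajectory.

```python
import math
import random

def simulate_cam_normal_form(n_steps=100000, dt=0.001, seed=5):
    """Euler-Maruyama sketch of a scalar normal form with Correlated
    Additive and Multiplicative (CAM) noise and cubic damping,
    schematically dx = (F + a*x - c*x**3) dt + (A - B*x) dW1 + s dW2.
    All coefficients are hypothetical placeholders."""
    rng = random.Random(seed)
    F, a, c = 0.1, -0.5, 1.0   # forcing, linear damping, cubic damping
    A, B, s = 0.5, 0.4, 0.3    # CAM amplitudes and independent additive noise
    x = 0.0
    sqrt_dt = math.sqrt(dt)
    xs = []
    for _ in range(n_steps):
        drift = F + a * x - c * x ** 3
        noise = (A - B * x) * rng.gauss(0.0, sqrt_dt) + s * rng.gauss(0.0, sqrt_dt)
        x += drift * dt + noise
        xs.append(x)
    return xs

traj = simulate_cam_normal_form()
max_abs = max(abs(v) for v in traj)
mean_x = sum(traj) / len(traj)
```

The cubic term supplies the strong nonlinear dissipation the abstract emphasizes: without it, the state-dependent noise alone could drive unbounded excursions.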
Vellela, Melissa; Qian, Hong
2009-10-06
Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because the biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balances are discussed and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of the global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching and the full time-dependent solution are calculated using MATLAB. A breakdown of previously known asymptotic approximations on small volume scales is observed through comparison with these and Monte Carlo results. Finally, the appendix illustrates how the diffusion approximation of the chemical master equation can fail to correctly represent the mesoscopically interesting steady-state behaviour of the system.
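The deterministic bistability at the heart of the analysis is a property of a cubic rate function. The sketch below uses hypothetical coefficients (not Schlögl's chemical rate constants) chosen so that the drift factors as -(x - 1)(x - 2)(x - 3): two stable fixed points at x = 1 and x = 3 separated by an unstable one at x = 2, the structure whose stochastic counterpart exhibits the global-stability exchange discussed in the abstract.

```python
def schlogl_drift(x):
    """Deterministic rate function of a Schlogl-type cubic system with
    hypothetical coefficients: f(x) = -(x - 1)(x - 2)(x - 3), so x = 1
    and x = 3 are stable fixed points and x = 2 is unstable."""
    return -(x - 1.0) * (x - 2.0) * (x - 3.0)

def count_fixed_points(lo=0.0, hi=4.0, n=1237):
    """Count sign changes of the drift on a grid; each sign change
    brackets one fixed point of the deterministic model. The odd grid
    size keeps grid points off the exact integer roots."""
    step = (hi - lo) / n
    changes = 0
    prev = schlogl_drift(lo)
    for i in range(1, n + 1):
        cur = schlogl_drift(lo + i * step)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

n_fixed_points = count_fixed_points()
```

Three sign changes confirm the bistable structure; in the stochastic (master-equation) version the two stable states become the two modes of the stationary distribution.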
Coupled stochastic soil moisture simulation-optimization model of deficit irrigation
NASA Astrophysics Data System (ADS)
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-07-01
This study presents an explicit stochastic optimization-simulation model of short-term deficit irrigation management for large-scale irrigation districts. The model, which is a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics of the crop-root zone considering interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root zone salt balance model, and (3) the FAO crop yield model. A Particle Swarm Optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with a significantly better computational performance compared to a Monte Carlo-based implicit stochastic optimization model. The model has been tested first by applying it in single-crop irrigation problems through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs have been assessed. Then, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwestern Iran. While the maximum net benefit has been obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability resulted when only about 60% of the water used in the SA policy is applied. The DAID, with respectively 33% of total cultivated area and 37% of total applied water, has produced only 14% of the total net benefit due to low-valued crops and adverse soil and shallow water table conditions.
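The search layer of such a framework, a Particle Swarm Optimization loop driving a simulation component, can be sketched in a few lines. Everything below is generic: standard inertia/cognitive/social weights and a toy one-dimensional surrogate objective (a quadratic standing in for negative net benefit as a function of applied-water fraction), none of it taken from the study's actual configuration.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, n_iters=100, seed=9):
    """Bare-bones one-dimensional Particle Swarm Optimization with
    standard inertia (w), cognitive (c1) and social (c2) weights."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to bounds
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i], val
    return gbest, gbest_val

# toy surrogate: "loss" minimized at an applied-water fraction of 0.6
best_x, best_val = pso_minimize(lambda x: (x - 0.6) ** 2, 0.0, 1.0)
```

In the real model each objective evaluation would run the stochastic soil-moisture simulation, which is why a derivative-free swarm search is attractive for this nonconvex program.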
On the Radio-emitting Particles of the Crab Nebula: Stochastic Acceleration Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, Shuta J.; Asano, Katsuaki, E-mail: sjtanaka@center.konan-u.ac.jp
The broadband emission of pulsar wind nebulae (PWNe) is well described by non-thermal emissions from accelerated electrons and positrons. However, the standard shock acceleration model of PWNe does not account for the hard spectrum in radio wavelengths. The origin of the radio-emitting particles is also important to determine the pair production efficiency in the pulsar magnetosphere. Here, we propose a possible resolution for the particle energy distribution in PWNe; the radio-emitting particles are not accelerated at the pulsar wind termination shock but are stochastically accelerated by turbulence inside PWNe. We upgrade our past one-zone spectral evolution model to include the energy diffusion, i.e., the stochastic acceleration, and apply the model to the Crab Nebula. A fairly simple form of the energy diffusion coefficient is assumed for this demonstrative study. For a particle injection to the stochastic acceleration process, we consider the continuous injection from the supernova ejecta or the impulsive injection associated with supernova explosion. The observed broadband spectrum and the decay of the radio flux are reproduced by tuning the amount of the particle injected to the stochastic acceleration process. The acceleration timescale and the duration of the acceleration are required to be a few decades and a few hundred years, respectively. Our results imply that some unveiled mechanisms, such as back reaction to the turbulence, are required to make the energies of stochastically and shock-accelerated particles comparable.
Stochastic Simulation Using @ Risk for Dairy Business Investment Decisions
USDA-ARS?s Scientific Manuscript database
A dynamic, stochastic, mechanistic simulation model of a dairy business was developed to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economic complexities of a dairy farm system within a partial budgeting fram...
ASSESSING RESIDENTIAL EXPOSURE USING THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION (SHEDS) MODEL
As part of a workshop sponsored by the Environmental Protection Agency's Office of Research and Development and Office of Pesticide Programs, the Aggregate Stochastic Human Exposure and Dose Simulation (SHEDS) Model was used to assess potential aggregate residential pesticide e...
A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...
Stochastic Human Exposure and Dose Simulation for Air Toxics
The Stochastic Human Exposure and Dose Simulation model for Air Toxics (SHEDS-AirToxics) is a multimedia, multipathway population-based exposure and dose model for air toxics developed by the US EPA's National Exposure Research Laboratory (NERL). SHEDS-AirToxics uses a probabili...
Backward-stochastic-differential-equation approach to modeling of gene expression
NASA Astrophysics Data System (ADS)
Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo
2017-03-01
In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
Collective behavior of coupled nonuniform stochastic oscillators
NASA Astrophysics Data System (ADS)
Assis, Vladimir R. V.; Copelli, Mauro
2012-02-01
Theoretical studies of synchronization are usually based on models of coupled phase oscillators which, when isolated, have constant angular frequency. Stochastic discrete versions of these uniform oscillators have also appeared in the literature, with equal transition rates among the states. Here we start from the model recently introduced by Wood et al. [K. Wood, C. Van den Broeck, R. Kawai, K. Lindenberg, Universality of synchrony: critical behavior in a discrete model of stochastic phase-coupled oscillators, Phys. Rev. Lett. 96 (2006) 145701], which has a collectively synchronized phase, and parametrically modify the phase-coupled oscillators to render them (stochastically) nonuniform. We show that, depending on the nonuniformity parameter 0≤α≤1, a mean field analysis predicts the occurrence of several phase transitions. In particular, the phase with collective oscillations is stable for the complete graph only for α≤α′<1. At α=1 the oscillators become excitable elements and the system has an absorbing state. In the excitable regime, no collective oscillations were found in the model.
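The uniform discrete-phase model cited above can be sketched directly. Each of N units occupies one of three cyclic states and advances 0 -> 1 -> 2 -> 0 with a rate that grows when the successor state is more populated, so strong coupling makes a majority cluster self-reinforcing. This is a schematic uniform-rate sketch of the Wood et al. starting point, not the paper's nonuniform (alpha-dependent) variant.

```python
import cmath
import math
import random

def order_parameter(coupling, n_units=200, n_steps=400, dt=0.1, g=1.0, seed=13):
    """Three-state cyclic units advancing with rate
    g * exp(coupling * (n_next - n_here) / N). Returns the
    Kuramoto-style order parameter |<exp(i*phase)>| after n_steps."""
    rng = random.Random(seed)
    states = [rng.randrange(3) for _ in range(n_units)]
    for _ in range(n_steps):
        counts = [states.count(s) for s in range(3)]
        nxt_states = list(states)
        for i, s in enumerate(states):
            nxt = (s + 1) % 3
            rate = g * math.exp(coupling * (counts[nxt] - counts[s]) / n_units)
            if rng.random() < min(1.0, rate * dt):
                nxt_states[i] = nxt
        states = nxt_states
    z = sum(cmath.exp(2j * math.pi * s / 3) for s in states) / n_units
    return abs(z)

r_strong = order_parameter(coupling=4.0)   # supercritical: units cluster
r_weak = order_parameter(coupling=0.0)     # uncoupled: phases stay scattered
```

With no coupling the order parameter stays at the O(1/sqrt(N)) level of finite-size fluctuations, while strong coupling produces a rotating majority cluster and a markedly larger value.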
Refahi, Yassin; Brunoud, Géraldine; Farcot, Etienne; Jean-Marie, Alain; Pulkkinen, Minna; Vernoux, Teva; Godin, Christophe
2016-01-01
Exploration of developmental mechanisms classically relies on analysis of pattern regularities. Whether disorders induced by biological noise may carry information on building principles of developmental systems is an important debated question. Here, we addressed theoretically this question using phyllotaxis, the geometric arrangement of plant aerial organs, as a model system. Phyllotaxis arises from reiterative organogenesis driven by lateral inhibitions at the shoot apex. Motivated by recurrent observations of disorders in phyllotaxis patterns, we revisited in depth the classical deterministic view of phyllotaxis. We developed a stochastic model of primordia initiation at the shoot apex, integrating locality and stochasticity in the patterning system. This stochastic model recapitulates phyllotactic patterns, both regular and irregular, and makes quantitative predictions on the nature of disorders arising from noise. We further show that disorders in phyllotaxis instruct us on the parameters governing phyllotaxis dynamics, thus that disorders can reveal biological watermarks of developmental systems. DOI: http://dx.doi.org/10.7554/eLife.14093.001 PMID:27380805
Extinction in neutrally stable stochastic Lotka-Volterra models
NASA Astrophysics Data System (ADS)
Dobrinevski, Alexander; Frey, Erwin
2012-05-01
Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
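A minimal Gillespie simulation of a neutrally stable predator-prey Lotka-Volterra system of the kind studied here shows how intrinsic noise destroys deterministic coexistence. The reaction set A → 2A, A + B → 2B, B → ∅ and all parameter values below are illustrative assumptions, not the authors' specific models.

```python
import numpy as np

def gillespie_lv(a0=100, b0=100, V=100.0, mu=1.0, lam=1.0, sig=1.0,
                 t_max=200.0, seed=1):
    """Gillespie simulation of the neutrally stable predator-prey system
    A -> 2A (rate mu*a), A+B -> 2B (rate lam*a*b/V), B -> 0 (rate sig*b).
    Runs until one species goes extinct or t_max is reached."""
    rng = np.random.default_rng(seed)
    a, b, t = a0, b0, 0.0
    while t < t_max and a > 0 and b > 0:
        rates = np.array([mu * a, lam * a * b / V, sig * b])
        total = rates.sum()
        t += rng.exponential(1.0 / total)         # time to next reaction
        r = rng.choice(3, p=rates / total)        # which reaction fires
        if r == 0:
            a += 1          # prey birth
        elif r == 1:
            a -= 1          # predation event
            b += 1
        else:
            b -= 1          # predator death
    return t, a, b

t_end, a, b = gillespie_lv()
```

Repeating this over many seeds and system sizes V exhibits the extinction time scaling (proportional to population size) discussed in the abstract.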
Pricing foreign equity option under stochastic volatility tempered stable Lévy processes
NASA Astrophysics Data System (ADS)
Gong, Xiaoli; Zhuang, Xintian
2017-10-01
Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper substitutes the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating a tempered stable process to a stochastic volatility process, capturing the leptokurtosis, fat-tailedness, and asymmetry of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.
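The key computational step, recovering a density from an analytical characteristic function by Fourier inversion, can be sketched generically. The snippet below inverts a stand-in Gaussian characteristic function (the TSSV characteristic functions derived in the paper would replace it) using plain numerical quadrature rather than the paper's FFT implementation; the function name and grid sizes are illustrative assumptions.

```python
import numpy as np

def pdf_from_cf(cf, x, u_max=50.0, n=4096):
    """Recover a density from its characteristic function by numerical
    Fourier inversion: f(x) = (1/(2*pi)) * integral of cf(u)*exp(-i*u*x) du."""
    u = np.linspace(-u_max, u_max, n)
    du = u[1] - u[0]
    integrand = cf(u)[None, :] * np.exp(-1j * np.outer(x, u))
    return np.real(integrand.sum(axis=1)) * du / (2.0 * np.pi)

# Stand-in: the standard normal CF exp(-u^2/2).
cf_normal = lambda u: np.exp(-0.5 * u**2)
f = pdf_from_cf(cf_normal, np.array([0.0, 1.0]))
```

For the Gaussian test case the recovered values match the known density 1/sqrt(2*pi) * exp(-x^2/2) to high accuracy, which is a useful sanity check before plugging in a heavier-tailed characteristic function.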
Stochastic Forecasting of Labor Supply and Population: An Integrated Model.
Fuchs, Johann; Söhnlein, Doris; Weber, Brigitte; Weber, Enzo
2018-01-01
This paper presents a stochastic model to forecast the German population and labor supply until 2060. Within a cohort-component approach, our population forecast applies principal components analysis to birth, mortality, emigration, and immigration rates, which allows for the reduction of dimensionality and accounts for correlation of the rates. Labor force participation rates are estimated by means of an econometric time series approach. All time series are forecast by stochastic simulation using the bootstrap method. As our model also distinguishes between German and foreign nationals, different developments in fertility, migration, and labor participation could be predicted. The results show that even rising birth rates and high levels of immigration cannot break the basic demographic trend in the long run. An important finding from an endogenous modeling of emigration rates is that high net migration in the long run will be difficult to achieve. Our stochastic perspective suggests therefore a high probability of substantially decreasing the labor supply in Germany.
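The bootstrap step can be illustrated on a single rate series. The sketch below resamples historical first differences and accumulates them into simulated paths; the toy series, horizon, and function name are illustrative assumptions, not the authors' German data or their full cohort-component and principal-components machinery.

```python
import numpy as np

def bootstrap_forecast(rates, horizon=40, n_sims=1000, seed=0):
    """Bootstrap forecast sketch: resample historical first differences
    of a demographic rate and accumulate them into simulated future paths.
    Returns the 10th/50th/90th percentile bands over the horizon."""
    rng = np.random.default_rng(seed)
    diffs = np.diff(rates)
    draws = rng.choice(diffs, size=(n_sims, horizon), replace=True)
    paths = rates[-1] + np.cumsum(draws, axis=1)
    return np.percentile(paths, [10, 50, 90], axis=0)

# Hypothetical short history of a fertility-like rate:
hist = np.array([1.50, 1.48, 1.47, 1.45, 1.46, 1.44, 1.43, 1.45, 1.42, 1.41])
bands = bootstrap_forecast(hist)
```

The spread between the outer bands is the stochastic-forecast analogue of the deterministic scenario ranges used in classical projections.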
Memristor-based neural networks: Synaptic versus neuronal stochasticity
NASA Astrophysics Data System (ADS)
Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil
2016-11-01
In neuromorphic circuits, stochasticity in the cortex can be mapped onto the synaptic or the neuronal components. The hardware emulation of such stochastic neural networks is currently being extensively studied using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses, and two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between the two approaches are the impact on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor best falls into place.
3D aquifer characterization using stochastic streamline calibration
NASA Astrophysics Data System (ADS)
Jang, Minchul
2007-03-01
In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Based on the streamline-based inversion of Agarwal and Blunt [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks that a streamline passes through are adjusted by a multiplicative factor chosen so that the flow and transport properties of the streamline are matched. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work of Agarwal et al., limited as it was to large modifications along streamlines for matching production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model. 
These results suggest that the proposed approach can be applied to the construction of aquifer models and the forecasting of the aquifer performance measures of interest.
The global dynamics for a stochastic SIS epidemic model with isolation
NASA Astrophysics Data System (ADS)
Chen, Yiliang; Wen, Buyu; Teng, Zhidong
2018-02-01
In this paper, we investigate the dynamical behavior of a stochastic SIS epidemic model with isolation, which is an important strategy for the elimination of infectious diseases. It is assumed that the stochastic effects manifest themselves mainly as fluctuations in the transmission coefficient, the death rate, and the proportional coefficient of the isolation of infectives. It is shown that the extinction and persistence in the mean of the model are determined by a threshold value R_0^S: if R_0^S < 1, the disease dies out with probability one, and if R_0^S > 1, the disease is stochastically persistent in the mean with probability one. Furthermore, the existence of a unique stationary distribution is discussed, and sufficient conditions are established using the Lyapunov function method. Finally, some numerical examples are carried out to confirm the analytical results.
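A minimal Euler-Maruyama sketch of an SIS model with a fluctuating transmission coefficient (one of the three perturbed parameters above) illustrates the kind of system being analyzed. The specific drift and diffusion terms, parameter values, and clamping below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def stochastic_sis(beta=0.4, gamma=0.2, sigma=0.05, N=1.0, I0=0.01,
                   T=200.0, dt=0.01, seed=0):
    """Euler-Maruyama sketch of an SIS model with stochastic transmission:
    dI = [beta*S*I - gamma*I] dt + sigma*S*I dW, with S = N - I."""
    rng = np.random.default_rng(seed)
    steps = int(round(T / dt))
    I = np.empty(steps + 1)
    I[0] = I0
    for k in range(steps):
        S = N - I[k]
        dW = rng.normal(0.0, np.sqrt(dt))
        I[k + 1] = (I[k] + (beta * S * I[k] - gamma * I[k]) * dt
                    + sigma * S * I[k] * dW)
        I[k + 1] = min(max(I[k + 1], 0.0), N)   # keep the fraction in [0, N]
    return I

path = stochastic_sis()
```

With beta well above gamma the infection typically fluctuates around an endemic level; pushing the noise intensity sigma up drives the threshold quantity down and eventually forces extinction, mirroring the R_0^S dichotomy.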
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
A Stochastic Detection and Retrieval Model for the Study of Metacognition
ERIC Educational Resources Information Center
Jang, Yoonhee; Wallsten, Thomas S.; Huber, David E.
2012-01-01
We present a signal detection-like model termed the stochastic detection and retrieval model (SDRM) for use in studying metacognition. Focusing on paradigms that relate retrieval (e.g., recall or recognition) and confidence judgments, the SDRM measures (1) variance in the retrieval process, (2) variance in the confidence process, (3) the extent to…
Birch regeneration: a stochastic model
William B. Leak
1968-01-01
The regeneration of a clearcutting with paper or yellow birch is expressed as an elementary stochastic (probabilistic) model that is computationally similar to an absorbing Markov chain. In the general case, the model contains 29 states, beginning with the development of a flower (ament) and terminating with the abortion of a flower or seed, or the development of an...
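The absorbing-chain computation behind such a model can be illustrated on a toy version with two transient and two absorbing states: the fundamental matrix N = (I - Q)^(-1) gives expected visits to transient states, and B = N R the absorption probabilities. The 4-state chain and its probabilities below are invented for illustration and are not Leak's 29-state model.

```python
import numpy as np

# Toy absorbing chain sketch: states "flower" and "seed" are transient;
# the absorbing outcomes are "established" and "aborted".
# Q holds transient-to-transient probabilities, R transient-to-absorbing.
Q = np.array([[0.0, 0.6],     # flower -> seed with probability 0.6
              [0.0, 0.0]])
R = np.array([[0.0, 0.4],     # flower -> (established, aborted)
              [0.5, 0.5]])    # seed   -> (established, aborted)

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits
B = N @ R                         # absorption probabilities per start state
```

Here a flower ends up as an established seedling with probability 0.6 * 0.5 = 0.3 and aborts with probability 0.7; the same linear algebra scales directly to a 29-state chain.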
The Total Field in Collective Bremsstrahlung in a Nonequilibrium Relativistic Beam-Plasma System.
1983-09-01
... the nonradiative part of the field of the particles and a stochastic part corresponding to the bremsstrahlung radiation field. The relations
The Nucleation Rate of Single O2 Nanobubbles at Pt Nanoelectrodes.
Soto, Álvaro Moreno; German, Sean R; Ren, Hang; van der Meer, Devaraj; Lohse, Detlef; Edwards, Martin A; White, Henry S
2018-06-13
Nanobubble nucleation is a problem that affects efficiency in electrocatalytic reactions, since those bubbles can block the surface of the catalytic sites. In this article, we focus on the nucleation rate of O2 nanobubbles resulting from the electrooxidation of H2O2 at Pt disk nanoelectrodes. Bubbles form almost instantaneously when a critical peak current, i_p^nb, is applied, but for lower currents, bubble nucleation is a stochastic process in which the nucleation (induction) time, t_ind, dramatically decreases as the applied current approaches i_p^nb, a consequence of the local supersaturation level, ζ, increasing at high currents. Here, by applying different currents below i_p^nb, nanobubbles take some time to nucleate and block the surface of the Pt electrode at which the reaction occurs, providing a means to measure the stochastic t_ind. We study in detail the different conditions in which nanobubbles appear, concluding that the electrode surface needs to be preconditioned to achieve reproducible results. We also measure the activation energy for bubble nucleation, E_a, which varies in the range from 6 to 30 kT, and, assuming a spherical-cap-shaped nanobubble nucleus, we determine the footprint diameter L = 8-15 nm, the contact angle to the electrode surface θ = 135-155°, and the number of O2 molecules contained in the nucleus (50 to 900 molecules).
Butler, Troy; Graham, L.; Estep, D.; ...
2015-02-03
The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia
NASA Astrophysics Data System (ADS)
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2014-12-01
This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for the Dengue and Chikungunya diseases in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimation. The estimation model is then demonstrated using Dengue and Chikungunya data from Malaysia. The viruses of these diseases are transmitted by the same types of female vector mosquito, Aedes aegypti and Aedes albopictus. Finally, the findings of the analysis of relative risk estimation for both the Dengue and Chikungunya diseases are presented, compared, and displayed in graphs and maps. The distributions in the risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence. These maps can be used as a tool in prevention and control strategies for both diseases.
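A deterministic Euler sketch of the coupled SIR (host) / SI (vector) backbone conveys the transmission structure underlying the stochastic model. The force-of-infection terms and all parameter values below are illustrative assumptions, and the paper's Bayesian relative-risk layer is omitted.

```python
def sir_si(beta_h=0.3, beta_v=0.3, gamma=0.1, mu_v=0.1,
           Nh=1000.0, Nv=5000.0, T=100.0, dt=0.01):
    """Euler sketch of a coupled SIR host / SI vector model.
    Hosts are infected by infectious vectors, vectors by infectious hosts;
    vector births (mu_v*Nv) balance vector deaths, keeping Nv constant."""
    Sh, Ih, Rh = Nh - 1.0, 1.0, 0.0
    Sv, Iv = Nv - 10.0, 10.0
    for _ in range(int(round(T / dt))):
        new_h = beta_h * Sh * Iv / Nh   # hosts infected by vectors
        new_v = beta_v * Sv * Ih / Nh   # vectors infected by hosts
        dSh = -new_h
        dIh = new_h - gamma * Ih
        dRh = gamma * Ih
        dSv = mu_v * Nv - new_v - mu_v * Sv
        dIv = new_v - mu_v * Iv
        Sh, Ih, Rh = Sh + dSh * dt, Ih + dIh * dt, Rh + dRh * dt
        Sv, Iv = Sv + dSv * dt, Iv + dIv * dt
    return Sh, Ih, Rh, Sv, Iv

Sh, Ih, Rh, Sv, Iv = sir_si()
```

Both population totals are conserved by construction, which is a convenient check when extending the sketch with stochastic terms.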
Stochastic Models of Plant Diversity: Application to White Sands Missile Range
2000-02-01
decades and its models have been well developed. These models fall into the categories: dynamic models and stochastic models. In their book, Modeling... Gelb 1974), and dendroclimatology (Visser and Molenaar 1988). Optimal Estimation: An optimal estimator is a computational algorithm that... Evaluation, M.B. Usher, ed., Chapman and Hall, London. Visser, H., and J. Molenaar. 1990. "Estimating Trends in Tree-ring Data." For. Sci. 36(1): 87
Fractional noise destroys or induces a stochastic bifurcation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qigui, E-mail: qgyang@scut.edu.cn; Zeng, Caibin, E-mail: zeng.cb@mail.scut.edu.cn; School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640
2013-12-15
Little seems to be known about the stochastic bifurcation phenomena of non-Markovian systems. Our intention in this paper is to understand such complex dynamics by a simple system, namely, the Black-Scholes model driven by a mixed fractional Brownian motion. The most interesting finding is that the multiplicative fractional noise not only destroys but also induces a stochastic bifurcation under some suitable conditions. So it opens a possible way to explore the theory of stochastic bifurcation in the non-Markovian framework.
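The driving noise of the model above, a mixed fractional Brownian motion, can be generated straightforwardly for small sample sizes. The sketch below draws fractional Gaussian noise by Cholesky factorization of its autocovariance (an O(n^3) method chosen for brevity; circulant-embedding methods scale better) and adds an independent standard Brownian path; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def mixed_fbm(n=256, hurst=0.7, eps=1.0, dt=1.0 / 256, seed=0):
    """Sketch of a mixed fractional Brownian motion B(t) + eps*B_H(t):
    fractional Gaussian noise is drawn via Cholesky factorization of its
    autocovariance, cumulated, and added to a standard Brownian path."""
    rng = np.random.default_rng(seed)
    k = np.arange(n, dtype=float)
    # fGn autocovariance: 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H) * dt^2H
    gam = 0.5 * ((k + 1.0)**(2 * hurst) - 2.0 * k**(2 * hurst)
                 + np.abs(k - 1.0)**(2 * hurst)) * dt**(2 * hurst)
    lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = gam[lag]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # tiny jitter
    fgn = L @ rng.normal(size=n)                      # fractional noise
    bm = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))  # standard BM
    return bm + eps * np.cumsum(fgn)

path = mixed_fbm()
```

Feeding such paths into a Black-Scholes-type asset dynamic is the starting point for numerically exploring the bifurcation behavior discussed above.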
Evolutionary Game Theory in Growing Populations
NASA Astrophysics Data System (ADS)
Melbinger, Anna; Cremer, Jonas; Frey, Erwin
2010-10-01
Existing theoretical models of evolution focus on the relative fitness advantages of different mutants in a population while the dynamic behavior of the population size is mostly left unconsidered. We present here a generic stochastic model which combines the growth dynamics of the population and its internal evolution. Our model thereby accounts for the fact that both evolutionary and growth dynamics are based on individual reproduction events and hence are highly coupled and stochastic in nature. We exemplify our approach by studying the dilemma of cooperation in growing populations and show that genuinely stochastic events can ease the dilemma by leading to a transient but robust increase in cooperation.
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
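The core of the stochastic (point-source) method common to both programs can be sketched in a few lines: windowed white noise is shaped in the frequency domain to a target spectrum and inverted back to the time domain. The omega-squared shape and all parameters below are illustrative; SMSIM and EXSIM additionally apply path, site, and duration models, which is precisely where the differences discussed above arise.

```python
import numpy as np

def stochastic_point_source(f_c=1.0, dur=10.0, dt=0.01, seed=0):
    """Minimal sketch of the stochastic method: windowed Gaussian white
    noise is given the amplitude spectrum of an omega-squared source,
    A(f) ~ f^2 / (1 + (f/f_c)^2), while keeping the random phase."""
    rng = np.random.default_rng(seed)
    n = int(round(dur / dt))
    noise = rng.normal(size=n) * np.hanning(n)     # windowed white noise
    spec = np.fft.rfft(noise)
    freq = np.fft.rfftfreq(n, d=dt)
    target = freq**2 / (1.0 + (freq / f_c) ** 2)   # omega-squared shape
    mag = np.abs(spec)
    mag[mag == 0.0] = 1.0                          # avoid division by zero
    shaped = spec / mag * target                   # impose amplitude, keep phase
    return np.fft.irfft(shaped, n)

acc = stochastic_point_source()
```

A finite-fault code repeats this per subsource and sums the (normalized, delayed) contributions, which is the summation step whose normalization the article highlights.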
NASA Astrophysics Data System (ADS)
Watkins, Nicholas
2013-04-01
Stochastic modelling is of increasing importance, both specifically in climate science and more broadly across the whole of nonlinear geophysics. Traditionally, the noise components of such models would be spectrally white (delta-correlated) and Gaussian in amplitude, and their variance (first named by Fisher in 1918) would well characterise the likely size of fluctuations. Integration, for example in autoregressive models like AR(1), would redden a noise spectrum, while multiplication in turbulent cascades could greatly increase the range of fluctuation amplitudes, but such processes would still inherit aspects of their finite variance building blocks. In the 60s and 70s, however, Mandelbrot and others [see e.g. Watkins, GRL Frontiers, 2013] began to present evidence in nature for much stronger departures from Gaussianity (via very heavy tailed, infinite variance, distributions) and from white noise (through long range dependence (LRD) in time). He also observed intermittency, defined here as correlations between absolute magnitudes in some time series, in, for example, finance and turbulence. He proposed various models, including self-similar ones for heavy tails and LRD, and multifractal cascades for intermittency. In this presentation we compare contrasting types of model by looking at the "wild" events that they produce. The notion of a "wild" event can be made more precise in many ways, including by its duration in time, peak amplitude, and spatial extent. Our chosen measure will be the "burst", defined as the area of a time series above a fixed threshold. We will compare burst scaling in a self-similar, LRD, heavy tailed model (LFSM, e.g. Watkins et al, PRE, 2009] with our newer results for multifractal random walks [with M. Rypdal and O. Lovsletten], and for the heavy tailed extended version of the FARIMA (1,d,0) process, which combines long range dependence with the high frequency structure familiar from AR(1). 
We will also discuss the physical meaning of FARIMA and its potential as a modelling tool.
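The burst measure used above is simple to compute: sum the area of the series above a fixed threshold over each contiguous exceedance run. The function name and the toy series below are illustrative.

```python
import numpy as np

def burst_areas(x, threshold):
    """Return the burst sizes of a series: the area above `threshold`,
    accumulated separately over each contiguous exceedance run."""
    areas, current = [], 0.0
    for xi in x:
        if xi > threshold:
            current += xi - threshold    # still inside a burst
        elif current > 0.0:
            areas.append(current)        # burst just ended
            current = 0.0
    if current > 0.0:                    # series ended mid-burst
        areas.append(current)
    return areas

sizes = burst_areas(np.array([0.0, 2.0, 3.0, 0.5, 4.0, 0.0]), 1.0)
```

Comparing the tail of the resulting burst-size distribution across models (LFSM, multifractal walks, FARIMA) is the scaling comparison described in the abstract.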
The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.
2014-01-01
In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides an apparently novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
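The stochastic QSSA described above amounts to running a stochastic simulation algorithm whose propensities are nonelementary rate functions. The sketch below simulates a birth-death process whose production propensity is a Hill function, standing in for a reduced gene-expression model; the reaction scheme and parameter values are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def ssa_hill(v=20.0, K=10.0, n_hill=2.0, d=1.0, x0=0, T=50.0, seed=0):
    """Stochastic-QSSA-style SSA sketch: fast promoter binding is not
    simulated; production uses the nonelementary Hill propensity
    v*K^n/(K^n + x^n), and degradation is first order with rate d*x."""
    rng = np.random.default_rng(seed)
    x, t, samples = x0, 0.0, []
    while t < T:
        a_prod = v * K**n_hill / (K**n_hill + x**n_hill)  # Hill propensity
        a_deg = d * x
        a_tot = a_prod + a_deg
        t += rng.exponential(1.0 / a_tot)     # time to next reaction
        if rng.random() < a_prod / a_tot:
            x += 1                            # production
        else:
            x -= 1                            # degradation
        samples.append(x)
    return np.array(samples)

traj = ssa_hill()
```

The paper's point is that the accuracy of such a reduced simulation depends on the sensitivity of the Hill-type propensity to the slow species, not on timescale separation alone.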
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.
2014-11-01
In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. 
A second interest is to indicate how future developments can be safely built, which is also relevant for stochastic subgrid models for particle-laden flows in the context of Large Eddy Simulations.
A stochastic automata network for earthquake simulation and hazard estimation
NASA Astrophysics Data System (ADS)
Belubekian, Maya Ernest
1998-11-01
This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying the earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after a largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto. 
The sensitivity analysis of the model results to the variation in basic parameters shows that the maximum magnitude has the most significant impact on the hazard, especially for long forecast periods.
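As a toy illustration of the memoryless component of such a model, the sketch below simulates a homogeneous Poisson event catalog (exponential inter-event times) and uses the bootstrap to place confidence bounds on the estimated occurrence rate. The function names and all parameter values are illustrative assumptions, not taken from the dissertation.

```python
import random

def simulate_poisson_catalog(rate, duration, rng):
    """Event times of a homogeneous Poisson process (rate in events/year).
    Exponential inter-event times make the process memoryless, as assumed
    for small and moderate events."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > duration:
            return times
        times.append(t)

def bootstrap_rate_ci(times, n_boot, rng):
    """Bootstrap 95% confidence bounds on the occurrence rate by resampling
    the inter-event intervals with replacement."""
    gaps = [b - a for a, b in zip([0.0] + times[:-1], times)]
    rates = sorted(
        len(gaps) / sum(rng.choice(gaps) for _ in gaps)
        for _ in range(n_boot)
    )
    return rates[int(0.025 * n_boot)], rates[int(0.975 * n_boot)]
```

For example, a 200-year catalog at 0.5 events/year yields roughly 100 events, and the bootstrap interval typically brackets the generating rate.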
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minier, Jean-Pierre, E-mail: Jean-Pierre.Minier@edf.fr; Chibbaro, Sergio; Pope, Stephen B.
In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful for clarifying issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models, and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult, and the present choice is to require that the single-phase situation be well retrieved in the fluid-limit case, that elementary predictive abilities be respected, and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation, since the advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions.
A second interest is to indicate how future developments can be safely built, which is also relevant for stochastic subgrid models for particle-laden flows in the context of Large Eddy Simulations.
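As a concrete, deliberately minimal instance of the model class under discussion, the sketch below integrates the simplest one-component Langevin velocity model for an ensemble of fluid particles in stationary homogeneous turbulence with zero mean flow. The names and parameter values are assumptions for illustration only.

```python
import math
import random

def simulate_langevin(n_particles, n_steps, dt, t_l, sigma_u, rng):
    """Euler-Maruyama integration of the simplest Langevin velocity model,
        dU = -(U / T_L) dt + sqrt(2 sigma_u^2 / T_L) dW,
    for an ensemble of independent fluid particles; positions follow dx = U dt."""
    diff = math.sqrt(2.0 * sigma_u ** 2 / t_l)
    u = [rng.gauss(0.0, sigma_u) for _ in range(n_particles)]  # stationary start
    x = [0.0] * n_particles
    s = math.sqrt(dt)
    for _ in range(n_steps):
        for i in range(n_particles):
            u[i] += -(u[i] / t_l) * dt + diff * rng.gauss(0.0, s)
            x[i] += u[i] * dt
    return x, u
```

A standard consistency check, in the spirit of the criteria above, is that the stationary velocity variance remains close to sigma_u squared.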
Modeling animal movements using stochastic differential equations
Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie
2004-01-01
We describe the use of bivariate stochastic differential equations (SDEs) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...
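A minimal sketch of the kind of bivariate SDE the authors describe, here an Ornstein-Uhlenbeck-type model with attraction toward a home-range centre, integrated by Euler-Maruyama. The drift form and all numbers are illustrative assumptions, not the vector fields estimated from the elk data.

```python
import math
import random

def simulate_movement(start, center, beta, sigma, dt, n_steps, rng):
    """Euler-Maruyama integration of a bivariate SDE with OU-type drift,
        dX = beta * (center - X) dt + sigma dW,
    i.e., random motion with attraction toward a home-range centre."""
    x, y = start
    path = [(x, y)]
    s = math.sqrt(dt)
    for _ in range(n_steps):
        x += beta * (center[0] - x) * dt + sigma * rng.gauss(0.0, s)
        y += beta * (center[1] - y) * dt + sigma * rng.gauss(0.0, s)
        path.append((x, y))
    return path
```

With moderate attraction, a simulated animal released far from the centre drifts toward it and then fluctuates around it.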
Nonlinear Stochastic Markov Processes and Modeling Uncertainty in Populations
2011-07-06
NASA Astrophysics Data System (ADS)
Wang, Xiao-Tian; Wu, Min; Zhou, Ze-Min; Jing, Wei-Shu
2012-02-01
This paper deals with the problem of discrete-time option pricing using a fractional long-memory stochastic volatility model with transaction costs. Through the 'anchoring and adjustment' argument in a discrete-time setting, a European call option pricing formula is obtained.
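For orientation only, the sketch below prices a European call by Monte Carlo under a plain discrete-time geometric Brownian motion. It is a generic baseline, not the fractional long-memory stochastic volatility formula with transaction costs derived in the paper.

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_steps, n_paths, rng):
    """Monte Carlo price of a European call under discrete-time geometric
    Brownian motion: simulate terminal log-prices, average discounted payoffs."""
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        log_s = math.log(s0)
        for _ in range(n_steps):
            log_s += drift + vol * rng.gauss(0.0, 1.0)
        payoff_sum += max(math.exp(log_s) - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths
```

For an at-the-money one-year call with 20% volatility and a 5% rate, the estimate lands near the Black-Scholes value of about 10.45.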
NASA Astrophysics Data System (ADS)
Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter
2016-04-01
Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than those of thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to risks of power production deficits during droughts and to safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of the release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques is receiving growing attention, and a number of studies have shown its benefits. The present work describes one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region of Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City, located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, as well as deterministic and probabilistic forecasts with 50 ensemble members of the ECMWF, are used to force the MGB-IPH hydrological model and generate streamflow forecasts over a period of 2 years. The online optimization uses deterministic and multi-stage stochastic versions of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days.
In comparison, the use of actual forecasts with shorter lead times of up to 15 days demonstrates the benefit attainable under operational conditions. It appears that the use of stochastic optimization combined with ensemble forecasts leads to a significantly higher level of flood protection without compromising the HPP's energy production.
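A toy version of the scenario-averaged decision step: pick the release that minimizes the expected cost (negative energy benefit plus a flood penalty) over an ensemble of inflow scenarios. This is a single-stage caricature of the multi-stage stochastic model predictive control used in the study; all names and numbers are illustrative assumptions.

```python
def expected_cost(release, storage, inflows, s_max, flood_level, penalty):
    """Average cost over ensemble inflow scenarios: energy benefit grows with
    release, and downstream flow (release + forced spill) above the flood
    threshold is penalised."""
    total = 0.0
    for q in inflows:
        s_next = storage + q - release
        spill = max(s_next - s_max, 0.0)          # forced spill above capacity
        downstream = release + spill
        total += -release + penalty * max(downstream - flood_level, 0.0)
    return total / len(inflows)

def best_release(storage, inflows, s_max, flood_level, penalty, candidates):
    """Grid search for the release with the lowest expected cost."""
    return min(candidates, key=lambda r: expected_cost(
        r, storage, inflows, s_max, flood_level, penalty))
```

In this caricature, the optimizer pushes the release up to the flood threshold but no further, because each unit above it costs more in penalty than it earns in energy.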
Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations
2013-01-01
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit, in uniform convergence on compacts in probability, for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. Because these theorems are valid for processes taking values in Hilbert spaces, they are able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
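A numerical illustration of the Langevin-approximation idea: Euler-Maruyama integration of a noisy Wilson-Cowan-type rate equation on a ring of populations. This is a finite-dimensional caricature of the Hilbert-space-valued equation; the sigmoid gain, coupling, and noise level are assumptions for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_field(n_nodes, n_steps, dt, coupling, noise, rng):
    """Euler-Maruyama for a noisy Wilson-Cowan-type rate equation on a ring:
        dv_i = (-v_i + sigmoid(coupling * mean of neighbours)) dt + noise dW_i.
    The additive noise term plays the role of the Langevin correction."""
    v = [0.0] * n_nodes
    s = math.sqrt(dt)
    for _ in range(n_steps):
        v = [
            v[i]
            + (-v[i] + sigmoid(coupling * 0.5 *
                               (v[(i - 1) % n_nodes] + v[(i + 1) % n_nodes]))) * dt
            + noise * rng.gauss(0.0, s)
            for i in range(n_nodes)
        ]
    return v
```

With weak noise the field settles near the deterministic Wilson-Cowan fixed point and fluctuates around it, mirroring the mean-field-plus-Gaussian-correction picture of the limit theorems.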
Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J
2016-12-01
This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they cannot inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining the low computational cost and ease of implementation of current noise models, compared to other conductance-based and Markovian stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated by comparing the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
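To illustrate the core idea of a voltage-dependent current noise, the sketch below adds to a single-compartment leaky membrane a Gaussian noise current whose standard deviation follows a binomial-like channel variance m(1-m), so fluctuations peak where sodium channels gate most stochastically. The kinetics and constants are invented for illustration and are not the authors' algorithm.

```python
import math
import random

def m_inf(v):
    """Illustrative sigmoidal sodium activation (not the paper's kinetics)."""
    return 1.0 / (1.0 + math.exp(-(v + 40.0) / 6.0))

def step_membrane(v, dt, i_stim, rng,
                  g_leak=0.3, e_leak=-65.0, g_na=1.2, e_na=50.0, k_noise=0.4):
    """One Euler-Maruyama step of a leaky membrane with a voltage-dependent
    noise current: the noise sd scales with sqrt(m (1 - m)), the binomial-like
    variance of the open-channel fraction."""
    m = m_inf(v)
    sd = k_noise * math.sqrt(m * (1.0 - m))
    i_noise = rng.gauss(0.0, sd) / math.sqrt(dt)   # white-noise scaling
    dv = (-g_leak * (v - e_leak) - g_na * m * (v - e_na) + i_stim + i_noise) * dt
    return v + dv
```

Near half-activation (about -40 mV here) the noise current is several times stronger than at rest, which is the qualitative behaviour the voltage-dependent algorithm is designed to capture.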
Ozgul, Arpat; Armitage, Kenneth B; Blumstein, Daniel T; Vanvuren, Dirk H; Oli, Madan K
2006-01-01
1. The presence/absence of a species at a particular site is the simplest form of data that can be collected during ecological field studies. We used 13 years (1990-2002) of survey data to parameterize a stochastic patch occupancy model for a metapopulation of the yellow-bellied marmot in Colorado, and investigated the significance of particular patches and the influence of site quality, network characteristics and regional stochasticity on metapopulation persistence. 2. Persistence of the yellow-bellied marmot metapopulation was strongly dependent on the high-quality colony sites, and persistence probability was highly sensitive to small changes in the quality of these sites. 3. A relatively small number of colony sites was ultimately responsible for the regional persistence. However, lower-quality satellite sites also made a significant contribution to long-term metapopulation persistence, especially when regional stochasticity was high. 4. The northern network of the marmot metapopulation was more stable than the southern network, and the persistence of the southern network depended heavily on the northern network. 5. Although complex models of metapopulation dynamics may provide a more accurate description of metapopulation dynamics, such models are data-intensive. Our study, one of the very few applications of stochastic patch occupancy models to a mammalian species, suggests that such models can provide important insights into metapopulation dynamics using data that are easy to collect.
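A minimal stochastic patch occupancy simulation in the spirit of the model described above: each year, colonisation probability grows with the number of occupied patches and extinction risk falls with patch quality. The functional forms and parameters are illustrative assumptions, not the fitted marmot model.

```python
import random

def simulate_spom(quality, n_years, c0, e0, rng):
    """Stochastic patch occupancy model. Each year, an occupied patch survives
    with probability 1 - e0/quality (extinction risk falls with quality), and
    an empty patch is colonised with probability 1 - (1 - c0)**n_occupied."""
    occupied = [True] * len(quality)               # start fully occupied
    history = []
    for _ in range(n_years):
        n_occ = sum(occupied)
        new_state = []
        for occ, q in zip(occupied, quality):
            if occ:
                new_state.append(rng.random() > e0 / q)
            else:
                new_state.append(rng.random() < 1.0 - (1.0 - c0) ** n_occ)
        occupied = new_state
        history.append(sum(occupied))
    return history
```

With a few high-quality "colony" patches and several low-quality "satellites", the colony patches anchor occupancy while the satellites flicker on and off, echoing findings 2 and 3 above.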